A parallel and sensitive software tool for methylation analysis on multicore platforms.
Tárraga, Joaquín; Pérez, Mariano; Orduña, Juan M; Duato, José; Medina, Ignacio; Dopazo, Joaquín
2015-10-01
DNA methylation analysis suffers from very long processing times, as the advent of next-generation sequencers has shifted the bottleneck of genomic studies from the sequencers that obtain the DNA samples to the software that analyzes these samples. Existing software for methylation analysis does not scale efficiently with either the size of the dataset or the length of the reads to be analyzed. As sequencers are expected to provide ever longer reads in the near future, efficient and scalable methylation software should be developed. We present a new software tool, called HPG-Methyl, which efficiently maps bisulphite sequencing reads onto DNA and analyzes DNA methylation. The strategy used by this software is to leverage the speed of the Burrows-Wheeler Transform to map a large number of DNA fragments (reads) rapidly, together with the accuracy of the Smith-Waterman algorithm, which is employed exclusively for the most ambiguous and shortest reads. Experimental results on platforms with Intel multicore processors show that HPG-Methyl significantly outperforms state-of-the-art software such as Bismark, BS-Seeker or BSMAP in both execution time and sensitivity, particularly for long bisulphite reads. The software is supplied as C libraries and functions, together with instructions to compile and execute it. Available by sftp to anonymous@clariano.uv.es (password 'anonymous'). juan.orduna@uv.es or jdopazo@cipf.es. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
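The hybrid strategy above pairs a fast BWT index with Smith-Waterman alignment for the hardest reads. As an illustration of the latter, here is a generic textbook local-alignment scorer — not HPG-Methyl's implementation, and ignoring bisulphite-specific C-to-T handling:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between strings a and b
    using the classic Smith-Waterman dynamic programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

A perfect 4-base match scores 8 with these parameters; completely dissimilar strings score 0, which is why the method is well suited to scoring short, ambiguous reads.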
Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.
Kerepesi, Csaba; Grolmusz, Vince
2016-05-01
DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pair long) DNA reads, which are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software packages (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their ability to assign quantitative phylogenetic information to the data, describing the frequency with which microorganisms of the same taxa appear in the sample. The difficulty of the task arises from the fact that longer genomes produce more reads from the same organism than shorter genomes, so some software assigns higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature, but because of their complexity it is usually difficult to judge a metagenomic software package's resistance to this bias. Therefore, we have made a simple benchmark for evaluating "taxon counting" in a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads with an average length of 150 bp, and mixed the reads. Because of its simplicity, the benchmark is not intended to serve as a mock metagenome, but a package that fails on this simple task will surely fail on most real metagenomes. We applied the three packages to the benchmark; the ideal quantitative solution would assign the same proportion to the three bacterial taxa.
We found that AMPHORA2/AmphoraNet gave the most accurate results, while the other two packages underperformed: although they counted each short read quite reliably toward its respective taxon, the resulting frequencies exhibited the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
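The genome length bias described above can be sketched numerically: with equal copy numbers, raw read counts grow in proportion to genome length, and dividing counts by genome length recovers the true proportions. The counts and lengths below are hypothetical, and the correction is a generic one, not the method of any of the evaluated tools:

```python
def normalized_proportions(read_counts, genome_lengths):
    """Correct per-taxon read counts for genome length bias: when taxa
    are present in equal copy numbers, read counts scale with genome
    length, so dividing by length recovers the copy proportions."""
    corrected = {t: read_counts[t] / genome_lengths[t] for t in read_counts}
    total = sum(corrected.values())
    return {t: c / total for t, c in corrected.items()}

# Hypothetical example: equal copy numbers of 2 Mb, 4 Mb and 6 Mb genomes
# yield read counts roughly proportional to genome length.
counts = {"A": 2000, "B": 4000, "C": 6000}
lengths = {"A": 2_000_000, "B": 4_000_000, "C": 6_000_000}
```

With these inputs the corrected proportions come out equal (one third each), which is exactly the benchmark's ideal quantitative solution.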
Analytical Design of Evolvable Software for High-Assurance Computing
2001-02-14
… Mathematical expression for the Total Sum of Squares, which measures the variability that results when all values are treated as a combined sample coming from … primarily interested in background on software design and high-assurance computing, research in software architecture generation or evaluation … respectively. Those readers solely interested in the validation of a software design approach should at a minimum read Chapter 6 followed by Chapter …
NASA Technical Reports Server (NTRS)
Caplin, R. S.; Royer, E. R.
1978-01-01
Attempts are made to provide a total design of a Microbial Load Monitor (MLM) system flight engineering model. Activities include the assembly and testing of Sample Receiving and Card Loading Devices (SRCLDs), operator-related software, and testing of biological samples in the MLM. Progress was made in assembling SRCLDs that have minimal leaks and operate reliably in the Sample Loading System. Seven operator commands control various aspects of the MLM, such as calibrating and reading the incubating reading head, setting the clock and reading the time, and checking the status of the card. Testing of the instrument, both in hardware and biologically, was performed. Hardware testing concentrated on the SRCLDs; biological testing covered 66 clinical and seeded samples. Tentative thresholds were set and media performance was listed.
Development of a vision-based pH reading system
NASA Astrophysics Data System (ADS)
Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon
2015-10-01
pH paper is generally used for pH interpretation in the QC (quality control) process for radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may suffer errors due to limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and its related software. The proposed pH reading system is built around a vision algorithm based on an RGB library and is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera and a data acquisition (DAQ) board. To improve accuracy, we utilize the three primary colors of LEDs (light emitting diodes) in the reading device; three separate colors provide better wavelength discrimination than a single white LED. The second is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program stores the color codes of the pH paper in a database; in reading mode, the CCD camera captures the pH paper and the software compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K
2013-01-01
Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and in how each formula is applied. These variations complicate interpretation of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in the health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations, and recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, an individual readability formula's estimates varied by up to 6 reading grade levels.
The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied with information about word sample size, location of word sampling in the text, formatting, and method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
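The SMOG estimate recommended above is a simple closed-form formula over a sentence sample. The sketch below takes the polysyllable and sentence counts as given inputs; the error-prone step the article discusses (counting syllables and selecting the sample) is deliberately left out:

```python
import math

def smog_grade(polysyllable_count, sentence_count):
    """SMOG reading grade estimate (McLaughlin's formula):
    polysyllable_count is the number of words with 3+ syllables in the
    sampled sentences; sentence_count is the number of sentences
    sampled (30 in the classic method, which the 30/sentence_count
    factor scales other sample sizes to)."""
    return 3.1291 + 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count))
```

For example, 30 polysyllabic words in a 30-sentence sample gives a grade of about 8.84, and doubling the polysyllable count raises the estimated grade, as expected.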
Remediation of Deficits in Recognition of Facial Emotions in Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Weinger, Paige M.; Depue, Richard A.
2011-01-01
This study evaluated the efficacy of the Mind Reading interactive computer software to remediate emotion recognition deficits in children with autism spectrum disorders (ASD). Six unmedicated children with ASD and 11 unmedicated non-clinical control subjects participated in the study. The clinical sample used the software for five sessions. The…
Effects of text-to-speech software use on the reading proficiency of high school struggling readers.
Park, Hye Jin; Takahashi, Kiriko; Roberts, Kelly D; Delise, Danielle
2017-01-01
The literature highlights the benefits of text-to-speech (TTS) software when used as an assistive technology facilitating struggling readers' access to print. However, the effects of TTS software use on students' unassisted reading proficiency have remained relatively unexplored. The researchers utilized an experimental design to investigate whether 9th grade struggling readers who use TTS software to read course materials demonstrate significant improvements in unassisted reading performance. A total of 164 students of 30 teachers in Hawaii participated in the study. Analysis of covariance results indicated that the TTS intervention had a significant, positive effect on student reading vocabulary and reading comprehension after 10 weeks of TTS software use (an average of 582 minutes). The study has several limitations; however, it opens the discussion and highlights the need for further studies investigating TTS software as a viable reading intervention for adolescent struggling readers.
A Public Domain Software Library for Reading and Language Arts.
ERIC Educational Resources Information Center
Balajthy, Ernest
A three-year project carried out by the Microcomputers and Reading Committee of the New Jersey Reading Association involved the collection, improvement, and distribution of free microcomputer software (public domain programs) designed to deal with reading and writing skills. Acknowledging that this free software is not without limitations (poor…
ERIC Educational Resources Information Center
Rhein, Deborah; Alibrandi, Mary; Lyons, Mary; Sammons, Janice; Doyle, Luther
This bibliography, developed by Project RIMES (Reading Instructional Methods of Efficacy with Students) lists 80 software packages for teaching early reading and spelling to students at risk for reading and spelling failure. The software packages are presented alphabetically by title. Entries usually include a grade level indicator, a brief…
Atmospheric Science Data Center
2014-09-03
MI1AENG1 MISR Level 1A Engineering Data File Type 1: Reformatted Annotated Level 1A product for the camera engineering data, which represents indicators of sampled measurements.
ZOOM Lite: next-generation sequencing data mapping and visualization software
Zhang, Zefeng; Lin, Hao; Ma, Bin
2010-01-01
High-throughput next-generation sequencing technologies pose increasing demands on the efficiency, accuracy and usability of data analysis software. In this article, we present ZOOM Lite, software for efficient read mapping and result visualization. With a kernel capable of mapping tens of millions of Illumina or AB SOLiD sequencing reads efficiently and accurately, and an intuitive graphical user interface, ZOOM Lite integrates read mapping and result visualization into an easy-to-use pipeline on a desktop PC. The software handles both single-end and paired-end reads, and can output either the unique mapping result or the top N mapping results for each read. Additionally, it accepts a variety of input file formats and writes several commonly used result formats. The software is freely available at http://bioinfor.com/zoom/lite/. PMID:20530531
Chiang, Hsin-Yu; Liu, Chien-Hsiou
2011-01-01
Using assistive reading software may be a cost-effective way to increase opportunities for independent learning in students with learning disabilities. However, the effectiveness and perception of assistive reading software have seldom been explored in English-as-a-second-language students with learning disabilities. This research was designed to explore the perception and effect of assistive reading software among high school students with dyslexia (one subtype of learning disability) on their English reading and other school performance. The Kurzweil 3000 software was used as the intervention tool in this study. Fifteen students with learning disabilities were recruited and given instruction in the use of Kurzweil 3000. After 2 weeks, once they had become familiar with the software, interviews were conducted to determine the perceived and potential benefits of using it. The results suggested that Kurzweil 3000 had an immediate impact on students' English word recognition. The students reported that the software made reading, writing, spelling, and pronunciation easier, and that they comprehended more during their English class. Further study is needed to determine under which conditions particular hardware or software might be helpful for individuals with special learning needs.
Lee, Young Han
2012-01-01
The objectives are (1) to introduce an easy open-source macro program as connection software and (2) to illustrate its practical use in the radiologic reading environment by simulating the radiologic reading process. The simulation is a set of radiologic reading steps that carry out a practical task in the radiologic reading room. The principal processes are: (1) viewing radiologic images on the Picture Archiving and Communication System (PACS), (2) connecting to the HIS/EMR (Hospital Information System/Electronic Medical Record) system, (3) creating an automatic radiologic reporting system, and (4) recording and recalling information on interesting cases. This simulation environment was designed using an open-source macro program as connection software, and the simulation performed well on a Windows-based PACS workstation. Radiologists practiced the steps of the simulation comfortably in the macro-powered radiologic environment. The macro program automated several cumbersome manual steps in the radiologic reading process and successfully acted as connection software for the PACS software, EMR/HIS, spreadsheets, and various input devices in the radiologic reading environment. A user-friendly, efficient radiologic reading environment can be established by utilizing an open-source macro program as connection software. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Reading Diagnosis via the Microcomputer (The Printout).
ERIC Educational Resources Information Center
Weisberg, Renee; Balajthy, Ernest
1989-01-01
Examines and evaluates microcomputer software designed to assist in diagnosing students' reading abilities and making instructional decisions. Claims that existing software shows valuable potential when used sensibly and critically by trained reading clinicians. (MM)
Using pseudoalignment and base quality to accurately quantify microbial community composition
Novembre, John
2018-01-01
Pooled DNA from multiple unknown organisms arises in a variety of contexts, for example microbial samples from ecological or human health research. Determining the composition of pooled samples can be difficult, especially at the scale of modern sequencing data and reference databases. Here we propose a novel method for taxonomic profiling in pooled DNA that combines the speed and low-memory requirements of k-mer based pseudoalignment with a likelihood framework that uses base quality information to better resolve multiply mapped reads. We apply the method to the problem of classifying 16S rRNA reads using a reference database of known organisms, a common challenge in microbiome research. Using simulations, we show the method is accurate across a variety of read lengths, with different length reference sequences, at different sample depths, and when samples contain reads originating from organisms absent from the reference. We also assess performance in real 16S data, where we reanalyze previous genetic association data to show our method discovers a larger number of quantitative trait associations than other widely used methods. We implement our method in the software Karp, for k-mer based analysis of read pools, to provide a novel combination of speed and accuracy that is uniquely suited for enhancing discoveries in microbial studies. PMID:29659582
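Karp's likelihood framework is considerably richer than this (it incorporates base qualities), but the core idea of resolving multiply mapped reads can be sketched with a plain expectation-maximization over read-compatibility sets. This toy version is an illustration of the general technique, not Karp's algorithm:

```python
def em_abundances(compatibilities, n_refs, iters=100):
    """Estimate reference proportions from reads that pseudoalign to one
    or more references. compatibilities: one set of compatible reference
    indices per read."""
    theta = [1.0 / n_refs] * n_refs      # start from uniform proportions
    for _ in range(iters):
        counts = [0.0] * n_refs
        for refs in compatibilities:
            z = sum(theta[r] for r in refs)
            for r in refs:
                counts[r] += theta[r] / z    # E-step: fractional assignment
        total = sum(counts)
        theta = [c / total for c in counts]  # M-step: renormalize
    return theta
```

With two reads unique to reference 0, one unique to reference 1, and one ambiguous read, the EM fixed point splits the ambiguous read in proportion to the unique evidence, giving abundances 2/3 and 1/3.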
Holt, Kathryn E; Teo, Yik Y; Li, Heng; Nair, Satheesh; Dougan, Gordon; Wain, John; Parkhill, Julian
2009-08-15
Here, we present a method for estimating the frequencies of SNP alleles present within pooled samples of DNA using high-throughput short-read sequencing. The method was tested on real data from six strains of the highly monomorphic pathogen Salmonella Paratyphi A, sequenced individually and in a pool. A variety of read mapping and quality-weighting procedures were tested to determine the optimal parameters, which afforded ≥80% sensitivity of SNP detection and strong correlation with the true SNP frequency at a pool-wide read depth of 40×, declining only slightly at read depths of 20-40×. The method was implemented in Perl and relies on the open-source software Maq for read mapping and SNP calling. The Perl script is freely available from ftp://ftp.sanger.ac.uk/pub/pathogens/pools/.
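The exact quality-weighting procedure the authors optimized is described in the paper; as a generic illustration of the idea, one simple scheme weights each pooled base call by its probability of being correct, 1 − 10^(−Q/10) for Phred quality Q. The code below is that assumed scheme, not the published method:

```python
def allele_frequency(calls):
    """Estimate allele frequencies at one site from pooled base calls.
    calls: list of (base, phred_quality) tuples. Each call is weighted
    by its probability of being correct, 1 - 10**(-Q/10); this is one
    simple quality-weighting scheme chosen for illustration."""
    weights = {}
    for base, q in calls:
        w = 1.0 - 10 ** (-q / 10.0)
        weights[base] = weights.get(base, 0.0) + w
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}
```

When all calls share the same quality the estimate reduces to the raw call proportions; differing qualities down-weight the less reliable calls.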
Designing robust watermark barcodes for multiplex long-read sequencing.
Ezpeleta, Joaquín; Krsticevic, Flavia J; Bulacio, Pilar; Tapia, Elizabeth
2017-03-15
To attain acceptable sample misassignment rates, current approaches to multiplex single-molecule real-time sequencing require upstream quality improvement, which is obtained from multiple passes over the sequenced insert and significantly reduces the effective read length. In order to fully exploit the raw read length in multiplex applications, robust barcodes capable of dealing with the full single-pass error rates are needed. We present a method for designing sequencing barcodes that can withstand a large number of insertion, deletion and substitution errors and are suitable for use in multiplex single-molecule real-time sequencing. The manuscript focuses on the design of barcodes for full-length single-pass reads, impaired by challenging error rates on the order of 11%. The proposed barcodes can multiplex hundreds or thousands of samples while achieving sample misassignment probabilities as low as 10^-7 under the above conditions, and are designed to be compatible with chemical constraints imposed by the sequencing process. Software tools for constructing watermark barcode sets and demultiplexing barcoded reads, together with example sets of barcodes and synthetic barcoded reads, are freely available at www.cifasis-conicet.gov.ar/ezpeleta/NS-watermark . ezpeleta@cifasis-conicet.gov.ar. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
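Since the barcodes above must withstand insertions, deletions and substitutions, the natural error measure is edit (Levenshtein) distance. A naive nearest-barcode demultiplexer, far simpler than the watermark decoding the authors propose, can be sketched as:

```python
def levenshtein(a, b):
    """Edit distance between strings a and b, counting insertions,
    deletions and substitutions, via the row-by-row DP recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def demultiplex(read_prefix, barcodes, max_dist=2):
    """Assign a read's barcode region to the closest reference barcode;
    return None (unassigned) when no barcode is within max_dist edits."""
    best = min(barcodes, key=lambda bc: levenshtein(read_prefix, bc))
    return best if levenshtein(read_prefix, best) <= max_dist else None
```

Rejecting reads farther than `max_dist` from every barcode is what keeps the misassignment probability low at the cost of leaving some reads unassigned.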
NASA Technical Reports Server (NTRS)
Caplin, R. S.; Royer, E. R.
1977-01-01
Design analysis of a microbial load monitor system flight engineering model is presented. The card taper and media pump system was fabricated and checked out, as were the final two incubating reading heads, the sample receiving and card loading device assembly, related sterility testing, and software. Progress in these areas is summarized.
Student Reading Achievement on the Rise: Integration of Classworks Software with Technology
ERIC Educational Resources Information Center
Young, Janice L.
2014-01-01
The purpose of the study was to test the theoretical perspective that related Classworks (2008) technology to reading achievement of fourth grade students to determine if a significant difference existed in student reading achievement between the supplemental uses of Classworks software reading program to that of standard classroom instruction.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLoughlin, K.
2016-01-11
The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.
ERIC Educational Resources Information Center
Karemaker, Arjette; Pitchford, Nicola J.; O'Malley, Claire
2010-01-01
The effectiveness of a reading intervention using the whole-word multimedia software "Oxford Reading Tree (ORT) for Clicker" was compared to a reading intervention using traditional ORT Big Books. Developing literacy skills and attitudes towards learning to read were assessed in a group of 17 struggling beginner readers aged 5-6 years. Each child…
An Evaluation of the Merit Reading Software Program in the Calhoun County (WV) Middle/High School
ERIC Educational Resources Information Center
Jones, Jerry D.; Staats, William D.; Bowling, Noel; Bickel, Robert D.; Cunningham, Michael L.; Cadle, Connie
2005-01-01
We were asked by Merit Software to conduct a quasi-experimental research study to evaluate the effects of its reading software on middle school students. Because the No Child Left Behind Act emphasizes the importance of evidence-based interventions and has set improving students' reading comprehension as a goal, we agreed to take on this project.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bharoto; Suparno, Nadi; Putra, Edy Giri Rachman
In 2005, the main computer for the data acquisition and control system of the Small-Angle Neutron Scattering (SANS) BATAN Spectrometer (SMARTer) was replaced after it failed, halting operation of the spectrometer. Following this replacement, new software for data acquisition and control was developed in-house, written in the Visual Basic programming language. In the last two years, many developments have been made in both the hardware and the software to make experiments more effective and efficient. Recently, the previous motor controller card (an ISA card) was replaced with a programmable motor controller card (a PCI card) for driving one motor of the position sensitive detector (PSD), eight motors of four collimators, and six motors of six pinhole discs. The new control software allows all motors to be moved simultaneously, significantly reducing the time needed to set up the instrument before running an experiment. Alongside that development, new data acquisition software running under the MS Windows operating system was developed to drive a beam stopper in the X-Y directions, to read equipment status such as the positions of the collimators and PSD, to acquire neutron counts on the monitor and PSD detectors, and to manage 12 sample positions automatically. The software uses a timer object, set to a one-second interval, to read the equipment status via the computer's serial port (RS232C), and a general purpose interface board (GPIB) to read the total counts of each pixel of the PSD from histogram memory. Experimental results are displayed in real time in the main window, and the data are saved in a special format for further data reduction and analysis. The new software has been implemented and used for experiments in preset-count or preset-time mode for the absolute scattering intensity method.
NASA Astrophysics Data System (ADS)
Bharoto; Suparno, Nadi; Putra, Edy Giri Rachman
2015-04-01
Atmospheric Science Data Center
2018-06-14
CALIPSO Data Read Software: callable routines in Interactive Data Language (IDL) provide basic read access to CALIPSO science data files.
The social disutility of software ownership.
Douglas, David M
2011-09-01
Software ownership allows the owner to restrict the distribution of software and to prevent others from reading the software's source code and building upon it. However, free software is released to users under software licenses that give them the right to read the source code, modify it, reuse it, and distribute the software to others. Proponents of free software such as Richard M. Stallman and Eben Moglen argue that the social disutility of software ownership is a sufficient justification for prohibiting it. This social disutility includes the social instability of disregarding laws and agreements covering software use and distribution, inequality of software access, and the inability to help others by sharing software with them. Here I consider these and other social disutility claims against withholding specific software rights from users, in particular, the rights to read the source code, duplicate, distribute, modify, imitate, and reuse portions of the software within new programs. I find that generally while withholding these rights from software users does cause some degree of social disutility, only the rights to duplicate, modify and imitate cannot legitimately be denied to users on this basis. The social disutility of withholding the rights to distribute the software, read its source code and reuse portions of it in new programs is insufficient to prohibit software owners from denying them to users. A compromise between the software owner and user can minimise the social disutility of withholding these particular rights from users. However, the social disutility caused by software patents is sufficient for rejecting such patents as they restrict the methods of reducing social disutility possible with other forms of software ownership.
Guide star catalogue data retrieval software 2
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Malkov, O. YU.
1992-01-01
The Guide Star Catalog (GSC), the largest astronomical catalog to date, is widely used by the astronomical community for all sorts of applications, such as statistical studies of certain sky regions, searches for counterparts to observed phenomena, and generation of finder charts. Its format (2 CD-ROMs) requires minimal hardware and is well suited to all sorts of conditions, especially observations. Unfortunately, the actual GSC data are not easily accessible: they take the form of FITS tables, the coordinates of the objects are given in a single coordinate system (equinox 2000), and the included reading software is rudimentary at best. Thus even the generation of a simple finder chart is not a trivial undertaking. To solve this problem, at least for PC users, GUIDARES was created. GUIDARES is a user-friendly program that lets you look directly at the data in the GSC, either as a graphical sky map or as a text table. GUIDARES can read a sampling of GSC data from a given sky region, store this sampling in a text file, and display a graphical map of the sampled region in projected celestial coordinates (perfect for finder charts). GUIDARES supports rectangular and circular regions defined by coordinates in the equatorial, ecliptic (any equinox) or galactic systems.
ERIC Educational Resources Information Center
Coleman, Mari Beth; Killdare, Laura K.; Bell, Sherry Mee; Carter, Amanda M.
2014-01-01
The purpose of this study was to determine the impact of text-to-speech software on reading fluency and comprehension for four postsecondary students with below average reading fluency and comprehension including three students diagnosed with learning disabilities and concomitant conditions (e.g., attention deficit hyperactivity disorder, seizure…
Microbial community analysis using MEGAN.
Huson, Daniel H; Weber, Nico
2013-01-01
Metagenomics, the study of microbes in the environment using DNA sequencing, depends upon dedicated software tools for processing and analyzing very large sequencing datasets. One such tool is MEGAN (MEtaGenome ANalyzer), which can be used to interactively analyze and compare metagenomic and metatranscriptomic data, both taxonomically and functionally. To perform a taxonomic analysis, the program places the reads onto the NCBI taxonomy, while functional analysis is performed by mapping reads to the SEED, COG, and KEGG classifications. Samples can be compared taxonomically and functionally, using a wide range of different charting and visualization techniques. PCoA analysis and clustering methods allow high-level comparison of large numbers of samples. Different attributes of the samples can be captured and used within analysis. The program supports various input formats for loading data and can export analysis results in different text-based and graphical formats. The program is designed to work with very large samples containing many millions of reads. It is written in Java and installers for the three major computer operating systems are available from http://www-ab.informatik.uni-tuebingen.de. © 2013 Elsevier Inc. All rights reserved.
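MEGAN places reads onto the NCBI taxonomy using a lowest-common-ancestor style assignment. A minimal LCA over a toy parent-pointer taxonomy can be sketched as follows; the tree and names below are hypothetical, and this is the generic technique, not MEGAN's actual algorithm:

```python
def ancestors(taxon, parent):
    """Path from a taxon up to the root, following parent pointers."""
    path = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def lca(taxa, parent):
    """Lowest common ancestor of a set of taxa: the deepest node that
    lies on every taxon's path to the root."""
    paths = [ancestors(t, parent) for t in taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # Walking root-ward from any taxon, the first common node is the deepest.
    return next(n for n in paths[0] if n in common)

# Toy taxonomy (hypothetical): root -> Bacteria -> Enterobacteriaceae -> species
parent = {"E.coli": "Enterobacteriaceae", "Salmonella": "Enterobacteriaceae",
          "Enterobacteriaceae": "Bacteria", "Bacteria": "root"}
```

A read matching both species is placed on their family; a read matching a single species stays on that species, which is the conservative placement behavior an LCA-based binner exhibits.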
31 CFR 560.538 - Authorized transactions necessary and ordinarily incident to publishing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... publication in electronic format, the addition of embedded software necessary for reading, browsing, navigating, or searching the written publication; (ii) Exporting embedded software necessary for reading, browsing, navigating, or searching a written publication in electronic format, provided that the software...
31 CFR 560.538 - Authorized transactions necessary and ordinarily incident to publishing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... publication in electronic format, the addition of embedded software necessary for reading, browsing, navigating, or searching the written publication; (ii) Exporting embedded software necessary for reading, browsing, navigating, or searching a written publication in electronic format, provided that the software...
Microcomputers in the Curriculum: Micros and the First R.
ERIC Educational Resources Information Center
Balajthy, Ernest; Reinking, David
1985-01-01
Introduces the range of computer software currently available to aid in developing children's basic skills in reading, including programs for reading readiness, word recognition, vocabulary development, reading comprehension, and learning motivation. Additional information on software and computer use is provided in sidebars by Gwen Solomon and…
Teaching with Technology: Literature and Software.
ERIC Educational Resources Information Center
Allen, Denise
1994-01-01
Reviews five computer programs and compact disc-read only memory (CD-ROM) products designed to improve students' reading and problem-solving skills: (1) "Reading Realities" (Teacher Support Software); (2) "Kid Rhymes" (Creative Pursuits); (3) "First-Start Biographies" (Troll Associates); (4) "My Silly CD of ABCs" (Discis Classroom Editions); and…
TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data.
Clark, Lindsay V; Sacks, Erik J
2016-01-01
In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output are in CSV format so that they can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume hard drive space with intermediate files, and does not require programming skill to use.
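The barcode-splitting step that a script like barcode_splitter.py performs can be sketched as matching each read's leading bases against a barcode table and trimming them off. This is an illustrative simplification, not TagDigger's actual code; the barcode table and sample names are invented:

```python
# Barcodes and sample names here are hypothetical, for illustration only.
BARCODES = {"ACGT": "sample_A", "TGCA": "sample_B"}

def split_by_barcode(fastq_lines):
    """Yield (sample, trimmed_record) for each 4-line FASTQ record."""
    records = [fastq_lines[i:i + 4] for i in range(0, len(fastq_lines), 4)]
    for header, seq, plus, qual in records:
        for bc, sample in BARCODES.items():
            if seq.startswith(bc):
                # Trim the barcode from the sequence and quality string alike.
                yield sample, [header, seq[len(bc):], plus, qual[len(bc):]]
                break  # reads matching no barcode are silently dropped

reads = ["@r1", "ACGTGGTT", "+", "IIIIIIII",
         "@r2", "TGCAAACC", "+", "IIIIIIII"]
for sample, rec in split_by_barcode(reads):
    print(sample, rec[1])  # sample_A GGTT, then sample_B AACC
```

A production splitter would stream records from disk and tolerate sequencing errors in the barcode; the sketch keeps only the core matching-and-trimming logic.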
Unbiased Taxonomic Annotation of Metagenomic Samples
Fosso, Bruno; Pesole, Graziano; Rosselló, Francesc
2018-01-01
The classification of reads from a metagenomic sample using a reference taxonomy is usually based on first mapping the reads to the reference sequences and then classifying each read at a node under the lowest common ancestor of the candidate sequences in the reference taxonomy with the least classification error. However, this taxonomic annotation can be biased by an imbalanced taxonomy and also by the presence of multiple nodes in the taxonomy with the least classification error for a given read. In this article, we show that the Rand index is a better indicator of classification error than the often-used area under the receiver operating characteristic (ROC) curve and F-measure for both balanced and imbalanced reference taxonomies. We also address the second source of bias by reducing the taxonomic annotation problem for a whole metagenomic sample to a set cover problem, for which a logarithmic approximation can be obtained in linear time and an exact solution can be obtained by integer linear programming. Experimental results with a proof-of-concept implementation of the set cover approach to taxonomic annotation in a next release of the TANGO software show that the set cover approach further reduces ambiguity in the taxonomic annotation obtained with TANGO without distorting the relative abundance profile of the metagenomic sample. PMID:29028181
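The logarithmic approximation mentioned above is the standard greedy algorithm for set cover: repeatedly pick the candidate set covering the most still-uncovered elements. A minimal sketch over a toy reads-to-taxa mapping (the data and names are invented; this is not TANGO's interface):

```python
def greedy_set_cover(universe, sets):
    """Greedy log-factor approximation for set cover: at each step,
    choose the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name, best = max(sets.items(), key=lambda kv: len(kv[1] & uncovered))
        if not best & uncovered:
            break  # remaining elements cannot be covered by any set
        chosen.append(name)
        uncovered -= best
    return chosen

# Reads to be explained, and candidate taxa with the reads each explains.
reads = {"r1", "r2", "r3", "r4"}
taxa = {"taxA": {"r1", "r2", "r3"}, "taxB": {"r3", "r4"}, "taxC": {"r4"}}
print(greedy_set_cover(reads, taxa))  # ['taxA', 'taxB']
```

In the taxonomic-annotation setting, a small cover corresponds to explaining all ambiguous reads with as few taxa as possible, which is what reduces annotation ambiguity.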
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitledge, T.E.; Malloy, S.C.; Patton, C.J.
This manual was assembled for use as a guide for analyzing the nutrient content of seawater samples collected in the marine coastal zone of the Northeast United States and the Bering Sea. Some modifications (changes in dilution or sample pump tube sizes) may be necessary to achieve optimum measurements in very pronounced oligotrophic, eutrophic or brackish areas. Information is presented under the following section headings: theory and mechanics of automated analysis; continuous flow system description; operation of autoanalyzer system; cookbook of current nutrient methods; automated analyzer and data analysis software; computer interfacing and hardware modifications; and troubleshooting. The three appendixes are entitled: references and additional reading; manifold components and chemicals; and software listings. (JGB)
ABMapper: a suffix array-based tool for multi-location searching and splice-junction mapping.
Lou, Shao-Ke; Ni, Bing; Lo, Leung-Yau; Tsui, Stephen Kwok-Wing; Chan, Ting-Fung; Leung, Kwong-Sak
2011-02-01
Sequencing reads generated by RNA-sequencing (RNA-seq) must first be mapped back to the genome through alignment before they can be further analyzed. Current fast and memory-saving short-read mappers could give us a quick view of the transcriptome. However, they are neither designed for reads that span across splice junctions nor for repetitive reads, which can be mapped to multiple locations in the genome (multi-reads). Here, we describe a new software package: ABMapper, which is specifically designed for exploring all putative locations of reads that are mapped to splice junctions or repetitive in nature. The software is freely available at: http://abmapper.sourceforge.net/. The software is written in C++ and PERL. It runs on all major platforms and operating systems including Windows, Mac OS X and LINUX.
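The core operation behind multi-location searching of the kind ABMapper performs is looking up every occurrence of a short sequence via a suffix array: all suffixes sharing a prefix sit in one contiguous, binary-searchable block. A simplified Python sketch (ABMapper itself is written in C++ and Perl; this is not its implementation, and the naive O(n² log n) construction is for clarity only):

```python
import bisect

def build_suffix_array(text):
    """Indices of all suffixes of text, sorted lexicographically.
    Naive construction; production tools use linear-time algorithms."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_all(text, sa, pattern):
    """All start positions of pattern, via binary search on the array."""
    suffixes = [text[i:] for i in sa]  # materialized only for clarity
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\xff")  # '\xff' > A,C,G,T
    return sorted(sa[lo:hi])

genome = "ACGTACGTAC"
sa = build_suffix_array(genome)
print(find_all(genome, sa, "AC"))  # [0, 4, 8]
```

Returning the whole block at once is what makes reporting multi-reads (all mapping locations, not just the best one) cheap.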
NASA Astrophysics Data System (ADS)
Xuan, C.; Channell, J. E.
2009-12-01
With the increasing efficiency of acquiring paleomagnetic data from u-channel or discrete samples, large volumes of data can be accumulated within a short time period. It is often critical to visualize and process these data in “real time” as measurements proceed, so that the measurement plan can be adjusted accordingly. New MATLAB™ software packages, UPmag and DPmag, are introduced for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u-channel and discrete samples, respectively. UPmag comprises three MATLAB™ graphic user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer as well as to correct detected flux-jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays orthogonal projections and stepwise intensity plots for any position along the u-channel sample. UDIR can also display data on equal area stereographic projections and draw virtual geomagnetic poles (VGP) on various map projections. UINT provides a convenient platform to evaluate relative paleointensity estimates using the *.int files that can be exported from UVIEW. DPmag comprises two MATLAB™ graphic user interfaces: DDIR and DFISHER. DDIR reads output files from the discrete sample magnetometer measurement system. DDIR allows users to calculate component directions for each discrete sample, to plot the demagnetization data on orthogonal projections and equal area projections, as well as to show the stepwise intensity data. DFISHER reads the *.pca file exported from DDIR, calculates VGP and Fisher statistics for data from selected groups of samples, and plots the results on equal area projections and as VGPs on a range of map projections.
Data and plots from UPmag and DPmag can be exported to various file formats.
Potocki, Anna; Magnan, Annie; Ecalle, Jean
2015-01-01
Four groups of poor readers were identified among a population of students with learning disabilities attending a special class in secondary school: normal readers; specific poor decoders; specific poor comprehenders, and general poor readers (deficits in both decoding and comprehension). These students were then trained with a software program designed to encourage either their word decoding skills or their text comprehension skills. After 5 weeks of training, we observed that the students experiencing word reading deficits and trained with the decoding software improved primarily in the reading fluency task while those exhibiting comprehension deficits and trained with the comprehension software showed improved performance in listening and reading comprehension. But interestingly, the latter software also led to improved performance on the word recognition task. This result suggests that, for these students, training interventions focused at the text level and its comprehension might be more beneficial for reading in general (i.e., for the two components of reading) than word-level decoding trainings. Copyright © 2015 Elsevier Ltd. All rights reserved.
Development of an automated film-reading system for ballistic ranges
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, that of the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.
DIEGO: detection of differential alternative splicing using Aitchison's geometry.
Doose, Gero; Bernhart, Stephan H; Wagener, Rabea; Hoffmann, Steve
2018-03-15
Alternative splicing is a biological process of fundamental importance in most eukaryotes. It plays a pivotal role in cell differentiation and gene regulation and has been associated with a number of different diseases. The widespread availability of RNA-Sequencing capacities allows an ever closer investigation of differentially expressed isoforms. However, most tools for differential alternative splicing (DAS) analysis do not take split reads, i.e. the most direct evidence for a splice event, into account. Here, we present DIEGO, a compositional data analysis method able to detect DAS between two sets of RNA-Seq samples based on split reads. The python tool DIEGO works without isoform annotations and is fast enough to analyze large experiments while being robust and accurate. We provide python and perl parsers for common formats. The software is available at: www.bioinf.uni-leipzig.de/Software/DIEGO. steve@bioinf.uni-leipzig.de. Supplementary data are available at Bioinformatics online.
The Role of Reading Fluency in Children's Text Comprehension.
Álvarez-Cañizo, Marta; Suárez-Coalla, Paz; Cuetos, Fernando
2015-01-01
Understanding a written text requires some higher cognitive abilities that not all children have. Some children have these abilities, since they understand oral texts; however, they have difficulties with written texts, probably due to problems in reading fluency. The aim of this study was to determine which aspects of reading fluency are related to reading comprehension. Four expository texts, two written and two read by the evaluator, were presented to a sample of 103 primary school children (third and sixth grade). Each text was followed by four comprehension questions. From this sample we selected two groups of participants in each grade, 10 with good results in comprehension of oral and written texts, and 10 with good results in oral and poor results in written comprehension. These 40 subjects were asked to read aloud a new text while they were recorded. Using Praat software some prosodic parameters were measured, such as pausing and reading rate (number and duration of the pauses and utterances), pitch and intensity changes and duration in declarative, exclamatory, and interrogative sentences, and also errors and duration in words by frequency and stress. We compared the results of both groups with ANOVAs. The results showed that children with less reading comprehension made more inappropriate pauses, as well as more intersentential pauses before commas, than the other group, and made more mistakes in content words; significant differences were also found in the final declination of pitch in declarative sentences and in the F0 range in interrogative ones. These results confirm that reading comprehension problems in children are related to a lack of well-developed reading fluency.
ERIC Educational Resources Information Center
1996
This software product presents multi-level stories to capture the interest of children in grades two through five, while teaching them crucial reading comprehension skills. With stories touching on everything from the invention of velcro to the journey of food through the digestive system, the open-ended reading comprehension program is versatile…
Illuminator, a desktop program for mutation detection using short-read clonal sequencing.
Carr, Ian M; Morgan, Joanne E; Diggle, Christine P; Sheridan, Eamonn; Markham, Alexander F; Logan, Clare V; Inglehearn, Chris F; Taylor, Graham R; Bonthron, David T
2011-10-01
Current methods for sequencing clonal populations of DNA molecules yield several gigabases of data per day, typically comprising reads of < 100 nt. Such datasets permit widespread genome resequencing and transcriptome analysis or other quantitative tasks. However, this huge capacity can also be harnessed for the resequencing of smaller (gene-sized) target regions, through the simultaneous parallel analysis of multiple subjects, using sample "tagging" or "indexing". These methods promise to have a huge impact on diagnostic mutation analysis and candidate gene testing. Here we describe a software package developed for such studies, offering the ability to resolve pooled samples carrying barcode tags and to align reads to a reference sequence using a mutation-tolerant process. The program, Illuminator, can identify rare sequence variants, including insertions and deletions, and permits interactive data analysis on standard desktop computers. It facilitates the effective analysis of targeted clonal sequencer data without dedicated computational infrastructure or specialized training. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
1996
This software product presents multi-level stories to capture the interest of children in grades two through five, while teaching them crucial reading comprehension skills. With stories touching on everything from superstars to sports facts, the open-ended reading comprehension program is versatile and easy to use for educators and children alike.…
Assessment of replicate bias in 454 pyrosequencing and a multi-purpose read-filtering tool.
Jérôme, Mariette; Noirot, Céline; Klopp, Christophe
2011-05-26
The Roche 454 pyrosequencing platform is often considered the most versatile of the Next Generation Sequencing technology platforms, permitting the sequencing of large genomes, the analysis of variations or the study of transcriptomes. A recently reported bias leads to the production of multiple reads for a unique DNA fragment in a random manner within a run. This bias has a direct impact on the quality of the measurement of the representation of the fragments using the reads. Other cleaning steps are usually performed on the reads before assembly or alignment. PyroCleaner is a software module intended to clean 454 pyrosequencing reads in order to ease the assembly process. This program is free software and is distributed under the terms of the GNU General Public License as published by the Free Software Foundation. It implements several filters using criteria such as read duplication, length, complexity, base-pair quality and number of undetermined bases. It can also clean flowgram files (.sff) of paired-end sequences, generating a validated paired-end file on the one hand and a single-read file on the other. Read cleaning has always been an important step in sequence analysis. The PyroCleaner Python module is a Swiss Army knife dedicated to 454 read cleaning. It includes commonly used filters as well as specialised ones such as duplicated-read removal and paired-end read verification.
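Duplicate-read removal of the kind described above can be sketched as keeping one representative per shared sequence prefix, since 454 replicate reads start at the same fragment position. This is an illustrative simplification, not PyroCleaner's actual code, and the prefix length is an invented parameter:

```python
def remove_duplicates(reads, prefix_len=20):
    """Keep the first read seen for each sequence prefix.
    Replicate 454 reads begin at the same position, so a shared
    prefix is a common detection criterion (length is a tunable guess)."""
    seen = set()
    kept = []
    for name, seq in reads:
        key = seq[:prefix_len]
        if key not in seen:
            seen.add(key)
            kept.append((name, seq))
    return kept

reads = [("r1", "ACGTACGTAA"), ("r2", "ACGTACGTAA"), ("r3", "TTGGCCAATT")]
print(remove_duplicates(reads, prefix_len=8))
# [('r1', 'ACGTACGTAA'), ('r3', 'TTGGCCAATT')]
```

A real filter would also tolerate small length differences and sequencing errors among replicates; the sketch keeps only the exact-prefix core.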
Coval: Improving Alignment Quality and Variant Calling Accuracy for Next-Generation Sequencing Data
Kosugi, Shunichi; Natsume, Satoshi; Yoshida, Kentaro; MacLean, Daniel; Cano, Liliana; Kamoun, Sophien; Terauchi, Ryohei
2013-01-01
Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in ‘targeted’ alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/. PMID:24116042
Read Naturally[R]. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2010
2010-01-01
"Read Naturally"[R] is an elementary and middle school supplemental reading program designed to improve reading fluency using a combination of books, audiotapes, and computer software. The program has three main strategies: repeated reading of text for developing oral reading fluency, teacher modeling of story reading, and systematic…
ERIC Educational Resources Information Center
Grant, Amy; Wood, Eileen; Gottardo, Alexandra; Evans, Mary Ann; Phillips, Linda; Savage, Robert
2012-01-01
The current study developed a taxonomy of reading skills and compared this taxonomy with skills being trained in 30 commercially available software programs designed to teach emergent literacy or literacy-specific skills for children in preschool, kindergarten, and Grade 1. Outcomes suggest that, although some skills are being trained in a…
Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.
1996-01-01
This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 30 Sep. 1995. The report deals with the development and investigation of potential use of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. Then the data retrieval and processing process was automated using a program written in C which is able to read the telemetry data and produce plots of time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.
Normal and compound poisson approximations for pattern occurrences in NGS reads.
Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu
2012-06-01
Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. 
Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
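The flavor of the approximations discussed above can be illustrated by simulation: for a short pattern under an i.i.d. uniform background, the total occurrence count across reads concentrates around reads × per-read expectation, and for rare patterns is approximately Poisson. A toy sketch (this is not the authors' model, which also randomizes the background genome and handles overlapping pattern structure rigorously):

```python
import random

random.seed(1)

def expected_count(read_len, pattern_len, n_reads):
    """Mean occurrences if bases are i.i.d. uniform over A, C, G, T."""
    per_read = (read_len - pattern_len + 1) * 0.25 ** pattern_len
    return n_reads * per_read

def simulate(read_len, pattern, n_reads):
    """Count (possibly overlapping) occurrences across random reads."""
    total = 0
    for _ in range(n_reads):
        read = "".join(random.choice("ACGT") for _ in range(read_len))
        total += sum(read.startswith(pattern, i)
                     for i in range(read_len - len(pattern) + 1))
    return total

n_reads, read_len, pat = 2000, 100, "ACGTA"
print(expected_count(read_len, len(pat), n_reads))  # 187.5
print(simulate(read_len, pat, n_reads))             # close to that mean
```

The fluctuation of the simulated count around the mean is what the normal and compound Poisson approximations quantify, including the error bounds.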
ERIC Educational Resources Information Center
Gladhart, Marsha A.
1994-01-01
Reviews two computer software programs for children: (1) "Ready, Set, Read with Bananas and Jack" (Sierra Discovery Series), available for Windows or Macintosh systems, which uses animation and sound to teach early reading skills; and (2) "Word Connection" (Action Software), a Macintosh program that creates word puzzles. (MDM)
Teaching with Technology. Software That's Right for You.
ERIC Educational Resources Information Center
Allen, Denise
1995-01-01
Recommends software to help teachers plan curriculum in the areas of comprehensive language arts ("Cornerstone"); writing and information ("Keroppi Day Hopper"); creative writing and imagination ("Imagination Express"); reading ("Jo-Jo's Reading Circus"); math ("Careers in Math: From Architects to Astronauts") and nature ("Eyewitness"). Provides…
Read Naturally. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
"Read Naturally" is designed to improve reading fluency using a combination of books, audiotapes, and computer software. According to the developer's web site, this program has three main strategies: repeated reading of text for developing oral reading fluency, teacher modeling of story reading, and systematic monitoring of student…
Read Naturally. Revised. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
"Read Naturally" is designed to improve reading fluency using a combination of books, audio-tapes, and computer software. This program includes three main strategies: repeated reading of English text for oral reading fluency development, teacher modeling of story reading, and systematic monitoring of student progress by teachers.…
Read Naturally. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2006
2006-01-01
"Read Naturally" is designed to improve reading fluency using a combination of books, audio-tapes, and computer software. This program includes three main strategies: (1) repeated reading of English text for oral reading fluency development; (2) teacher modeling of story reading; and (3) systematic monitoring of student progress by…
Reading Computer Programs: Instructor’s Guide to Exercises
1990-08-01
activities that underlie effective writing, many of which are similar to those underlying software development. The module draws on related work in a number… Instructor’s Guide and Exercises Abstract: The ability to read and understand a computer program is a critical skill for the software developer, yet this… skill is seldom developed in any systematic way in the education or training of software professionals. These materials discuss the importance of
READ 180. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2009
2009-01-01
"READ 180" is a reading program designed for students in elementary through high school whose reading achievement is below the proficient level. The goal of "READ 180" is to address gaps in students' skills through the use of a computer program, literature, and direct instruction in reading skills. The software component of the…
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
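A bare-bones random-walk Metropolis-Hastings step of the kind such an engine wraps can be sketched as follows (this is the textbook algorithm, not LMC's actual interface, and the target here is just an illustrative standard normal):

```python
import math
import random

random.seed(0)

def metropolis(log_post, x0, n_steps, step=0.5):
    """Random-walk Metropolis-Hastings on a 1-D log posterior."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(x)).
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)  # on rejection, the current point is repeated
    return chain

# Target: standard normal. The sample mean should settle near 0
# and the sample variance near 1.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(chain) / len(chain)
print(round(mean, 2))
```

The point of an engine like LMC is that `log_post` can be an expensive call into third-party software, which is why parallelizing likelihood evaluations across nodes pays off.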
A four-alternative forced choice (4AFC) software for observer performance evaluation in radiology
NASA Astrophysics Data System (ADS)
Zhang, Guozhi; Cockmartin, Lesley; Bosmans, Hilde
2016-03-01
A four-alternative forced choice (4AFC) test is a psychophysical method that can be adopted for observer performance evaluation in radiological studies. While the concept of this method is well established, difficulties in handling large image data, performing unbiased sampling, and keeping track of the choices made by the observer have restricted its application in practice. In this work, we propose easy-to-use software that can help perform 4AFC tests with DICOM images. The software suits any experimental design that follows the 4AFC approach. It has a powerful image viewing system that closely simulates the clinical reading environment. The graphical interface allows the observer to adjust various viewing parameters and perform the selection with very simple operations. The sampling process involved in 4AFC, as well as the speed and accuracy of the choices made by the observer, is precisely monitored in the background and can be easily exported for test analysis. The software also has a defensive mechanism for data management and operation control that minimizes the possibility of mistakes by the user during the test. This software can greatly facilitate the use of the 4AFC approach in radiological observer studies and is expected to have widespread applicability.
ERIC Educational Resources Information Center
Shamir, Haya; Feehan, Kathryn; Yoder, Erik
2017-01-01
This study explores the efficacy of the Waterford Early Reading Program (ERP) for teaching early reading concepts to kindergarten and first-grade students. Students attended 3 elementary schools in Alabama. The treatment group used the software program, whereas the control group did not. Analyses revealed a significant treatment…
Computer Software: Does It Support a New View of Reading?
ERIC Educational Resources Information Center
Case, Carolyn J.
A study examined commercially available computer software to ascertain its degree of congruency with current methods of reading instruction (the Interactive model) at the first and second grade levels. A survey was conducted of public school educators in Connecticut and experts in the field to determine their level of satisfaction with available…
Accelerated Reader: Evaluation Report and Executive Summary
ERIC Educational Resources Information Center
Gorard, Stephen; Siddiqui, Nadia; See, Beng Huat
2015-01-01
Accelerated Reader (AR) is a whole-group reading management and monitoring program that aims to foster the habit of independent reading among primary and early secondary age pupils. The internet-based software initially screens pupils according to their reading levels, and suggests books that match their reading age and reading interest. Pupils…
Lange, Alissa A; Mulhern, Gerry; Wylie, Judith
2009-01-01
The present study investigated the effects of using an assistive software homophone tool on the assisted proofreading performance and unassisted basic skills of secondary-level students with reading difficulties. Students aged 13 to 15 years proofread passages for homophonic errors under three conditions: with the homophone tool, with homophones highlighted only, or with no help. The group using the homophone tool significantly outperformed the other two groups on assisted proofreading and outperformed the others on unassisted spelling, although not significantly. Remedial (unassisted) improvements in automaticity of word recognition, homophone proofreading, and basic reading were found over all groups. Results elucidate the differential contributions of each function of the homophone tool and suggest that with the proper training, assistive software can help not only students with diagnosed disabilities but also those with generally weak reading skills.
Wang, Ying; Hu, Haiyan; Li, Xiaoman
2016-08-01
Metagenomics is a next-generation omics field currently impacting postgenomic life sciences and medicine. Binning metagenomic reads is essential for the understanding of microbial function, compositions, and interactions in given environments. Despite the existence of dozens of computational methods for metagenomic read binning, it is still very challenging to bin reads. This is especially true for reads from unknown species, from species with similar abundance, and/or from low-abundance species in environmental samples. In this study, we developed a novel taxonomy-dependent and alignment-free approach called MBMC (Metagenomic Binning by Markov Chains). Different from all existing methods, MBMC bins reads by measuring the similarity of reads to the trained Markov chains for different taxa instead of directly comparing reads with known genomic sequences. By testing on more than 24 simulated and experimental datasets with species of similar abundance, species of low abundance, and/or unknown species, we report here that MBMC reliably grouped reads from different species into separate bins. Compared with four existing approaches, we demonstrated that the performance of MBMC was comparable with existing approaches when binning reads from sequenced species, and superior to existing approaches when binning reads from unknown species. MBMC is a pivotal tool for binning metagenomic reads in the current era of Big Data and postgenomic integrative biology. The MBMC software can be freely downloaded at http://hulab.ucf.edu/research/projects/metagenomics/MBMC.html .
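The core idea of binning by Markov chains can be sketched in a few lines: train a k-th order chain per taxon from reference sequences, then assign each read to the taxon whose chain gives it the highest log-likelihood. This is a generic illustration of the technique, not MBMC's actual implementation.

```python
import math
from collections import defaultdict

def train_markov(seqs, k=2):
    """Estimate k-th order transition log-probs from training sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(len(s) - k):
            counts[s[i:i+k]][s[i+k]] += 1
    model = {}
    for ctx, nxt in counts.items():
        total = sum(nxt.values()) + 4  # add-one smoothing over ACGT
        model[ctx] = {b: math.log((nxt.get(b, 0) + 1) / total) for b in "ACGT"}
    return model

def score(read, model, k=2):
    """Log-likelihood of a read under a trained chain.

    Contexts never seen in training fall back to log(1/4)."""
    lp = 0.0
    for i in range(len(read) - k):
        ctx, b = read[i:i+k], read[i+k]
        lp += model.get(ctx, {}).get(b, math.log(0.25))
    return lp

# Bin a read to the taxon whose chain likes it best.
m_at = train_markov(["ATATATATATATATAT"] * 3)
m_gc = train_markov(["GCGCGCGCGCGCGCGC"] * 3)
read = "ATATATAT"
best = max([("AT-rich", m_at), ("GC-rich", m_gc)],
           key=lambda t: score(read, t[1]))[0]
```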
Analysis of quality raw data of second generation sequencers with Quality Assessment Software.
Ramos, Rommel Tj; Carneiro, Adriana R; Baumbach, Jan; Azevedo, Vasco; Schneider, Maria Pc; Silva, Artur
2011-04-18
Second generation technologies have advantages over Sanger; however, they have resulted in new challenges for the genome construction process, especially because of the small size of the reads, despite the high degree of coverage. Independent of the program chosen for the construction process, DNA sequences are superimposed, based on identity, to extend the reads, generating contigs; mismatches indicate a lack of homology and are not included. This process improves our confidence in the sequences that are generated. We developed Quality Assessment Software, with which one can review graphs showing the distribution of quality values from the sequencing reads. This software allows us to adopt more stringent quality standards for sequence data, based on quality-graph analysis and estimated coverage after applying the quality filter, providing acceptable sequence coverage for genome construction from short reads. Quality filtering is a fundamental step in the process of constructing genomes, as it reduces the frequency of incorrect alignments that are caused by measuring errors, which can occur during the construction process due to the size of the reads, provoking misassemblies. Application of quality filters to sequence data, using the software Quality Assessment, along with graphing analyses, provided greater precision in the definition of cutoff parameters, which increased the accuracy of genome construction.
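The quality-filtering step described above typically works on Phred scores decoded from FASTQ quality strings. The following is a minimal sketch of such a filter with illustrative cutoffs; it is not the Quality Assessment Software itself.

```python
def phred_scores(qual_line, offset=33):
    """Decode a FASTQ quality string (Sanger/Illumina 1.8+ encoding,
    ASCII offset 33) into per-base Phred scores."""
    return [ord(c) - offset for c in qual_line]

def passes_filter(qual_line, min_mean=20, min_base=10):
    """Keep a read only if its mean quality and worst base clear the
    cutoffs (cutoff values here are illustrative)."""
    q = phred_scores(qual_line)
    return sum(q) / len(q) >= min_mean and min(q) >= min_base

good = passes_filter("IIIIIIII")   # 'I' decodes to Phred 40 throughout
bad = passes_filter('""""""""')    # '"' decodes to Phred 1 throughout
```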
Using Noldus Observer XT for research on deaf signers learning to read: an innovative methodology.
Ducharme, Daphne A; Arcand, Isabelle
2009-08-01
Despite years of research on the reading problems of deaf students, we still do not know how deaf signers who read well actually crack the code of print. How connections are made between sign language and written language is still an open question. In this article, we show how the Noldus Observer XT software can be used to conduct an in-depth analysis of the online behavior of deaf readers. First, we examine factors that may have an impact on reading behavior. Then, we describe how we videotaped teachers with their deaf student signers of langue des signes québécoise during a reading task, how we conducted a recall activity to better understand the students' reading behavior, and how we used this innovative software to analyze the taped footage. Finally, we discuss the contribution this type of research can have on the future reading behavior of deaf students.
PAnalyzer: a software tool for protein inference in shotgun proteomics.
Prieto, Gorka; Aloria, Kerman; Osinalde, Nerea; Fullaondo, Asier; Arizmendi, Jesus M; Matthiesen, Rune
2012-11-05
Protein inference from peptide identifications in shotgun proteomics must deal with ambiguities that arise due to the presence of peptides shared between different proteins, which is common in higher eukaryotes. Recently, data independent acquisition (DIA) approaches have emerged as an alternative to the traditional data dependent acquisition (DDA) in shotgun proteomics experiments. MSE is the term used to name one of the DIA approaches used in QTOF instruments. MSE data require specialized software to process acquired spectra and to perform peptide and protein identifications. However, the software available at the moment does not group the identified proteins in a transparent way by taking into account peptide evidence categories. Furthermore, the inspection, comparison and reporting of the obtained results require tedious manual intervention. Here we report a software tool to address these limitations for MSE data. In this paper we present PAnalyzer, a software tool focused on the protein inference process of shotgun proteomics. Our approach considers all the identified proteins and groups them when necessary, indicating their confidence using different evidence categories. PAnalyzer can read protein identification files in the XML output format of the ProteinLynx Global Server (PLGS) software provided by Waters Corporation for their MSE data, and also in the mzIdentML format recently standardized by HUPO-PSI. Multiple files can also be read simultaneously and are considered as technical replicates. Results are saved to CSV, HTML and mzIdentML (in the case of a single mzIdentML input file) files. An MSE analysis of a real sample is presented to compare the results of PAnalyzer and ProteinLynx Global Server. We present a software tool to deal with the ambiguities that arise in the protein inference process. Key contributions are support for MSE data analysis by ProteinLynx Global Server and technical replicates integration. 
PAnalyzer is an easy-to-use, multiplatform, free software tool.
Medium Fidelity Simulation of Oxygen Tank Venting
NASA Technical Reports Server (NTRS)
Sweet, Adam; Kurien, James; Lau, Sonie (Technical Monitor)
2001-01-01
The item to be cleared is a medium-fidelity software simulation model of a vented cryogenic tank. Such tanks are commonly used to transport cryogenic liquids such as liquid oxygen via truck, and have appeared on liquid-fueled rockets for decades. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model generates simulated readings for the tank pressure and temperature as the simulated cryogenic liquid boils off and is vented. Failures (such as a broken vent valve) can be injected into the simulation to produce readings corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated readings. This model does not contain any encryption software, nor can it perform any control tasks that might be export controlled.
Lee, Sejoon; Lee, Soohyun; Ouellette, Scott; Park, Woong-Yang; Lee, Eunjung A; Park, Peter J
2017-06-20
In many next-generation sequencing (NGS) studies, multiple samples or data types are profiled for each individual. An important quality control (QC) step in these studies is to ensure that datasets from the same subject are properly paired. Given the heterogeneity of data types, file types and sequencing depths in a multi-dimensional study, a robust program that provides a standardized metric for genotype comparisons would be useful. Here, we describe NGSCheckMate, a user-friendly software package for verifying sample identities from FASTQ, BAM or VCF files. This tool uses a model-based method to compare allele read fractions at known single-nucleotide polymorphisms, considering depth-dependent behavior of similarity metrics for identical and unrelated samples. Our evaluation shows that NGSCheckMate is effective for a variety of data types, including exome sequencing, whole-genome sequencing, RNA-seq, ChIP-seq, targeted sequencing and single-cell whole-genome sequencing, with a minimal requirement for sequencing depth (>0.5X). An alignment-free module can be run directly on FASTQ files for a quick initial check. We recommend using this software as a QC step in NGS studies. https://github.com/parklab/NGSCheckMate. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
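The heart of the sample-pairing check is comparing variant-allele fractions (VAFs) at shared SNP sites: datasets from the same individual correlate strongly, unrelated ones do not. The sketch below uses a fixed correlation threshold as a stand-in for NGSCheckMate's depth-dependent model; names and threshold are illustrative.

```python
import math

def vaf_correlation(vafs_a, vafs_b):
    """Pearson correlation of variant-allele fractions at shared SNP sites."""
    n = len(vafs_a)
    ma, mb = sum(vafs_a) / n, sum(vafs_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(vafs_a, vafs_b))
    sa = math.sqrt(sum((a - ma) ** 2 for a in vafs_a))
    sb = math.sqrt(sum((b - mb) ** 2 for b in vafs_b))
    return cov / (sa * sb)

def same_subject(vafs_a, vafs_b, threshold=0.7):
    """Flag two datasets as the same individual if VAF profiles correlate
    strongly (fixed threshold; the real tool adapts it to sequencing depth)."""
    return vaf_correlation(vafs_a, vafs_b) >= threshold

# Toy VAF profiles at six SNP sites (0 = hom-ref, 0.5 = het, 1 = hom-alt):
subject1 = [0.0, 0.5, 1.0, 0.5, 0.0, 1.0]
subject1_rnaseq = [0.05, 0.48, 0.95, 0.55, 0.02, 0.97]  # same person, noisier assay
unrelated = [1.0, 0.0, 0.5, 0.0, 1.0, 0.5]
```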
Rapid Threat Organism Recognition Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Kelly P.; Solberg, Owen D.; Schoeniger, Joseph S.
2013-05-07
The RAPTOR computational pipeline identifies microbial nucleic acid sequences present in sequence data from clinical samples. It takes as input raw short-read genomic sequence data (in particular, the type generated by the Illumina sequencing platforms) and outputs taxonomic evaluation of detected microbes in various human-readable formats. This software was designed to assist in the diagnosis or characterization of infectious disease, by detecting pathogen sequences in nucleic acid sequence data from clinical samples. It has also been applied in the detection of algal pathogens, when algal biofuel ponds became unproductive. RAPTOR first trims and filters genomic sequence reads based on quality and related considerations, then performs a quick alignment to the human (or other host) genome to filter out host sequences, then performs a deeper search against microbial genomes. Alignment to a protein sequence database is optional. Alignment results are summarized and placed in a taxonomic framework using the Lowest Common Ancestor algorithm.
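The Lowest Common Ancestor placement mentioned above assigns a read hitting multiple taxa to the deepest node shared by all their lineages. A minimal sketch over a hypothetical mini-taxonomy (the taxon names and parent map are illustrative):

```python
def lineage(taxon, parent):
    """Path from a taxon up to the root, given a child -> parent map."""
    path = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def lowest_common_ancestor(taxa, parent):
    """Deepest node shared by every hit's lineage (the LCA placement)."""
    paths = [lineage(t, parent) for t in taxa]
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    # Walking up from any leaf, the first common node is the deepest one.
    return next(node for node in paths[0] if node in common)

# Hypothetical mini-taxonomy: two E. coli strains resolve to "E. coli",
# while hits spanning genera fall back to the family level.
parent = {"E. coli K-12": "E. coli", "E. coli O157": "E. coli",
          "E. coli": "Escherichia", "Escherichia": "Enterobacteriaceae",
          "Salmonella": "Enterobacteriaceae", "Enterobacteriaceae": "root"}
lca1 = lowest_common_ancestor(["E. coli K-12", "E. coli O157"], parent)
lca2 = lowest_common_ancestor(["E. coli K-12", "Salmonella"], parent)
```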
"To Gloss or Not To Gloss": An Investigation of Reading Comprehension Online.
ERIC Educational Resources Information Center
Lomika, Lara L.
1998-01-01
Investigated effects of multimedia reading software on reading comprehension. Twelve college students enrolled in a second semester French course were instructed to think aloud during reading of text on the computer screen. They read text under one of three conditions: full glossing, limited glossing, no glossing. Suggests computerized reading…
Investigating Deaf Students' Use of Visual Multimedia Resources in Reading Comprehension
ERIC Educational Resources Information Center
Nikolaraizi, Magda; Vekiri, Ioanna; Easterbrooks, Susan R.
2013-01-01
A mixed research design was used to examine how deaf students used the visual resources of a multimedia software package that was designed to support reading comprehension. The viewing behavior of 8 deaf students, ages 8-12 years, was recorded during their interaction with multimedia software that included narrative texts enriched with Greek Sign…
Electronic Thermometer Readings
NASA Technical Reports Server (NTRS)
2001-01-01
NASA Stennis' adaptive predictive algorithm for electronic thermometers uses sample readings during the initial rise in temperature and applies an algorithm that accurately and rapidly predicts the steady state temperature. The final steady state temperature of an object can be calculated based on the second-order logarithm of the temperature signals acquired by the sensor and predetermined variables from the sensor characteristics. These variables are calculated during tests of the sensor. Once the variables are determined, relatively little data acquisition and data processing time by the algorithm is required to provide a near-accurate approximation of the final temperature. This reduces the delay in the steady state response time of a temperature sensor. This advanced algorithm can be implemented in existing software or hardware with an erasable programmable read-only memory (EPROM). The capability for easy integration eliminates the expense of developing a whole new system that offers the benefits provided by NASA Stennis' technology.
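The idea of predicting steady state from early readings can be illustrated with a first-order exponential model, which admits a closed-form prediction from three equally spaced samples. Note this is a simpler stand-in for the second-order algorithm the abstract describes, with illustrative numbers.

```python
def predict_steady_state(t0, t1, t2):
    """Predict the final temperature from three equally spaced early readings,
    assuming a first-order exponential approach T(n) = Tss - C * r**n.

    Successive differences decay geometrically, so r = d2/d1 and the
    remaining rise sums to d1 / (1 - r)."""
    d1, d2 = t1 - t0, t2 - t1
    r = d2 / d1                     # per-interval decay ratio
    return t0 + d1 / (1.0 - r)

# Synthetic sensor: steady state 37.0, approached exponentially from 20.0.
Tss, C, r = 37.0, 17.0, 0.8
samples = [Tss - C * r ** n for n in range(3)]
pred = predict_steady_state(*samples)
```

Because the prediction needs only a few samples and a handful of arithmetic operations, it fits comfortably in EPROM-based firmware, which is the integration benefit the abstract highlights.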
Encouraging Recreational Reading (The Printout).
ERIC Educational Resources Information Center
Balajthy, Ernest
1988-01-01
Describes computer software, including "The Electronic Bookshelf" and "Return to Reading," which provides motivation for recreational reading in various ways, including: quizzes, games based on books, and whole language activities for children's literature and young adult fiction. (MM)
Schneider, Florian R; Mann, Alexander B; Konorov, Igor; Delso, Gaspar; Paul, Stephan; Ziegler, Sibylle I
2012-06-01
A one-day laboratory course on positron emission tomography (PET) for the education of physics students and PhD students in medical physics has been set up. In the course, the physical background and the principles of a PET scanner are introduced. Course attendees set the system in operation, calibrate it using a ²²Na point source and reconstruct different source geometries filled with ¹⁸F. The PET scanner features an individual channel read-out of 96 lutetium oxyorthosilicate (LSO) scintillator crystals coupled to avalanche photodiodes (APD). The analog data of each APD are digitized by fast sampling analog-to-digital converters (SADC) and processed within field programmable gate arrays (FPGA) to extract amplitudes and time stamps. All SADCs are continuously sampling at a precise rate of 80 MHz, which is synchronous for the whole system. The data are transmitted via USB to a Linux PC, where further processing and the image reconstruction are performed. The course attendees get an insight into detector techniques, modern read-out electronics, data acquisition and PET image reconstruction. In addition, a short introduction to some common software applications used in particle and high energy physics is part of the course. Copyright © 2011. Published by Elsevier GmbH.
Microcomputer Activities Which Encourage the Reading-Writing Connection.
ERIC Educational Resources Information Center
Balajthy, Ernest
Many reading teachers, cognizant of the creative opportunities for skill development allowed by new reading-writing software, are choosing to use microcomputers in their classrooms full-time. Adventure story creation programs capitalize on reading-writing integration by allowing children, with appropriate assistance, to create their own…
Navigating through the minefield of read-across: from research to practical tools (WC10)
Read-across is used for regulatory purposes as a data gap filling technique. Research efforts have focused on the scientific justification and documentation challenges involved in read-across predictions. Software tools have also been developed to facilitate read-across predictio...
Watt, Stuart; Jiao, Wei; Brown, Andrew M K; Petrocelli, Teresa; Tran, Ben; Zhang, Tong; McPherson, John D; Kamel-Reid, Suzanne; Bedard, Philippe L; Onetto, Nicole; Hudson, Thomas J; Dancey, Janet; Siu, Lillian L; Stein, Lincoln; Ferretti, Vincent
2013-09-01
Using sequencing information to guide clinical decision-making requires coordination of a diverse set of people and activities. In clinical genomics, the process typically includes sample acquisition, template preparation, genome data generation, analysis to identify and confirm variant alleles, interpretation of clinical significance, and reporting to clinicians. We describe a software application developed within a clinical genomics study, to support this entire process. The software application tracks patients, samples, genomic results, decisions and reports across the cohort, monitors progress and sends reminders, and works alongside an electronic data capture system for the trial's clinical and genomic data. It incorporates systems to read, store, analyze and consolidate sequencing results from multiple technologies, and provides a curated knowledge base of tumor mutation frequency (from the COSMIC database) annotated with clinical significance and drug sensitivity to generate reports for clinicians. By supporting the entire process, the application provides deep support for clinical decision making, enabling the generation of relevant guidance in reports for verification by an expert panel prior to forwarding to the treating physician. Copyright © 2013 Elsevier Inc. All rights reserved.
Investigating the Effects of the Academy of Reading Program on Middle School Reading Achievement
ERIC Educational Resources Information Center
Myers, Brenda Gail
2016-01-01
Using a quantitative ex post facto causal comparative research design, this study analyzed the effects of the Academy of Reading software program on students' reading achievement. Tennessee Comprehensive Assessment Program (TCAP) reading scale scores of students in the fourth, fifth, and sixth grades from 2013-2014 were utilized in this study. The…
ERIC Educational Resources Information Center
Loucky, John Paul
This article summarizes software which can help to enhance both local and specific reading skills (often done through what is known as intensive reading) and global or general reading skills (known as extensive reading). Although the use of computerized bilingual dictionaries (CBDs) and translation websites of various types does not appear to…
ERIC Educational Resources Information Center
Vollands, Stacy R.; And Others
A study evaluated the effect software for self-assessment and management of reading practice had on reading achievement and motivation in two primary schools in Aberdeen, Scotland. The program utilized was The Accelerated Reader (AR) which was designed to enable curriculum based assessment of reading comprehension within the classroom. Students…
Read Naturally [R]. What Works Clearinghouse Intervention Report. Updated
ERIC Educational Resources Information Center
What Works Clearinghouse, 2013
2013-01-01
The “Read Naturally[R]” program is a supplemental reading program that aims to improve reading fluency, accuracy, and comprehension of elementary and middle school students using a combination of texts, audio CDs, and computer software. The program uses one of four products that share a common fluency-building strategy: “Read Naturally[R] Masters…
ERIC Educational Resources Information Center
Balajthy, Ernest
1997-01-01
Presents the first year's results of a continuing project to monitor the availability of software of relevance for literacy education purposes. Concludes there is an enormous amount of software available for use by teachers of reading and literacy--whereas drill-and-practice software is the largest category of software available, large numbers of…
ReadXplorer—visualization and analysis of mapped sequences
Hilker, Rolf; Stadermann, Kai Bernd; Doppmeier, Daniel; Kalinowski, Jörn; Stoye, Jens; Straube, Jasmin; Winnebald, Jörn; Goesmann, Alexander
2014-01-01
Motivation: Fast algorithms and well-arranged visualizations are required for the comprehensive analysis of the ever-growing size of genomic and transcriptomic next-generation sequencing data. Results: ReadXplorer is a software offering straightforward visualization and extensive analysis functions for genomic and transcriptomic DNA sequences mapped on a reference. A unique specialty of ReadXplorer is the quality classification of the read mappings. It is incorporated in all analysis functions and displayed in ReadXplorer's various synchronized data viewers for (i) the reference sequence, its base coverage as (ii) normalizable plot and (iii) histogram, (iv) read alignments and (v) read pairs. ReadXplorer's analysis capability covers RNA secondary structure prediction, single nucleotide polymorphism and deletion–insertion polymorphism detection, genomic feature and general coverage analysis. Especially for RNA-Seq data, it offers differential gene expression analysis, transcription start site and operon detection as well as RPKM value and read count calculations. Furthermore, ReadXplorer can combine or superimpose coverage of different datasets. Availability and implementation: ReadXplorer is available as open-source software at http://www.readxplorer.org along with a detailed manual. Contact: rhilker@mikrobio.med.uni-giessen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24790157
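Among the analyses ReadXplorer offers, the RPKM normalization has a simple closed form: read count scaled by feature length (in kilobases) and library size (in millions of mapped reads). A minimal sketch of the formula, independent of ReadXplorer's own code:

```python
def rpkm(read_count, feature_length_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads:
    count / (length/1e3) / (total/1e6)."""
    return read_count / (feature_length_bp / 1e3) / (total_mapped_reads / 1e6)

# 500 reads on a 2 kb gene in a library of 10 million mapped reads:
value = rpkm(500, 2000, 10_000_000)
```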
An Integrated System for Wildlife Sensing
2014-08-14
design requirement. "Sensor Controller" software: a custom Sensor Controller application was developed for the Android device to collect ... and log readings from that device's sensors. "Camera Controller" software: a custom Camera Controller application was developed for the Android device ... into 2 separate Android applications (Figure 4). The Sensor Controller logs readings periodically from the Android device's organic sensors, and
User-friendly tools on handheld devices for observer performance study
NASA Astrophysics Data System (ADS)
Matsumoto, Takuya; Hara, Takeshi; Shiraishi, Junji; Fukuoka, Daisuke; Abe, Hiroyuki; Matsusako, Masaki; Yamada, Akira; Zhou, Xiangrong; Fujita, Hiroshi
2012-02-01
ROC studies require complex procedures to select cases from many data samples and to set confidence levels for each selected case in order to generate ROC curves. In some observer performance studies, researchers have to develop software with a specific graphical user interface (GUI) to obtain confidence levels from readers. Because ROC studies can be designed for various clinical situations, preparing software for every ROC study is a difficult task. In this work, we have developed software for recording confidence levels during observer studies on small personal handheld devices such as the iPhone, iPod touch, and iPad. To confirm the functions of our software, three radiologists performed observer studies to detect lung nodules using the public database of chest radiograms published by the Japan Society of Radiological Technology. The output in text format conforms to the format of the well-known ROC kit from the University of Chicago. The time required for reading each case was also recorded very precisely.
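Once confidence levels are collected, the ROC area can be estimated directly from the ratings via the Mann-Whitney form of AUC (the probability that a signal case outranks a noise case, with ties counting half). A minimal sketch with illustrative ratings, independent of the ROC kit mentioned above:

```python
def auc_from_ratings(signal_scores, noise_scores):
    """Empirical ROC area: fraction of (signal, noise) pairs where the
    signal case gets the higher confidence rating (ties count 0.5)."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_scores) * len(noise_scores))

# Confidence ratings (1-5) from a hypothetical reading session:
auc = auc_from_ratings([5, 4, 4, 3], [2, 3, 1, 2])
```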
ERIC Educational Resources Information Center
Wood, Sarah G.; Moxley, Jerad H.; Tighe, Elizabeth L.; Wagner, Richard K.
2018-01-01
Text-to-speech and related read-aloud tools are being widely implemented in an attempt to assist students' reading comprehension skills. Read-aloud software, including text-to-speech, is used to translate written text into spoken text, enabling one to listen to written text while reading along. It is not clear how effective text-to-speech is at…
ERIC Educational Resources Information Center
Gibson, Lenwood, Jr.; Cartledge, Gwendolyn; Keyes, Starr E.; Yawn, Christopher D.
2014-01-01
The current study investigated the effects of a repeated reading intervention on the oral reading fluency (ORF) and comprehension on generalization passages for eight, first-grade students with reading risk. The intervention involved a commercial computerized program (Read Naturally Software Edition [RNSE], 2009) and a generalization principle…
A hybrid short read mapping accelerator
2013-01-01
Background The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains due to their outstanding parallel data processing capabilities, making them a competitive platform to solve problems that are “inherently parallel”. Results We present a hybrid system for short read mapping utilizing both FPGA-based hardware and CPU-based software. The computation intensive alignment and the seed generation operations are mapped onto an FPGA. We present a computationally efficient, parallel block-wise alignment structure (Align Core) to approximate the conventional dynamic programming algorithm. The performance is compared to the multi-threaded CPU-based GASSST and BWA software implementations. For single-end alignment, our hybrid system achieves faster processing speed than GASSST (with a similar sensitivity) and BWA (with a higher sensitivity); for pair-end alignment, our design achieves a slightly worse sensitivity than that of BWA but has a higher processing speed. Conclusions This paper shows that our hybrid system can effectively accelerate the mapping of short reads to a reference genome based on the seed-and-extend approach. The performance comparison to the GASSST and BWA software implementations under different conditions shows that our hybrid design achieves a high degree of sensitivity and requires less overall execution time with only modest FPGA resource utilization. 
Our hybrid system design also shows that the performance bottleneck for the short read mapping problem can be changed from the alignment stage to the seed generation stage, which provides an additional requirement for the future development of short read aligners. PMID:23441908
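The seed-and-extend approach that the hybrid system accelerates can be sketched in software: hash the reference's k-mers into a seed index, then verify each candidate position with an extension step. Real aligners use many seeds per read and dynamic-programming extension (as in the Align Core above); this toy version keeps only the structure, with illustrative sequences.

```python
from collections import defaultdict

def build_index(ref, k=4):
    """Hash every k-mer of the reference to its positions (the seed index)."""
    idx = defaultdict(list)
    for i in range(len(ref) - k + 1):
        idx[ref[i:i+k]].append(i)
    return idx

def map_read(read, ref, idx, k=4, max_mismatch=1):
    """Seed with the read's first k-mer, then extend by counting mismatches
    against the reference window (a restrictive Hamming-distance model)."""
    for pos in idx.get(read[:k], []):
        window = ref[pos:pos + len(read)]
        if len(window) == len(read):
            mismatches = sum(a != b for a, b in zip(read, window))
            if mismatches <= max_mismatch:
                return pos
    return -1  # unmapped

ref = "ACGTTGCAACGTACGTTAGC"
idx = build_index(ref)
hit = map_read("ACGTTAGC", ref, idx)  # matches the reference suffix at 12
```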
Facilitating text reading in posterior cortical atrophy.
Yong, Keir X X; Rajdev, Kishan; Shakespeare, Timothy J; Leff, Alexander P; Crutch, Sebastian J
2015-07-28
We report (1) the quantitative investigation of text reading in posterior cortical atrophy (PCA), and (2) the effects of 2 novel software-based reading aids that result in dramatic improvements in the reading ability of patients with PCA. Reading performance, eye movements, and fixations were assessed in patients with PCA and typical Alzheimer disease and in healthy controls (experiment 1). Two reading aids (single- and double-word) were evaluated based on the notion that reducing the spatial and oculomotor demands of text reading might support reading in PCA (experiment 2). Mean reading accuracy in patients with PCA was significantly worse (57%) compared with both patients with typical Alzheimer disease (98%) and healthy controls (99%); spatial aspects of passages were the primary determinants of text reading ability in PCA. Both aids led to considerable gains in reading accuracy (PCA mean reading accuracy: single-word reading aid = 96%; individual patient improvement range: 6%-270%) and self-rated measures of reading. Data suggest a greater efficiency of fixations and eye movements under the single-word reading aid in patients with PCA. These findings demonstrate how neurologic characterization of a neurodegenerative syndrome (PCA) and detailed cognitive analysis of an important everyday skill (reading) can combine to yield aids capable of supporting important everyday functional abilities. This study provides Class III evidence that for patients with PCA, 2 software-based reading aids (single-word and double-word) improve reading accuracy. © 2015 American Academy of Neurology.
ERIC Educational Resources Information Center
Wade, Erin; Boon, Richard T.; Spencer, Vicky G.
2010-01-01
The aim of this research brief was to explore the efficacy of story mapping, with the integration of Kidspiration[C] software, to enhance the reading comprehension skills of story grammar components for elementary-age students. Three students served as the participants, two in third grade and one in fourth, with specific learning disabilities…
ERIC Educational Resources Information Center
Wood, Eileen; Gottardo, Alexandra; Grant, Amy; Evans, Mary Ann; Phillips, Linda; Savage, Robert
2012-01-01
As computers become an increasingly ubiquitous part of young children's lives there is a need to examine how best to harness digital technologies to promote learning in early childhood education contexts. The development of emergent literacy skills is 1 domain for which numerous software programs are available for young learners. In this study, we…
ERIC Educational Resources Information Center
Bennett, Jessica G.; Gardner, Ralph, III; Cartledge, Gwendolyn; Ramnath, Rajiv; Council, Morris R., III
2017-01-01
This study investigated the effects of a multicomponent, supplemental intervention on the reading fluency of second-grade African-American urban students who showed reading and special education risk. The packaged intervention combined repeated readings and culturally relevant stories, delivered through a novel computer software program to enhance…
Automated Processing and Evaluation of Anti-Nuclear Antibody Indirect Immunofluorescence Testing.
Ricchiuti, Vincent; Adams, Joseph; Hardy, Donna J; Katayev, Alexander; Fleming, James K
2018-01-01
Indirect immunofluorescence (IIF) is considered by the American College of Rheumatology (ACR) and the international consensus on ANA patterns (ICAP) the gold standard for the screening of anti-nuclear antibodies (ANA). As conventional IIF is labor intensive, time-consuming, subjective, and poorly standardized, there have been ongoing efforts to improve the standardization of reagents and to develop automated platforms for assay incubation, microscopy, and evaluation. In this study, the workflow and performance characteristics of a fully automated ANA IIF system (Sprinter XL, EUROPattern Suite, IFA 40: HEp-20-10 cells) were compared to a manual approach using visual microscopy with a filter device for single-well titration and to technologist reading. The Sprinter/EUROPattern system enabled the processing of large daily workload cohorts in less than 8 h and the reduction of labor hands-on time by more than 4 h. Regarding the discrimination of positive from negative samples, the overall agreement of the EUROPattern software with technologist reading was higher (95.6%) than when compared to the current method (89.4%). Moreover, the software was consistent with technologist reading in 80.6-97.5% of patterns and 71.0-93.8% of titers. In conclusion, the Sprinter/EUROPattern system provides substantial labor savings and good concordance with technologist ANA IIF microscopy, thus increasing standardization, laboratory efficiency, and removing subjectivity.
Nine Easy Steps to Avoiding Software Copyright Infringement.
ERIC Educational Resources Information Center
Gamble, Lanny R.; Anderson, Larry S.
1989-01-01
To avoid microcomputer software copyright infringement, administrators must be aware of the law, read the software agreements, maintain good records, submit all software registration cards, provide secure storage, post warnings, be consistent when establishing and enforcing policies, consider a site license, and ensure the legality of currently…
Tracking Positions and Attitudes of Mars Rovers
NASA Technical Reports Server (NTRS)
Ali, Khaled; Vanelli, Charles; Biesiadecki, Jeffrey; San Martin, Alejandro; Maimone, Mark; Cheng, Yang; Alexander, James
2006-01-01
The Surface Attitude Position and Pointing (SAPP) software, which runs on computers aboard the Mars Exploration Rovers, tracks the positions and attitudes of the rovers on the surface of Mars. Each rover acquires data on attitude from a combination of accelerometer readings and images of the Sun acquired autonomously, using a pointable camera to search the sky for the Sun. Depending on the nature of movement commanded remotely by operators on Earth, the software propagates attitude and position by use of either (1) accelerometer and gyroscope readings or (2) gyroscope readings and wheel odometry. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly on high-wheel-slip terrain. The attitude data are used by other software and ground-based personnel for pointing a high-gain antenna, planning and execution of driving, and positioning and aiming scientific instruments.
ERIC Educational Resources Information Center
Braten, Ivar; Ferguson, Leila E.; Anmarkrud, Oistein; Stromso, Helge I.
2013-01-01
Sixty-five Norwegian 10th graders used the software Read&Answer 2.0 (Vidal-Abarca et al., 2011) to read five different texts presenting conflicting views on the controversial scientific issue of sun exposure and health. Participants were administered a multiple-choice topic-knowledge measure before and after reading, a word recognition task,…
Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator
NASA Technical Reports Server (NTRS)
Bolen, Kenny; Greenlaw, Ronald
2010-01-01
A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
An optimized protocol for generation and analysis of Ion Proton sequencing reads for RNA-Seq.
Yuan, Yongxian; Xu, Huaiqian; Leung, Ross Ka-Kit
2016-05-26
Previous studies compared the running cost, time and other performance measures of popular sequencing platforms. However, a comprehensive assessment of library construction and analysis protocols for the Proton sequencing platform remains unexplored. Unlike Illumina platforms, Proton reads are heterogeneous in length and quality, and combining sequencing data from different platforms can result in reads of varying lengths. Whether the commonly used software handles such data satisfactorily is unknown. Using universal human reference RNA as the initial material, the RNase III and chemical fragmentation methods of library construction showed similar results in gene and junction discovery numbers and in expression-level estimation accuracy. In contrast, sequencing quality, read length and the choice of software affected the mapping rate to a much larger extent. The unspliced aligner TMAP attained the highest mapping rate (97.27 % to genome, 86.46 % to transcriptome), though 47.83 % of mapped reads were clipped. Long reads could paradoxically reduce mapping at junctions. With a reference annotation guide, the mapping rate of TopHat2 significantly increased from 75.79 to 92.09 %, especially for long (>150 bp) reads. Sailfish, a k-mer based gene expression quantifier, attained results highly consistent with those of the TaqMan array, and the highest sensitivity. We provide, for the first time, reference statistics for library preparation methods, gene detection and quantification, and junction discovery for RNA-Seq on the Ion Proton platform. Chemical fragmentation performed equally well as the enzyme-based method. The optimal Ion Proton sequencing options and analysis software have been evaluated.
Computer-Assisted Language Learning for Japanese on the Macintosh: An Update of What's Available.
ERIC Educational Resources Information Center
Darnall, Cliff; And Others
This paper outlines a presentation on available Macintosh computer software for learning Japanese. The software systems described are categorized by their emphasis on speaking, writing, or reading, with a special section on software for young learners. Software that emphasizes spoken language includes "Berlitz for Business…
A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.
Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong
2015-12-01
SOAPsnv is software for identifying single nucleotide variations in cancer genes. However, its performance is yet to match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm: the original algorithm's I/O process is time-consuming and inefficient at reading input files, and its scalability is poor. Therefore, we designed a new algorithm, named BamPileup, aiming to improve sequential read performance. The new pileup algorithm implements a parallel read mode based on an index, so that each thread can read data directly starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the processing time of the algorithm is reduced to 3.9 s and the application can achieve a speedup of up to 100×. Moreover, the scalability of the new algorithm is also satisfactory.
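The index-based parallel read described above can be sketched generically: an index maps each chunk to a byte offset, so every worker thread seeks straight to its starting position instead of scanning the file sequentially. A minimal Python illustration (hypothetical names and structure; this is not the BamPileup code):

```python
import threading

def parallel_read(path, offsets, worker):
    """Read a file in parallel: thread i seeks to its indexed start
    offset and processes the bytes up to the next offset."""
    results = [None] * (len(offsets) - 1)

    def run(i):
        with open(path, "rb") as f:              # one handle per thread
            f.seek(offsets[i])                   # jump straight to the chunk
            chunk = f.read(offsets[i + 1] - offsets[i])
            results[i] = worker(chunk)           # per-chunk processing

    threads = [threading.Thread(target=run, args=(i,))
               for i in range(len(offsets) - 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because every thread knows its byte range in advance, no coordination is needed during the read itself, which is what makes the approach scale.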
Concurrent and Accurate Short Read Mapping on Multicore Processors.
Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S
2015-01-01
We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions, based on an adaptive division of RNA reads into small segments (or seeds) that are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA on RNA reads of 100-400 nucleotides, which excels in execution time/sensitivity over state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
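The two-tier strategy used by HPG Aligner SA and HPG-Methyl alike (a fast exact index lookup for most reads, Smith-Waterman reserved for the conflictive ones) can be shown in miniature. In this illustrative sketch, `str.find` stands in for the suffix-array query and the Smith-Waterman routine returns only the best local score; real aligners index the reference and report full alignments:

```python
def smith_waterman(read, ref, match=2, mismatch=-1, gap=-2):
    """Best local alignment score (Smith-Waterman), two-row DP."""
    prev = [0] * (len(ref) + 1)
    best = 0
    for i in range(1, len(read) + 1):
        curr = [0] * (len(ref) + 1)
        for j in range(1, len(ref) + 1):
            s = match if read[i - 1] == ref[j - 1] else mismatch
            curr[j] = max(0,                 # local alignment: never negative
                          prev[j - 1] + s,   # diagonal: match/mismatch
                          prev[j] + gap,     # gap in read
                          curr[j - 1] + gap) # gap in reference
            best = max(best, curr[j])
        prev = curr
    return best

def map_read(read, ref):
    """Fast path: exact lookup (standing in for the suffix-array query);
    slow path: Smith-Waterman for reads with no exact hit."""
    pos = ref.find(read)
    if pos >= 0:
        return ("exact", pos)
    return ("sw", smith_waterman(read, ref))
```

The design point is that the expensive quadratic step runs only on the small minority of reads the fast index cannot place.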
Analysis of read-out heating rate effects on the glow peaks of TLD-100 using WinGCF software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauk, Sabar, E-mail: sabar@usm.my; Hussin, Siti Fatimah; Alam, Md. Shah
This study was done to analyze the effects of the read-out heating rate on the glow peaks of LiF:Mg,Ti (TLD-100) thermoluminescent dosimeters (TLDs) using the WinGCF computer software. The TLDs were exposed to X-ray photons at 72 kVp and 200 mAs in air and were read out using a Harshaw 3500 TLD reader at four heating rates: 10, 7, 4 and 1 °C s⁻¹. It was observed that lowering the heating rate could separate more glow peaks. The activation energy for peak 5 was found to be lower than that for peak 4. The peak maximum temperature and the integral value of the main peak decreased as the heating rate decreased.
Tagging of Test Tubes with Electronic p-Chips for Use in Biorepositories.
Mandecki, Wlodek; Kopacka, Wesley M; Qian, Ziye; Ertwine, Von; Gedzberg, Katie; Gruda, Maryann; Reinhardt, David; Rodriguez, Efrain
2017-08-01
A system has been developed to electronically tag and track test tubes used in biorepositories. The system is based on a light-activated microtransponder, also known as a "p-Chip." One of the pressing problems with storing and retrieving biological samples at low temperatures is the difficulty of reliably reading the identification (ID) number that links each storage tube with the database containing sample details. Commonly used barcodes are not always reliable at low temperatures because of poor adhesion of the label to the test tube and problems with reading under conditions of frost and ice accumulation. Traditional radio frequency identification (RFID) tags are not cost effective and are too large for this application. The system described herein consists of the p-Chip, p-Chip-tagged test tubes, two ID readers (for single tubes or for racks of tubes), and software. We also describe a robot that is configured for retrofitting legacy test tubes in biorepositories with p-Chips while maintaining the temperature of the sample below -50°C at all times. The main benefits of the p-Chip over other RFID devices are its small size (600 × 600 × 100 μm) that allows even very small tubes or vials to be tagged, low cost due to the chip's unitary construction, durability, and the ability to read the ID through frost and ice.
ERIC Educational Resources Information Center
Balajthy, Ernest
This publication is a collection of eight articles and ten software reviews written by the author for "Micro Missive" since 1984. "Micro Missive" is a quarterly newsletter that has regularly informed International Reading Association members of new developments in computer-based instruction and reading/language arts through articles, software…
Atropos: specific, sensitive, and speedy trimming of sequencing reads.
Didion, John P; Martin, Marcel; Collins, Francis S
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos make it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
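The two core trimming steps named above, adapter clipping and quality trimming, can be sketched in a few lines. This is a deliberately simplified illustration, not the Atropos algorithm; the function name, defaults, and suffix-overlap heuristic are invented for the example:

```python
def trim_read(seq, quals, adapter, qcut=20, min_overlap=3):
    """Toy read trimmer: (1) clip a 3' adapter, matching either a full
    internal occurrence or the longest read-suffix == adapter-prefix
    overlap; (2) trim trailing bases whose quality is below qcut."""
    # Step 1: adapter clipping
    idx = seq.find(adapter)
    if idx >= 0:
        seq, quals = seq[:idx], quals[:idx]
    else:
        for k in range(min(len(adapter), len(seq)), min_overlap - 1, -1):
            if seq.endswith(adapter[:k]):
                seq, quals = seq[:-k], quals[:-k]
                break
    # Step 2: 3' quality trimming
    end = len(seq)
    while end > 0 and quals[end - 1] < qcut:
        end -= 1
    return seq[:end], quals[:end]
```

Production trimmers additionally tolerate mismatches and indels in the adapter match and weigh overlap length against error probability, which is where the accuracy differences measured in the paper come from.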
The Relationship between Computer Games and Reading Achievement
ERIC Educational Resources Information Center
Reed, Tammy Dotson
2010-01-01
Illiteracy rates are increasing. The negative social and economic effects caused by weak reading skills include political unrest, social and health service inequality, poverty, and employment challenges. This quantitative study explored the proposition that the use of computer software games would increase reading achievement in second grade…
DOT National Transportation Integrated Search
1998-05-01
Recent technological advances in computer hardware, software, and image processing have led to the development of automated license plate reading equipment. This equipment has primarily been developed for enforcement and security applications, such a...
ERIC Educational Resources Information Center
Khan, Muhammad Ahmad; Gorard, Stephen
2012-01-01
We report here the overall results of a cluster randomised controlled trial of the use of computer-aided instruction with 672 Year 7 pupils in 23 secondary school classes in the north of England. A new piece of commercial software, claimed on the basis of publisher testing to be effective in improving reading after just six weeks of use in the…
MetaCAA: A clustering-aided methodology for efficient assembly of metagenomic datasets.
Reddy, Rachamalla Maheedhar; Mohammed, Monzoorul Haque; Mande, Sharmila S
2014-01-01
A key challenge in analyzing metagenomics data pertains to assembly of sequenced DNA fragments (i.e. reads) originating from various microbes in a given environmental sample. Several existing methodologies can assemble reads originating from a single genome. However, these methodologies cannot be applied for efficient assembly of metagenomic sequence datasets. In this study, we present MetaCAA - a clustering-aided methodology which helps in improving the quality of metagenomic sequence assembly. MetaCAA initially groups sequences constituting a given metagenome into smaller clusters. Subsequently, sequences in each cluster are independently assembled using CAP3, an existing single genome assembly program. Contigs formed in each of the clusters along with the unassembled reads are then subjected to another round of assembly for generating the final set of contigs. Validation using simulated and real-world metagenomic datasets indicates that MetaCAA aids in improving the overall quality of assembly. A software implementation of MetaCAA is available at https://metagenomics.atc.tcs.com/MetaCAA. Copyright © 2014 Elsevier Inc. All rights reserved.
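The cluster-then-assemble idea can be illustrated with a toy grouping step: reads that share at least one k-mer are merged into one cluster (via union-find), and each cluster would then be assembled independently (by CAP3, in MetaCAA's case). This sketch is not MetaCAA's actual clustering algorithm:

```python
def kmer_clusters(reads, k=4):
    """Group reads that share at least one k-mer (union-find sketch
    of the cluster-then-assemble strategy)."""
    parent = list(range(len(reads)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}                                # k-mer -> first read containing it
    for i, r in enumerate(reads):
        for j in range(len(r) - k + 1):
            kmer = r[j:j + k]
            if kmer in seen:
                union(i, seen[kmer])
            else:
                seen[kmer] = i

    clusters = {}
    for i in range(len(reads)):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())
```

Splitting the read set this way keeps each assembly instance small, which is why per-cluster assembly can improve quality on mixed-community data.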
The Empirical Investigation of Perspective-Based Reading
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.
1995-01-01
We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique. To test the efficacy of PBR, we conducted two runs of a controlled experiment in the environment of the NASA GSFC Software Engineering Laboratory (SEL), using developers from that environment. The subjects read two types of documents, one generic in nature and the other from the NASA domain, using two reading techniques: PBR and their usual technique. The results from these experiments, as well as the experimental design, are presented and analyzed. Where there is a statistically significant distinction, PBR performs better than the subjects' usual technique. However, PBR appears to be more effective on the generic documents than on the NASA documents.
VarDetect: a nucleotide sequence variation exploratory tool
Ngamphiw, Chumpol; Kulawonganunchai, Supasak; Assawamakin, Anunchai; Jenwitheesuk, Ekachai; Tongsima, Sissades
2008-01-01
Background Single nucleotide polymorphisms (SNPs) are the most commonly studied units of genetic variation. The discovery of such variation may help to identify causative gene mutations in monogenic diseases and SNPs associated with predisposing genes in complex diseases. Accurate detection of SNPs requires software that can correctly interpret chromatogram signals to nucleotides. Results We present VarDetect, a stand-alone nucleotide variation exploratory tool that automatically detects nucleotide variation from fluorescence based chromatogram traces. Accurate SNP base-calling is achieved using pre-calculated peak content ratios, and is enhanced by rules which account for common sequence reading artifacts. The proposed software tool is benchmarked against four other well-known SNP discovery software tools (PolyPhred, novoSNP, Genalys and Mutation Surveyor) using fluorescence based chromatograms from 15 human genes. These chromatograms were obtained from sequencing 16 two-pooled DNA samples; a total of 32 individual DNA samples. In this comparison of automatic SNP detection tools, VarDetect achieved the highest detection efficiency. Availability VarDetect is compatible with most major operating systems such as Microsoft Windows, Linux, and Mac OSX. The current version of VarDetect is freely available at . PMID:19091032
Technical Advances and Fifth Grade Reading Comprehension: Do Students Benefit?
ERIC Educational Resources Information Center
Fountaine, Drew
This paper takes a look at some recent studies on utilization of technical tools, primarily personal computers and software, for improving fifth-grade students' reading comprehension. Specifically, the paper asks what benefits an educator can expect students to derive from closed-captioning and computer-assisted reading comprehension products. It…
Computational Fluids Domain Reduction to a Simplified Fluid Network
2012-04-19
readily available read/write software library. Code components from the open source projects OpenFoam and Paraview were explored for their adaptability...to the project. Both Paraview and OpenFoam read polyhedral mesh. OpenFoam does not read results data. Paraview actually allows for user "filters
Supporting Struggling Readers in Secondary School Science Classes
ERIC Educational Resources Information Center
Roberts, Kelly D.; Takahashi, Kiriko; Park, Hye-Jin; Stodden, Robert A.
2012-01-01
Many secondary school students struggle to read complex expository text such as science textbooks. This article provides step-by-step guidance on how to foster expository reading for struggling readers in secondary school science classes. Two strategies are introduced: Text-to-Speech (TTS) Software as a reading compensatory strategy and the…
2010-01-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
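The edge-level posterior that pplacer reports can be illustrated abstractly: given the log-likelihood of attaching a query to each candidate edge of the reference tree, normalizing with a numerically stable log-sum-exp yields a posterior probability per edge. This is a toy sketch assuming a uniform prior over edges, not pplacer's code:

```python
import math

def placement_posteriors(edge_loglikes):
    """Convert per-edge log-likelihoods into posterior placement
    probabilities under a uniform prior, using the log-sum-exp trick
    so that very negative log-likelihoods do not underflow."""
    m = max(edge_loglikes)
    weights = [math.exp(ll - m) for ll in edge_loglikes]
    total = sum(weights)
    return [w / total for w in weights]
```

A flat posterior spread over many edges then signals positional uncertainty for that query, which is exactly the per-edge quantity the software visualizes.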
Evaluation of three read-depth based CNV detection tools using whole-exome sequencing data.
Yao, Ruen; Zhang, Cheng; Yu, Tingting; Li, Niu; Hu, Xuyun; Wang, Xiumin; Wang, Jian; Shen, Yiping
2017-01-01
Whole exome sequencing (WES) has been widely accepted as a robust and cost-effective approach for clinical genetic testing of small sequence variants. Detection of copy number variants (CNV) within WES data has become possible through the development of various algorithms and software programs that utilize read-depth as the main information. The aim of this study was to evaluate three commonly used, WES read-depth based CNV detection programs using high-resolution chromosomal microarray analysis (CMA) as a standard. Paired CMA and WES data were acquired for 45 samples. A total of 219 CNVs (sizes ranging from 2.3 kb to 35 Mb) identified on three CMA platforms (Affymetrix, Agilent and Illumina) were used as standards. CNVs were called from WES data using XHMM, CoNIFER, and CNVnator with modified settings. All three software packages detected an elevated proportion of small variants (< 20 kb) compared to CMA. XHMM and CoNIFER had poor detection sensitivity (22.2 and 14.6%), which correlated with the number of capturing probes involved. CNVnator detected the most variants and had better sensitivity (87.7%); however, it suffered from an overwhelming number of small CNV calls below 20 kb, which required further confirmation. Variant sizes were exaggerated by CNVnator and understated by XHMM and CoNIFER. The low concordance of CNVs detected by the three read-depth based programs indicates the immature status of WES-based CNV detection. Its low sensitivity and uncertain specificity in comparison with CMA-based CNV detection suggest that CMA will continue to play an important role in detecting clinical-grade CNVs in the NGS era, which is largely based on WES.
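At its core, a read-depth CNV caller of the kind evaluated here compares a sample's depth at each capture target against a reference panel and flags outliers. The toy sketch below (invented names and thresholds; none of the three evaluated tools works exactly this way) calls a gain or loss when the z-score crosses a cutoff:

```python
import statistics

def call_cnvs(depths, ref_depths, z_cut=3.0):
    """Toy read-depth CNV caller: for each target, compare the sample's
    depth against a panel of reference depths and flag targets whose
    z-score exceeds +z_cut (gain) or falls below -z_cut (loss)."""
    calls = []
    for i, d in enumerate(depths):
        panel = ref_depths[i]                  # panel depths at target i
        mu = statistics.mean(panel)
        sd = statistics.pstdev(panel) or 1.0   # guard against zero spread
        z = (d - mu) / sd
        if z >= z_cut:
            calls.append((i, "gain", round(z, 2)))
        elif z <= -z_cut:
            calls.append((i, "loss", round(z, 2)))
    return calls
```

The paper's finding that sensitivity tracks the number of capture probes follows directly from this framing: a CNV spanning few targets contributes few z-scores, so it is easily missed.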
Version 2.0 Visual Sample Plan (VSP): UXO Module Code Description and Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Richard O.; Wilson, John E.; O'Brien, Robert F.
2003-05-06
The Pacific Northwest National Laboratory (PNNL) is developing statistical methods for determining the amount of geophysical surveying conducted along transects (swaths) that is needed to achieve specified levels of confidence of finding target areas (TAs) of anomalous readings and possibly unexploded ordnance (UXO) at closed, transferring and transferred (CTT) Department of Defense (DoD) ranges and other sites. The statistical methods developed by PNNL have been coded into the UXO module of the Visual Sample Plan (VSP) software, which is being developed by PNNL with support from the DoD, the U.S. Department of Energy (DOE), and the U.S. Environmental Protection Agency (EPA). (The VSP software and VSP Users Guide (Hassig et al, 2002) may be downloaded from http://dqo.pnl.gov/vsp.) This report describes and documents the statistical methods developed, and the calculations and verification testing that have been conducted to verify that VSP's implementation of these methods is correct and accurate.
Qi, Peng; Gimode, Davis; Saha, Dipnarayan; Schröder, Stephan; Chakraborty, Debkanta; Wang, Xuewen; Dida, Mathews M; Malmberg, Russell L; Devos, Katrien M
2018-06-15
Research on orphan crops is often hindered by a lack of genomic resources. With the advent of affordable sequencing technologies, genotyping an entire genome or, for large-genome species, a representative fraction of the genome has become feasible for any crop. Nevertheless, most genotyping-by-sequencing (GBS) methods are geared towards obtaining large numbers of markers at low sequence depth, which excludes their application in heterozygous individuals. Furthermore, bioinformatics pipelines often lack the flexibility to deal with paired-end reads or to be applied in polyploid species. UGbS-Flex combines publicly available software with in-house Python and Perl scripts to efficiently call SNPs from genotyping-by-sequencing reads irrespective of the species' ploidy level, breeding system and availability of a reference genome. Noteworthy features of the UGbS-Flex pipeline are the ability to use paired-end reads as input, an effective approach to clustering reads across samples with enhanced outputs, and maximization of SNP calling. We demonstrate use of the pipeline for the identification of several thousand high-confidence SNPs with high representation across samples in an F3-derived F2 population in the allotetraploid finger millet. Robust high-density genetic maps were constructed using the time-tested mapping program MAPMAKER, which we upgraded to run efficiently and in a semi-automated manner in a Windows Command Prompt environment. We exploited comparative GBS with one of the diploid ancestors of finger millet to assign linkage groups to subgenomes and demonstrate the presence of chromosomal rearrangements. The paper combines GBS protocol modifications, a novel flexible GBS analysis pipeline, UGbS-Flex, recommendations to maximize SNP identification, updated genetic mapping software, and the first high-density maps of finger millet.
The modules used in the UGbS-Flex pipeline and for genetic mapping were applied to finger millet, an allotetraploid selfing species without a reference genome, as a case study. The UGbS-Flex modules, which can be run independently, are easily transferable to species with other breeding systems or ploidy levels.
Does Whole-Word Multimedia Software Support Literacy Acquisition?
ERIC Educational Resources Information Center
Karemaker, Arjette M.; Pitchford, Nicola J.; O'Malley, Claire
2010-01-01
This study examined the extent to which multimedia features of typical literacy learning software provide added benefits for developing literacy skills compared with typical whole-class teaching methods. The effectiveness of the multimedia software Oxford Reading Tree (ORT) for Clicker in supporting early literacy acquisition was investigated…
Fiction and Non-Fiction Reading and Comprehension in Preferred Books
ERIC Educational Resources Information Center
Topping, Keith J.
2015-01-01
Are the books preferred and most enjoyed by children harder than other books they read? Are non-fiction books read and understood at the same level of difficulty as fiction books? The Accelerated Reader software offers computerized comprehension quizzes of real books individually chosen by children, giving children (and teachers, librarians, and…
Peng, Hao; Yang, Yifan; Zhe, Shandian; Wang, Jian; Gribskov, Michael; Qi, Yuan
2017-01-01
Motivation: High-throughput mRNA sequencing (RNA-Seq) is a powerful tool for quantifying gene expression. Identification of transcript isoforms that are differentially expressed in different conditions, such as in patients and healthy subjects, can provide insights into the molecular basis of diseases. Current transcript quantification approaches, however, do not take advantage of the shared information in the biological replicates, potentially decreasing sensitivity and accuracy. Results: We present a novel hierarchical Bayesian model called Differentially Expressed Isoform detection from Multiple biological replicates (DEIsoM) for identifying differentially expressed (DE) isoforms from multiple biological replicates representing two conditions, e.g. multiple samples from healthy and diseased subjects. DEIsoM first estimates isoform expression within each condition by (1) capturing common patterns from sample replicates while allowing individual differences, and (2) modeling the uncertainty introduced by ambiguous read mapping in each replicate. Specifically, we introduce a Dirichlet prior distribution to capture the common expression pattern of replicates from the same condition, and treat the isoform expression of individual replicates as samples from this distribution. Ambiguous read mapping is modeled as a multinomial distribution, and ambiguous reads are assigned to the most probable isoform in each replicate. Additionally, DEIsoM couples an efficient variational inference and a post-analysis method to improve the accuracy and speed of identification of DE isoforms over alternative methods. Application of DEIsoM to a hepatocellular carcinoma (HCC) dataset identifies biologically relevant DE isoforms. The relevance of these genes/isoforms to HCC is supported by principal component analysis (PCA), read coverage visualization, and the biological literature.
Availability and implementation: The software is available at https://github.com/hao-peng/DEIsoM. Contact: pengh@alumni.purdue.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28595376
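The shared-prior idea in DEIsoM can be illustrated with a forward-sampling sketch: each replicate's isoform proportions are drawn from one condition-level Dirichlet, so replicates share a common pattern while keeping individual variation. The alpha values below are illustrative, and the model itself uses variational inference rather than this forward sampling:

```python
# Toy generative sketch of replicates sharing a Dirichlet prior; alpha values
# are invented for illustration, not estimated parameters from the paper.
import random

def sample_dirichlet(alpha, rng):
    """Draw one probability vector from Dirichlet(alpha) via gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(42)
alpha = [20.0, 5.0, 1.0]   # condition-level prior over 3 isoforms
replicates = [sample_dirichlet(alpha, rng) for _ in range(4)]
for r in replicates:        # each replicate: similar pattern, small deviations
    print([round(p, 3) for p in r])
```

Large alpha values concentrate the replicates around the shared pattern; small ones allow more replicate-to-replicate variation, which is exactly the "common pattern with individual differences" trade-off the abstract describes.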
DMRfinder: efficiently identifying differentially methylated regions from MethylC-seq data.
Gaspar, John M; Hart, Ronald P
2017-11-29
DNA methylation is an epigenetic modification that is studied at a single-base resolution with bisulfite treatment followed by high-throughput sequencing. After alignment of the sequence reads to a reference genome, methylation counts are analyzed to determine genomic regions that are differentially methylated between two or more biological conditions. Even though a variety of software packages is available for different aspects of the bioinformatics analysis, they often produce biased results or impose excessive computational demands. DMRfinder is a novel computational pipeline that identifies differentially methylated regions efficiently. Following alignment, DMRfinder extracts methylation counts and performs a modified single-linkage clustering of methylation sites into genomic regions. It then compares methylation levels using beta-binomial hierarchical modeling and Wald tests. Among its innovative attributes are the analyses of novel methylation sites and methylation linkage, as well as the simultaneous statistical analysis of multiple sample groups. To demonstrate its efficiency, DMRfinder is benchmarked against other computational approaches using a large published dataset. Contrasting two replicates of the same sample yielded minimal genomic regions with DMRfinder, whereas two alternative software packages reported a substantial number of false positives. Further analyses of biological samples revealed fundamental differences between DMRfinder and another software package, despite the fact that they utilize the same underlying statistical basis. For each step, DMRfinder completed the analysis in a fraction of the time required by other software. Among the computational approaches for identifying differentially methylated regions from high-throughput bisulfite sequencing datasets, DMRfinder is the first that integrates all the post-alignment steps in a single package.
Compared to other software, DMRfinder is extremely efficient and unbiased in this process. DMRfinder is free and open-source software, available on GitHub ( github.com/jsh58/DMRfinder ); it is written in Python and R, and is supported on Linux.
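The single-linkage clustering step described above can be sketched as follows. The gap and minimum-site thresholds are illustrative defaults, not DMRfinder's actual parameters:

```python
# Merge sorted methylation-site positions into candidate regions: a site joins
# the current region if it lies within max_gap of the previous site
# (single linkage), and regions with too few sites are discarded.
def cluster_sites(positions, max_gap=100, min_sites=3):
    """Group sorted CpG positions into (start, end, n_sites) regions."""
    regions, current = [], [positions[0]]
    for pos in positions[1:]:
        if pos - current[-1] <= max_gap:
            current.append(pos)            # extend the current region
        else:
            if len(current) >= min_sites:  # close it out if large enough
                regions.append((current[0], current[-1], len(current)))
            current = [pos]                # start a new region
    if len(current) >= min_sites:
        regions.append((current[0], current[-1], len(current)))
    return regions

# The lone site at 900 is dropped; the two dense clusters are kept.
print(cluster_sites([100, 150, 220, 230, 900, 5000, 5040, 5090]))
# [(100, 230, 4), (5000, 5090, 3)]
```

Each retained region would then be passed to the beta-binomial test for differential methylation.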
Project SYNERGY: Software Support for Underprepared Students. Software Implementation Report.
ERIC Educational Resources Information Center
Anandam, Kamala; And Others
Miami-Dade Community College's (MDCC's) implementation and assessment of computer software as a part of Project SYNERGY, a multi-institutional project funded by the International Business Machines (IBM) Corporation designed to seek technological solutions for helping students underprepared in reading, writing and mathematics, is described in this…
31 CFR 560.538 - Authorized transactions necessary and ordinarily incident to publishing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... written publication in electronic format, the addition of embedded software necessary for reading, browsing, navigating, or searching the written publication; and (ii) Exporting embedded software necessary... that the software is designated as “EAR99” under the Export Administration Regulations, 15 CFR parts...
31 CFR 538.529 - Authorized transactions necessary and ordinarily incident to publishing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... written publication in electronic format, the addition of embedded software necessary for reading, browsing, navigating, or searching the written publication; (ii) Exporting embedded software necessary for... software is classified as “EAR 99” under the Export Administration Regulations, 15 CFR parts 730-774 (the...
Assistive Software Tools for Secondary-Level Students with Literacy Difficulties
ERIC Educational Resources Information Center
Lange, Alissa A.; McPhillips, Martin; Mulhern, Gerry; Wylie, Judith
2006-01-01
The present study assessed the compensatory effectiveness of four assistive software tools (speech synthesis, spellchecker, homophone tool, and dictionary) on literacy. Secondary-level students (N = 93) with reading difficulties completed computer-based tests of literacy skills. Training on their respective software followed for those assigned to…
Unlocking Short Read Sequencing for Metagenomics
Rodrigue, Sébastien; Materna, Arne C.; Timberlake, Sonia C.; ...
2010-07-28
We describe an experimental and computational pipeline yielding millions of reads that can exceed 200 bp with quality scores approaching those of traditional Sanger sequencing. The method combines an automatable gel-less library construction step with paired-end sequencing on a short-read instrument. With appropriately sized library inserts, mate-pair sequences can overlap, and we describe the SHERA software package that joins them to form a longer composite read.
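The joining step can be sketched as below. This is a simplified exact-match overlap; SHERA itself scores candidate overlaps using base qualities and tolerates mismatches:

```python
# Join a mate pair into one composite read: reverse-complement the second
# mate, find the longest exact suffix/prefix overlap, and concatenate.
def reverse_complement(seq):
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[b] for b in reversed(seq))

def merge_pair(read1, read2, min_overlap=5):
    """Return the composite read, or None if no sufficient overlap exists."""
    r2 = reverse_complement(read2)
    # Try the longest possible overlap first, down to min_overlap.
    for olap in range(min(len(read1), len(r2)), min_overlap - 1, -1):
        if read1[-olap:] == r2[:olap]:
            return read1 + r2[olap:]
    return None

merged = merge_pair("ACGTACGTTTGCA", "GGATTGCAAACG")
print(merged)  # ACGTACGTTTGCAATCC
```

With real data the overlap length follows from the insert size, which is why the abstract stresses "appropriately sized library inserts".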
Zhao, Shanrong; Prenger, Kurt; Smith, Lance
2013-01-01
RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets. PMID:25937948
TopHat: discovering splice junctions with RNA-Seq
Trapnell, Cole; Pachter, Lior; Salzberg, Steven L.
2009-01-01
Motivation: A new protocol for sequencing the messenger RNA in a cell, known as RNA-Seq, generates millions of short sequence fragments in a single run. These fragments, or ‘reads’, can be used to measure levels of gene expression and to identify novel splice variants of genes. However, current software for aligning RNA-Seq data to a genome relies on known splice junctions and cannot identify novel ones. TopHat is an efficient read-mapping algorithm designed to align reads from an RNA-Seq experiment to a reference genome without relying on known splice sites. Results: We mapped the RNA-Seq reads from a recent mammalian RNA-Seq experiment and recovered more than 72% of the splice junctions reported by the annotation-based software from that study, along with nearly 20 000 previously unreported junctions. The TopHat pipeline is much faster than previous systems, mapping nearly 2.2 million reads per CPU hour, which is sufficient to process an entire RNA-Seq experiment in less than a day on a standard desktop computer. We describe several challenges unique to ab initio splice site discovery from RNA-Seq reads that will require further algorithm development. Availability: TopHat is free, open-source software available from http://tophat.cbcb.umd.edu Contact: cole@cs.umd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19289445
Li, Qiling; Li, Min; Ma, Li; Li, Wenzhi; Wu, Xuehong; Richards, Jendai; Fu, Guoxing; Xu, Wei; Bythwood, Tameka; Li, Xu; Wang, Jianxin; Song, Qing
2014-01-01
Background The use of DNA from archival formalin-fixed and paraffin-embedded (FFPE) tissue for genetic and epigenetic analyses may be problematic, since the DNA is often degraded and only limited amounts may be available. Thus, it is currently not known whether genome-wide methylation can be reliably assessed in DNA from archival FFPE tissue. Methodology/Principal Findings Ovarian tissues, which were obtained and formalin-fixed and paraffin-embedded in either 1999 or 2011, were sectioned and stained with hematoxylin-eosin (H&E). Epithelial cells were captured by laser microdissection, and their DNA subjected to whole-genomic bisulfite conversion, whole-genomic polymerase chain reaction (PCR) amplification, and purification. Sequencing and software analyses were performed to identify the extent of genomic methylation. We observed that 31.7% of sequence reads from the DNA in the 1999 archival FFPE tissue, and 70.6% of the reads from the 2011 sample, could be matched with the genome. Methylation rates of CpG on the Watson and Crick strands were 32.2% and 45.5%, respectively, in the 1999 sample, and 65.1% and 42.7% in the 2011 sample. Conclusions/Significance We have developed an efficient method that allows DNA methylation to be assessed in archival FFPE tissue samples. PMID:25133528
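The CpG methylation rates reported above follow from a simple count over bisulfite reads: an unmethylated C converts to T, a methylated C stays C, so the rate at CpG sites is C / (C + T). A toy single-strand sketch with invented sequences (real pipelines work from genome alignments, per strand):

```python
# Toy bisulfite methylation call: reads are assumed aligned to the reference
# at position 0; only the C of each CpG dinucleotide is inspected.
def cpg_methylation_rate(reference, reads):
    """Fraction of CpG observations that remained C (methylated)."""
    cpg_sites = [i for i in range(len(reference) - 1)
                 if reference[i:i + 2] == "CG"]
    meth = unmeth = 0
    for read in reads:
        for i in cpg_sites:
            if i < len(read):
                if read[i] == "C":      # unconverted: methylated
                    meth += 1
                elif read[i] == "T":    # converted: unmethylated
                    unmeth += 1
    return meth / (meth + unmeth)

ref = "ACGTTCGA"                        # CpG sites at positions 1 and 5
reads = ["ACGTTTGA", "ATGTTCGA", "ACGTTCGA", "ATGTTTGA"]
print(cpg_methylation_rate(ref, reads))  # 0.5
```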
A comprehensive quality control workflow for paired tumor-normal NGS experiments.
Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc
2017-06-01
Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single-sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow, well-known single-sample QC metrics and additional metrics specific for tumor-normal pairs can be calculated. The segmentation into different tools offers high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run under both Linux and Windows. Plotting requires Python 2.7 and matplotlib. The software is available under the 'GNU General Public License version 2' as part of the ngs-bits project: https://github.com/imgag/ngs-bits. christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
MetaSRA: normalized human sample-specific metadata for the Sequence Read Archive.
Bernstein, Matthew N; Doan, AnHai; Dewey, Colin N
2017-09-15
The NCBI's Sequence Read Archive (SRA) promises great biological insight if one could analyze the data in the aggregate; however, the data remain largely underutilized, in part, due to the poor structure of the metadata associated with each sample. The rules governing submissions to the SRA do not dictate a standardized set of terms that should be used to describe the biological samples from which the sequencing data are derived. As a result, the metadata include many synonyms, spelling variants and references to outside sources of information. Furthermore, manual annotation of the data remains intractable due to the large number of samples in the archive. For these reasons, it has been difficult to perform large-scale analyses that study the relationships between biomolecular processes and phenotype across diverse diseases, tissues and cell types present in the SRA. We present MetaSRA, a database of normalized SRA human sample-specific metadata following a schema inspired by the metadata organization of the ENCODE project. This schema involves mapping samples to terms in biomedical ontologies, labeling each sample with a sample-type category, and extracting real-valued properties. We automated these tasks via a novel computational pipeline. The MetaSRA is available at metasra.biostat.wisc.edu via both a searchable web interface and bulk downloads. Software implementing our computational pipeline is available at http://github.com/deweylab/metasra-pipeline. cdewey@biostat.wisc.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
[Comparison of Different Methods of Area Measurement in Irregular Scars].
Ran, D; Li, W J; Sun, Q G; Li, J Q; Xia, Q
2016-10-01
To determine a measurement standard for irregular scar area by comparing the advantages and disadvantages of different methods of measuring the same irregular scar area. The irregular scar area was acquired by digital scanning and measured by the coordinate reading method, the AutoCAD pixel method, the Photoshop lasso pixel method, the Photoshop magic wand filled-pixel method, and the Foxit PDF reading software. Aspects of these methods such as measurement time, repeatability, whether results could be recorded, and whether they could be traced back were compared and analyzed. There was no significant difference in the scar areas obtained by the measurement methods above. However, there were statistically significant differences in measurement time and in repeatability between one and multiple examiners, and only the Foxit PDF reading software allowed measurements to be traced back. The methods above can all be used for measuring scar area, but each has its advantages and disadvantages. It is necessary to develop new measurement software for forensic identification. Copyright© by the Editorial Department of Journal of Forensic Medicine
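All of the pixel-based methods above reduce to the same computation: count the pixels inside the traced scar outline and multiply by the physical area of one pixel. A minimal sketch with a hypothetical mask and scan resolution:

```python
# Area from a binary mask: each 1 marks a pixel inside the traced scar
# outline; mm_per_pixel is the physical side length of one pixel.
def area_from_mask(mask, mm_per_pixel):
    """mask -- 2D list of 0/1 values; returns area in mm^2."""
    pixels = sum(sum(row) for row in mask)
    return pixels * mm_per_pixel ** 2

mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
print(area_from_mask(mask, 0.5))  # 8 pixels * 0.25 mm^2 = 2.0
```

Differences among the software methods come from how the mask is produced (manual lasso vs. color-threshold selection), not from this final arithmetic.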
Using ICT to Foster (Pre) Reading and Writing Skills in Young Children
ERIC Educational Resources Information Center
Voogt, Joke; McKenney, Susan
2008-01-01
This study examines how technology can support the development of emergent reading and writing skills in four- to five-year-old children. The research was conducted with PictoPal, an intervention which features a software package that uses images and text in three main activity areas: reading, writing, and authentic applications. This article…
ERIC Educational Resources Information Center
Hawk, Kim; And Others
This document contains a project report and tutor manual from a demonstration project conducted by the Fayette County Community Action Agency (FCCAA) of Fayette County, Pennsylvania to evaluate the effectiveness of using computers and selected reading software in conjunction with traditional one-on-one tutoring to teach reading skills to…
ERIC Educational Resources Information Center
Cazzell, Samantha; Browarnik, Brooke; Skinner, Amy; Skinner, Christopher; Cihak, David; Ciancio, Dennis; McCurdy, Merilee; Forbes, Bethany
2016-01-01
A multiple-baseline across-students design was used to evaluate the effects of a computer-based flashcard reading (CFR) intervention, developed using Microsoft PowerPoint software, on students' ability to read health-related words within 3 seconds. The students were three adults with intellectual disabilities enrolled in a postsecondary college…
Eye vs. Text Movement: Which Technique Leads to Faster Reading Comprehension?
ERIC Educational Resources Information Center
Abdellah, Antar Solhy
2009-01-01
Eye fixation is a frequent problem that faces foreign language learners and hinders the flow of their reading comprehension. Although students are usually advised to read fast/skim to overcome this problem, eye fixation persists. The present study investigates the effect of using a paper-based program as compared to a computer-based software in…
An Interactive Software Program to Develop Pianists' Sight-Reading Ability
ERIC Educational Resources Information Center
Tsangari, Victoria
2010-01-01
Musical sight-reading, or sight-playing, is defined as "the ability to play music from a printed score or part for the first time without benefit of practice." While this is the most strict definition of the term, also known as "prima vista" (at first sight), some use the term "sight-reading" even if some rehearsal…
Electronic Storybooks: A Constructivist Approach to Improving Reading Motivation in Grade 1 Students
ERIC Educational Resources Information Center
Ciampa, Katia
2012-01-01
This study stemmed from a concern of the perceived decline in students' reading motivation after the early years of schooling. This research investigated the effectiveness of online eBooks on eight grade 1 students' reading motivation. Eight students were given ten 25-minute sessions with the software programs over 15 weeks. Qualitative data were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busbey, A.B.
A number of methods and products, both hardware and software, allow data exchange between Apple Macintosh computers and MS-DOS based systems. These include serial null-modem connections, MS-DOS hardware and/or software emulation, MS-DOS disk-reading hardware, and networking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNally, N.; Liu, Xiang Yang; Choudary, P.V.
1997-01-01
The authors describe a microplate-based high-throughput procedure for rapid assay of the enzyme activities of nitrate reductase and nitrite reductase, using extremely small volumes of reagents. The new procedure offers the advantages of rapidity, small sample size (nanoliter volumes), low cost, and a dramatic increase in the number of samples that can be analyzed simultaneously. Additional advantages can be accessed by using microplate reader application software packages that permit assigning a group type to the wells, recording the data in exportable data files, and exercising the option of using the kinetic or endpoint reading modes. The assay can also be used independently for detecting nitrite residues/contamination in environmental/food samples. 10 refs., 2 figs.
Software Reporting Metrics. Revision 2.
1985-11-01
MITRE Corporation and ESD. Some of the data has been obtained from Dr. Barry Boehm's Software Engineering Economics (Ref. 1). [Remainder of abstract garbled in scanning; recoverable fragments include acknowledgments, glossary entries such as "SP = structured programming", and a permission notice for material reprinted from Barry W. Boehm, Software Engineering Economics, ©1981, p. 122.] For further reading: 1. Boehm, Barry W. Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall, 1981.
Boiler: lossy compression of RNA-seq alignments using coverage vectors
Pritt, Jacob; Langmead, Ben
2016-01-01
We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. PMID:27298258
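The coverage-vector idea at the core of Boiler can be sketched as follows: per-read alignments collapse into a depth-per-position vector, which then compresses well with run-length encoding. Boiler itself stores more than this (e.g. the empirical distributions mentioned above), so this is only the basic data reduction:

```python
# Collapse alignments into a per-base coverage vector, then run-length
# encode it; per-read identity is discarded, as in Boiler.
def coverage_vector(alignments, length):
    """alignments -- (start, end) half-open intervals on one chromosome."""
    cov = [0] * length
    for start, end in alignments:
        for i in range(start, end):
            cov[i] += 1
    return cov

def run_length_encode(cov):
    """Compress a coverage vector into (value, run_length) pairs."""
    runs, prev, count = [], cov[0], 1
    for v in cov[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

cov = coverage_vector([(2, 6), (4, 8)], 10)
print(cov)                     # [0, 0, 1, 1, 2, 2, 1, 1, 0, 0]
print(run_length_encode(cov))  # [(0, 2), (1, 2), (2, 2), (1, 2), (0, 2)]
```

The lossiness is visible here: the two original read intervals cannot be recovered exactly from the vector, only coverage-compatible reconstructions, which is why downstream isoform results change slightly.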
Attending Behaviors of ADHD Children in Math and Reading Using Various Types of Software.
ERIC Educational Resources Information Center
Ford, Mary Jane; And Others
1993-01-01
Compared the effects of using various computer software programs on the attending behavior of children with attention-deficit hyperactive disorder (ADHD). Found that the attention of ADHD children increased while they used software with a game format when animation was not excessive. Other factors affecting nonattending behaviors included the…
Automated Speech Rate Measurement in Dysarthria.
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-06-01
In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition. High correlations between automated and human SR determination were found in the various testing conditions. The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.
The Empirical Investigation of Perspective-Based Reading
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.
1996-01-01
We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique.
ERIC Educational Resources Information Center
Science and Children, 1988
1988-01-01
Reviews six software packages for use with school age children ranging from grade 3 to grade 12. Includes "The Microcomputer Based Lab Project: Motion, Sound"; "Genetics"; "Geologic History"; "The Microscope Simulator"; and "Wiz Works" all for Apple II and "Reading for Information: Level…
What's New in Software? Computer Programs for Unobtrusive, Informal Evaluation.
ERIC Educational Resources Information Center
Hedley, Carolyn
1985-01-01
Teachers can use microcomputers in informal assessment of learning disabled students' academic achievement, math and science progress, reading comprehension, cognitive processes, motivation and social interaction. Selected software for unobtrusive, informal assessment is listed. (CL)
Atmospheric Science Data Center
2013-04-01
... free of charge from JPL, upon completion of a license agreement. The hdfscan software consists of two components: a core hdf file ... at the Jet Propulsion Laboratory. To obtain the license agreement, go to the MISR Science Software web page, read the introductory ...
ERIC Educational Resources Information Center
Pirnay-Dummer, Pablo; Ifenthaler, Dirk
2011-01-01
Our study integrates automated natural language-oriented assessment and analysis methodologies into feasible reading comprehension tasks. With the newly developed T-MITOCAR toolset, prose text can be automatically converted into an association net which has similarities to a concept map. The "text to graph" feature of the software is based on…
ERIC Educational Resources Information Center
Urdegar, Steven M.
2014-01-01
My Virtual Reading Coach (MVRC) is an online program for students who have been identified as struggling readers. It is used as an intervention within the Response to Intervention (RtI) framework, as well as for students with disabilities. The software addresses reading sub-skills (i.e., comprehension, fluency, phonemic awareness, phonics, and…
ERIC Educational Resources Information Center
Zhang, M.
2013-01-01
The abundant scientific resources on the Web provide great opportunities for students to expand their science learning, yet easy access to information does not ensure learning. Prior research has found that middle school students tend to read Web-based scientific resources in a shallow, superficial manner. A software tool was designed to support…
ERIC Educational Resources Information Center
Bennett, Susan V.; Calderone, Cynthia; Dedrick, Robert F.; Gunn, AnnMarie Alberton
2015-01-01
In this mixed method research, we examined the effects of reading and singing software program (RSSP) as a reading intervention on struggling readers' reading achievement as measured by the Florida Comprehensive Assessment Test, the high stakes state test administered in the state of Florida, at one elementary school. Our team defined struggling…
Indel variant analysis of short-read sequencing data with Scalpel
Fang, Han; Bergmann, Ewa A; Arora, Kanika; Vacic, Vladimir; Zody, Michael C; Iossifov, Ivan; O’Rawe, Jason A; Wu, Yiyang; Barron, Laura T Jimenez; Rosenbaum, Julie; Ronemus, Michael; Lee, Yoon-ha; Wang, Zihua; Dikoglu, Esra; Jobanputra, Vaidehi; Lyon, Gholson J; Wigler, Michael; Schatz, Michael C; Narzisi, Giuseppe
2017-01-01
As the second most common type of variation in the human genome, insertions and deletions (indels) have been linked to many diseases, but the discovery of indels of more than a few bases in size from short-read sequencing data remains challenging. Scalpel (http://scalpel.sourceforge.net) is an open-source software for reliable indel detection based on the microassembly technique. It has been successfully used to discover mutations in novel candidate genes for autism, and it is extensively used in other large-scale studies of human diseases. This protocol gives an overview of the algorithm and describes how to use Scalpel to perform highly accurate indel calling from whole-genome and whole-exome sequencing data. We provide detailed instructions for an exemplary family-based de novo study, but we also characterize the other two supported modes of operation: single-sample and somatic analysis. Indel normalization, visualization and annotation of the mutations are also illustrated. Using a standard server, indel discovery and characterization in the exonic regions of the example sequencing data can be completed in ~5 h after read mapping. PMID:27854363
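The indel normalization step illustrated in the protocol conventionally means left-aligning each variant to its left-most equivalent position in the reference. A minimal sketch of left-aligning a deletion (illustrative only, not Scalpel's implementation; the function name is hypothetical):

```python
def left_align_deletion(ref, pos, length):
    """Left-align a deletion of `length` bases starting at 0-based `pos`
    in `ref`: while the base immediately before the deleted block equals
    the block's last base, the deletion can slide one position left
    without changing the resulting sequence."""
    deleted = ref[pos:pos + length]
    while pos > 0 and ref[pos - 1] == deleted[-1]:
        pos -= 1
        deleted = ref[pos] + deleted[:-1]
    return pos, deleted

# Deleting one T from the run in "CGGTTT" is reported at the run's start:
print(left_align_deletion("CGGTTT", 5, 1))   # (3, "T")
```

Deleting at the returned position yields the same sequence as the original call, which is why tools normalize before comparing call sets.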
Use it or Lose it? Wii Brain Exercise Practice and Reading for Domain Knowledge
Ackerman, Phillip L.; Kanfer, Ruth; Calderwood, Charles
2010-01-01
We investigated the training effects and transfer effects associated with two approaches to cognitive activities (so-called “brain training”) that might mitigate age-related cognitive decline. A sample of 78 adults between the ages of 50 and 71 completed 20 one-hour training sessions with the Wii Big Brain Academy software over the course of one month, and in a second month completed 20 one-hour reading sessions with articles on four different current topics (order of assignment was counterbalanced for the participants). An extensive battery of cognitive and perceptual speed ability measures was administered before and after each month of cognitive training activities, along with a battery of domain-knowledge tests. Results indicated substantial improvements on the Wii tasks, somewhat less improvement on the domain-knowledge tests, and practice-related improvements on 6 of the 10 ability tests. However, there was no significant transfer-of-training from either the Wii practice or the reading tasks to measures of cognitive and perceptual speed abilities. Implications of these findings are discussed in terms of adult intellectual development and maintenance. PMID:20822257
How do I resolve problems reading the binary data?
Atmospheric Science Data Center
2014-12-08
... affecting compilation would be differing versions of the operating system and compilers the read software is run on. Big ... Unix machines are Big Endian architecture, while Linux systems are Little Endian architecture. Data generated on a Unix machine are ...
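The byte-order mismatch described here is avoided by reading the binary data with an explicit big-endian format rather than the host's native order. A small sketch using Python's standard struct module (the 32-bit float field is an assumed example, not the actual file layout):

```python
import struct

# Suppose a big-endian (Unix) machine wrote a 32-bit IEEE float:
raw = struct.pack(">f", 273.15)

# On a little-endian (Linux) host, native-order unpacking would garble
# the value; an explicit ">" (big-endian) format reads it correctly on
# any architecture.
value = struct.unpack(">f", raw)[0]
```

The same principle applies in C via byte-swapping functions such as ntohl(), or compiler-provided builtins, before interpreting the bytes.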
Enhancing Literacy Skills through Technology.
ERIC Educational Resources Information Center
Sistek-Chandler, Cynthia
2003-01-01
Discusses how to use technology to enhance literacy skills. Highlights include defining literacy, including information literacy; research to support reading and writing instruction; literacy software; thinking skills; organizational strategies for writing and reading; how technology can individualize literacy instruction; and a new genre of…
MIPE: A metagenome-based community structure explorer and SSU primer evaluation tool
Zhou, Quan
2017-01-01
An understanding of microbial community structure is an important issue in the field of molecular ecology. The traditional molecular method involves amplification of small subunit ribosomal RNA (SSU rRNA) genes by polymerase chain reaction (PCR). However, PCR-based amplicon approaches are affected by primer bias and chimeras. With the development of high-throughput sequencing technology, unbiased SSU rRNA gene sequences can be mined from shotgun sequencing-based metagenomic or metatranscriptomic datasets to obtain a reflection of the microbial community structure in specific types of environment and to evaluate SSU primers. However, the use of short reads obtained through next-generation sequencing for primer evaluation has not been well resolved. The software MIPE (MIcrobiota metagenome Primer Explorer) was developed to adapt numerous short reads from metagenomes and metatranscriptomes. Using metagenomic or metatranscriptomic datasets as input, MIPE extracts and aligns rRNA to reveal detailed information on microbial composition and evaluate SSU rRNA primers. A mock dataset, a real Metagenomics Rapid Annotation using Subsystem Technology (MG-RAST) test dataset, two PrimerProspector test datasets and a real metatranscriptomic dataset were used to validate MIPE. The software calls Mothur (v1.33.3) and the SILVA database (v119) for the alignment and classification of rRNA genes from a metagenome or metatranscriptome. MIPE can effectively extract shotgun rRNA reads from a metagenome or metatranscriptome and is capable of classifying these sequences and exhibiting sensitivity to different SSU rRNA PCR primers. Therefore, MIPE can be used to guide primer design for specific environmental samples. PMID:28350876
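Evaluating SSU primers against extracted rRNA reads hinges on matching degenerate IUPAC bases (e.g., Y = C or T) at the primer site. A simple sketch of such a matcher (illustrative only, not MIPE's actual code):

```python
# IUPAC nucleotide codes mapped to the concrete bases they allow.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def primer_matches(primer, site):
    """True if every base of `site` is allowed by the corresponding
    (possibly degenerate) primer base."""
    return len(primer) == len(site) and all(
        base in IUPAC[p] for p, base in zip(primer, site))
```

A primer "AYG" therefore matches the site "ACG" but not "AGG"; tallying such matches over all extracted rRNA reads gives a per-primer coverage estimate.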
Capturing User Reading Behaviors for Personalized Document Summarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Jiang, Hao; Lau, Francis
2011-01-01
We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations captured during the user's past reading activities. We compare the performance of our algorithm with that of several peer algorithms and software packages. The results of our comparative study show that our algorithm produces better personalized document summaries than all the other methods, in that the summaries it generates better satisfy a user's personal preferences.
SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.
Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao
2014-08-08
Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control low-quality portions of the data affected by technical errors such as air bubbles in the sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data through a user-friendly graphical user interface (GUI) with interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signals of technical errors during the sequencing. The sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function based on sequence read quality (Phred) scores was applied to public whole-human-genome sequencing data, and we showed that the overall mapping quality improved. The detailed data evaluation and cleaning enabled by SUGAR would reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. Therefore, the software will be especially useful for controlling the quality of variant calls for low-population cells, e.g., cancers, in samples affected by technical errors in the sequencing procedure.
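Quality-based cleaning of the kind described can be sketched as a mean-Phred filter over FASTQ records (illustrative only, assuming Sanger +33 encoding; not SUGAR's implementation):

```python
def mean_phred(qual, offset=33):
    """Mean Phred score of a FASTQ quality string (Sanger +33 encoding)."""
    return sum(ord(c) - offset for c in qual) / len(qual)

def filter_reads(records, min_q=20):
    """Keep (name, sequence, quality) records whose mean Phred score
    reaches min_q; low-quality reads are dropped."""
    return [r for r in records if mean_phred(r[2]) >= min_q]

# 'I' encodes Phred 40, '!' encodes Phred 0:
kept = filter_reads([("r1", "ACGT", "IIII"), ("r2", "ACGT", "!!!!")])
```

SUGAR additionally localizes errors spatially (per flowcell subtile), which a purely per-read filter like this cannot do.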
Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu
2017-10-02
During the past decade, the development of high-throughput nucleic acid sequencing and mass spectrometry analysis techniques has enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and the interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon the integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metagenomics, has been established as a routine process to highlight significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerned with the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a graphical comprehensive analysis software package named MetaComp, comprising a series of statistical analysis approaches with visualized results for the comparison of metagenomics and other meta-omics data. The software can read files generated by a variety of upstream programs. After data loading, it offers analyses such as multivariate statistics; hypothesis testing for two-sample, multi-sample and two-group designs; and a novel regression analysis of environmental factors. Here, the regression analysis treats meta-omic features as independent variables and environmental factors as dependent variables. Moreover, MetaComp can automatically choose an appropriate two-group sample test based upon the traits of the input abundance profiles. We further evaluate the performance of this choice, and demonstrate applications to metagenomics, metaproteomics and metabolomics samples. MetaComp, an integrative software package applicable to all meta-omics data, distills the influence of the living environment on the microbial community by regression analysis.
Moreover, since the automatically chosen two-group sample test is verified to perform well, MetaComp is friendly to users without extensive statistical training. These improvements aim to meet the new challenges that the big-data era poses for all meta-omics data. MetaComp is available at: http://cqb.pku.edu.cn/ZhuLab/MetaComp/ and https://github.com/pzhaipku/MetaComp/ .
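Automatic choice between a parametric and a rank-based two-group test can be approximated by inspecting the shape of each group, e.g., preferring Welch's t for roughly symmetric data. The sketch below is a crude stand-in for MetaComp's internal rule (the skewness threshold and function names are illustrative assumptions, not MetaComp's code):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples
    with possibly unequal variances."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

def skewness(x):
    """Sample skewness (third standardized moment)."""
    m, s = mean(x), variance(x) ** 0.5
    return sum(((v - m) / s) ** 3 for v in x) / len(x)

def choose_test(a, b, max_skew=1.0):
    """Pick a test name from the data's traits: Welch's t for roughly
    symmetric groups, a rank-based test otherwise."""
    if max(abs(skewness(a)), abs(skewness(b))) < max_skew:
        return "welch_t"
    return "mann_whitney_u"
```

For identical groups the Welch statistic is 0, and a skew-heavy abundance profile would instead be routed to the rank-based test.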
Earthquake Analysis (EA) Software for The Earthquake Observatories
NASA Astrophysics Data System (ADS)
Yanik, K.; Tezel, T.
2009-04-01
Many software packages can be used to observe seismic signals and locate earthquakes, but some of them are commercial products. For this reason, many seismological observatories develop and use their own seismological software packages, tailored to their seismological networks. In this study, we introduce our software, which can read seismic signals, process them and locate earthquakes. This software is used by the General Directorate of Disaster Affairs Earthquake Research Department Seismology Division (hereafter ERD) and will be improved as new requirements arise. The ERD network consists of 87 seismic stations: 63 equipped with 24-bit digital Guralp CMG-3T seismometers, 16 with analogue short-period Geometrics S-13 seismometers, and 8 with 24-bit digital short-period Geometrics S-13j-DR-24 seismometers. Data are transmitted by satellite from the broadband stations, whereas leased lines are used for the short-period stations. Daily data archive capacity is 4 GB. In large networks, it is very important to observe the seismic signals and locate the earthquakes as soon as possible; this is feasible if the software is developed with the network's properties in mind. When we started to develop software for a large network such as ours, we identified several requirements: all known seismic data formats should be read without any conversion step; only selected stations should be observed, directly on the map; seismic files should be added with an import command; P and S phase readings should be linked to location solutions; and data should be stored in a database, with the program protected by user name and password. In this way, we can prevent data disorder and repeated phase readings. Storing data in a database brings many advantages.
These advantages include easy access to the data from anywhere over an Ethernet connection, publication of bulletins and catalogues on a website, easy sending of short messages (SMS) and e-mail, and reading of data from any location with an Ethernet connection while the results are stored in the same centre. The Earthquake Analysis (EA) program was developed with the above facilities in mind. Microsoft Visual Basic 6.0 and Microsoft GDI tools were used as the basis for program development. The EA program can display five different seismic formats (gcf, suds, seisan, sac, nanometrics-y) without any conversion and offers the usual seismic processing facilities, such as filtering (band-pass, low-pass, high-pass), fast Fourier transform, offset adjustment, etc.
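A band-pass step like the one EA offers can be sketched, very crudely, as the difference of two moving averages: the long window removes slow drift, the short window smooths high-frequency jitter. This is only an illustration of the idea; a real seismological implementation would use a proper FIR/IIR filter design:

```python
def moving_average(x, n):
    """Simple length-n moving average (boxcar low-pass); windows are
    truncated at the start of the signal."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - n + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def band_pass(x, short=3, long=15):
    """Crude band-pass: short-window average minus long-window average.
    A constant (DC) signal maps to zero."""
    lo = moving_average(x, long)
    hi = moving_average(x, short)
    return [h - l for h, l in zip(hi, lo)]
```

Applied to a flat trace, the output is identically zero, confirming that the DC offset (the "offset adjustment" mentioned above) is removed along with the drift.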
RNA-SeQC: RNA-seq metrics for quality control and process optimization.
DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad
2012-06-01
RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.
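Two of the simpler metrics listed, GC content and duplication rate, can be computed directly from read sequences. A minimal sketch (illustrative only, not RNA-SeQC's code, which operates on aligned BAM records):

```python
def gc_content(seq):
    """Fraction of G/C bases in a read sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def duplication_rate(reads):
    """Fraction of reads whose sequence duplicates one seen earlier;
    0.0 means every read is unique."""
    return 1 - len(set(reads)) / len(reads)
```

Aggregating gc_content over all reads and binning it gives the GC-bias profile; a high duplication_rate flags library construction problems such as over-amplification.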
ERIC Educational Resources Information Center
Wood, Clare; Pillinger, Claire; Jackson, Emma
2010-01-01
This paper reports an extended analysis of the study reported in [Wood, C. (2005). "Beginning readers' use of 'talking books' software can affect their reading strategies." "Journal of Research in Reading, 28," 170-182.], in which five and six-year-old children received either six sessions using specially designed talking books or six sessions of…
ERIC Educational Resources Information Center
Hassanpour, Masoumeh; Ghonsooly, Behzad; Nooghabi, Mehdi Jabbari; Shafiee, Mohammad Naser
2017-01-01
This quasi-experimental study examined the relationship between students' metacognitive awareness and willingness to read English medical texts. So, a model was proposed and tested using structural equation modeling (SEM) with R software. Participants included 98 medical students of two classes. One class was assigned as the control group and the…
ERIC Educational Resources Information Center
Teeter, Phyllis Anne; Smith, Philip L.
The final report of the 2-year project describes the development and validation of microcomputer software to help assess reading disabled elementary grade children and to provide basic reading instruction. Accomplishments of the first year included: design of the STAR Neuro-Cognitive Assessment Program which includes a reproduction of…
The mission events graphic generator software: A small tool with big results
NASA Technical Reports Server (NTRS)
Lupisella, Mark; Leibee, Jack; Scaffidi, Charles
1993-01-01
Utilization of graphics has long been a useful methodology for many aspects of spacecraft operations. A personal computer based software tool that implements straight-forward graphics and greatly enhances spacecraft operations is presented. This unique software tool is the Mission Events Graphic Generator (MEGG) software which is used in support of the Hubble Space Telescope (HST) Project. MEGG reads the HST mission schedule and generates a graphical timeline.
Process Tailoring and the Software Capability Maturity Model(sm).
1995-11-01
A Discipline For Software Engineering, Addison-Wesley, 1995; Humphrey. This book summarizes the costs and benefits of a Personal Software Process (PSP)... 1994. [Humphrey95] Humphrey, Watts S. A Discipline For Software Engineering. Reading, MA: Addison-Wesley Publishing Company, 1995. CMU/SEI-94-TR-24
1997-12-01
Watts Humphrey and is described in his book A Discipline for Software Engineering [Humphrey 95]. Its intended use is to guide the planning and... Pat; Humphrey, Watts S.; Khajenoori, Soheil; Macke, Susan; & Matvya, Annette. "Introducing the Personal Software Process: Three Industry Case... [Humphrey 95] Humphrey, Watts S. A Discipline for Software Engineering. Reading, MA: Addison-Wesley, 1995. [Mauchly 40] Mauchly, J.W. "Significance
Software to Promote Young Children's Growth in Literacy: A Comparison of Online and Offline Formats
ERIC Educational Resources Information Center
Wood, Eileen; Grant, Amy K.; Gottardo, Alexandra; Savage, Robert; Evans, Mary Ann
2017-01-01
The primary goal of this research was to extend our understanding of the strengths and weaknesses inherent in online and offline early literacy software programs designed for young learners. A taxonomy of reading skills was used to contrast online software with offline closed system (compact disc) based programs with respect to number of skills…
ERIC Educational Resources Information Center
Biggs, Marie C.; Homan, Susan P.; Dedrick, Robert; Minick, Vanessa; Rasinski, Timothy
2008-01-01
Software that teaches users to sing in tune and in rhythm while providing real-time pitch tracking was used in a study of struggling middle school readers. The software, Carry-a-Tune (CAT) was originally developed to improve singing; however, since it involves a repeated reading format, we used it to determine its effect on comprehension and…
Software for Alignment of Segments of a Telescope Mirror
NASA Technical Reports Server (NTRS)
Hall, Drew P.; Howard, Richard T.; Ly, William C.; Rakoczy, John M.; Weir, John M.
2006-01-01
The Segment Alignment Maintenance System (SAMS) software is designed to maintain the overall focus and figure of the large segmented primary mirror of the Hobby-Eberly Telescope. This software reads measurements made by sensors attached to the segments of the primary mirror and from these measurements computes optimal control values to send to actuators that move the mirror segments.
Bi, Lei; Guan, Chun-jie; Yang, Guan-e; Yang, Fei; Yan, Hong-yu; Li, Qing-shan
2016-04-01
The purple photosynthetic bacterium Rhodopseudomonas palustris has been widely applied to enhance the therapeutic effects of traditional Chinese medicine using novel biotransformation technology. However, comprehensive studies of the R. palustris biotransformation mechanism are rare. Therefore, investigation of the expression patterns of genes involved in metabolic pathways that are active during the biotransformation process is essential to elucidate this complicated mechanism. To promote further study of the biotransformation of R. palustris, we assembled all R. palustris transcripts using Trinity software and performed differential expression analysis of the resulting unigenes. A total of 9725, 7341 and 10,963 unigenes were obtained by assembling the alpha-rhamnetin-3-rhamnoside-treated R. palustris (RPB) reads, control R. palustris (RPS) reads and combined RPB&RPS reads, respectively. A total of 9971 unigenes assembled from the RPB&RPS reads were mapped to the nr, nt, Swiss-Prot, Gene Ontology (GO), Clusters of Orthologous Groups (COGs) and Kyoto Encyclopedia of Genes and Genomes (KEGG) (E-value <0.00001) databases using BLAST software. A total of 3360 unique differentially expressed genes (DEGs) in RPB versus RPS were identified, among which 922 unigenes were up-regulated and 2438 were down-regulated. The unigenes were mapped to the KEGG database, resulting in the identification of 7676 pathways among all annotated unigenes and 2586 pathways among the DEGs. Some sets of functional unigenes annotated to important metabolic pathways and environmental information processing were differentially expressed between the RPS and RPB samples, including those involved in energy metabolism (18.4% of total DEGs), carbohydrate metabolism (36.0% of total DEGs), ABC transport (6.0% of total DEGs), the two-component system (8.6% of total DEGs), cell motility (4.3% of total DEGs) and the cell cycle (1.5% of total DEGs). 
We also identified 19 transcripts annotated as hydrolytic enzymes and other enzymes involved in ARR catabolism in R. palustris. We present the first comparative transcriptome profiles of RPB and RPS samples to facilitate elucidation of the molecular mechanism of biotransformation in R. palustris. Furthermore, we propose two putative ARR biotransformation mechanisms in R. palustris. These analytical results represent a useful genomic resource for in-depth research into the molecular basis of biotransformation and genetic modification in R. palustris. Copyright © 2016 Elsevier GmbH. All rights reserved.
Transient upset models in computer systems
NASA Technical Reports Server (NTRS)
Mason, G. M.
1983-01-01
Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.
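The domain-based detection described here amounts to a range check per bus access: an access whose address falls outside its declared domain is a candidate transient upset. A minimal sketch (the region map and function name are illustrative assumptions, not the paper's monitor design):

```python
def classify_access(addr, kind, regions):
    """Flag a bus access outside its declared address domain.
    `regions` maps an access kind ('exec', 'read', 'write') to an
    inclusive (low, high) address range; an out-of-range access is
    reported as a candidate upset."""
    low, high = regions[kind]
    return "ok" if low <= addr <= high else "upset"

# Hypothetical memory map for a small 8-bit system:
REGIONS = {"read":  (0x0000, 0x3FFF),
           "write": (0x4000, 0x7FFF),
           "exec":  (0x8000, 0xFFFF)}
```

An external monitor built this way observes the address bus transparently, without instrumenting the software under observation.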
Digital Equipment Corporation's CDROM Software and Database Publications.
ERIC Educational Resources Information Center
Adams, Michael Q.
1986-01-01
Acquaints information professionals with Digital Equipment Corporation's compact optical disk read-only-memory (CDROM) search and retrieval software and growing library of CDROM database publications (COMPENDEX, Chemical Abstracts Services). Highlights include MicroBASIS, boolean operators, range operators, word and phrase searching, proximity…
Software for Automated Reading of STEP Files by I-DEAS(trademark)
NASA Technical Reports Server (NTRS)
Pinedo, John
2003-01-01
A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.
Boiler: lossy compression of RNA-seq alignments using coverage vectors.
Pritt, Jacob; Langmead, Ben
2016-09-19
We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
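A genomic coverage vector of the kind Boiler retains can be built from alignment intervals with a difference array, so each alignment costs two updates regardless of its length. A minimal sketch (not Boiler's implementation):

```python
def coverage_vector(length, alignments):
    """Per-base coverage over a contig of `length` bases, given
    half-open (start, end) alignment intervals, via a difference
    array followed by a prefix sum."""
    diff = [0] * (length + 1)
    for start, end in alignments:
        diff[start] += 1   # coverage rises where an alignment begins
        diff[end] -= 1     # and falls just past where it ends
    cov, running = [], 0
    for d in diff[:length]:
        running += d
        cov.append(running)
    return cov
```

Storing only this vector (plus summary distributions) is what makes the compression lossy: individual read identities cannot be recovered, but coverage-driven analyses such as isoform quantification still can be.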
Marchant, A; Mougel, F; Almeida, C; Jacquin-Joly, E; Costa, J; Harry, M
2015-04-01
High throughput sequencing (HTS) provides new research opportunities for work on non-model organisms, such as differential expression studies between populations exposed to different environmental conditions. However, such transcriptomic studies first require the production of a reference assembly. The choice of sampling procedure, sequencing strategy and assembly workflow is crucial. To develop a reliable reference transcriptome for Triatoma brasiliensis, the major Chagas disease vector in Northeastern Brazil, different de novo assembly protocols were generated using various datasets and software. Both 454 and Illumina sequencing technologies were applied to RNA extracted from antennae and mouthparts of single or pooled individuals. The 454 library yielded 278 Mb. Fifteen Illumina libraries were constructed and yielded nearly 360 million RNA-seq single reads and 46 million RNA-seq paired-end reads, for nearly 45 Gb. For the 454 reads, we used three assemblers, Newbler, CAP3 and/or MIRA, and for the Illumina reads, the Trinity assembler. Ten assembly workflows were compared using these programs separately or in combination. To compare the assemblies obtained, quantitative and qualitative criteria were used, including contig length, N50, contig number and the percentage of chimeric contigs. Completeness of the assemblies was estimated using the CEGMA pipeline. The best assembly (57,657 contigs, completeness of 80%, <1% chimeric contigs) was a hybrid assembly, leading us to recommend (1) using a single individual with a large representation of biological tissues, (2) merging both long reads and short paired-end Illumina reads, and (3) combining several assemblers in order to exploit the specific advantages of each.
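Among the comparison criteria named above, N50 is the standard length statistic: the smallest contig length such that contigs at least that long cover half or more of the total assembly span. A minimal sketch:

```python
def n50(lengths):
    """N50 of an assembly: walk contig lengths from longest to
    shortest and return the length at which the running total
    first reaches half of the assembly span."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(n50([100, 50, 30, 20]))  # 100
```

Note that N50 rewards long contigs but says nothing about correctness, which is why chimera percentage and CEGMA completeness are reported alongside it.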
The Effect of Interactive CD-ROM/Digitized Audio Courseware on Reading among Low-Literate Adults.
ERIC Educational Resources Information Center
Gretes, John A.; Green, Michael
1994-01-01
Compares a multimedia adult literacy instructional course, Reading to Educate and Develop Yourself (READY), to traditional classroom instruction by studying effects of replacing conventional learning tools with computer-assisted instruction (CD-ROMs and audio software). Results reveal that READY surpassed traditional instruction for virtually…
Learning, Change, and the Utopia of Play
ERIC Educational Resources Information Center
Moulthrop, Stuart
2007-01-01
This article looks at some of the rhetoric surrounding video games and other forms of interactive software as additions or alternatives to school curricula. It focuses particularly on the need to articulate ways to "read" videogames in order to achieve significant cultural impact. Noting that reading, even as metaphor, tends to invoke…
Omics Metadata Management Software (OMMS).
Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo
2015-01-01
Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and the sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. The provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, and is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554
Orthographic learning and the role of text-to-speech software in Dutch disabled readers.
Staels, Eva; Van den Broeck, Wim
2015-01-01
In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating Share's self-teaching paradigm. A total of 65 disabled Dutch readers were asked to read eight stories containing embedded homophonic pseudoword targets (e.g., Blot/Blod), with or without the support of text-to-speech software. The amount of orthographic learning was assessed 3 or 7 days later by three measures of orthographic learning. First, the results supported the presence of orthographic learning during independent silent reading by demonstrating that target spellings were correctly identified more often, named more quickly, and spelled more accurately than their homophone foils. Our results support the hypothesis that all readers, even poor readers of transparent orthographies, are capable of developing word-specific knowledge. Second, a negative effect of text-to-speech software on orthographic learning was demonstrated in this study. This negative effect was interpreted as the consequence of passively listening to the auditory presentation of the text. We clarify how these results can be interpreted within current theoretical accounts of orthographic learning and briefly discuss implications for remedial interventions. © Hammill Institute on Disabilities 2013.
Software Requirements for the A-7E Aircraft.
1992-08-31
[Extraction fragments from the document scan: data-word bit-assignment tables (DIWI, DIW3) covering the ASCU, BITE fail-safe, TACAN parity, and AGE test equipment; Chapter 2, Section 2.1.5, "Armament Station Control Unit (ASCU)"; authors Alspaugh, Faulk, Britton, Parker, Parnas, and Shore.]
What's New in Software? Hot New Tool: The Hypertext.
ERIC Educational Resources Information Center
Hedley, Carolyn N.
1989-01-01
This article surveys recent developments in hypertext software, a highly interactive nonsequential reading/writing/database approach to research and teaching that allows paths to be created through related materials including text, graphics, video, and animation sources. Described are uses, advantages, and problems of hypertext. (PB)
The De Novo Transcriptome and Its Functional Annotation in the Seed Beetle Callosobruchus maculatus.
Sayadi, Ahmed; Immonen, Elina; Bayram, Helen; Arnqvist, Göran
2016-01-01
Despite their unparalleled biodiversity, the genomic resources available for beetles (Coleoptera) remain relatively scarce. We present an integrative and high quality annotated transcriptome of the beetle Callosobruchus maculatus, an important and cosmopolitan agricultural pest as well as an emerging model species in ecology and evolutionary biology. Using Illumina sequencing technology, we sequenced 492 million read pairs generated from 51 samples of different developmental stages (larvae, pupae and adults) of C. maculatus. Reads were de novo assembled using the Trinity software into a single combined assembly as well as into three separate assemblies based on data from the different developmental stages. The combined assembly generated 218,192 transcripts and 145,883 putative genes. Putative genes were annotated with the Blast2GO software and the Trinotate pipeline. In total, 33,216 putative genes were successfully annotated using Blastx against the Nr (non-redundant) database and 13,382 were assigned to 34,100 Gene Ontology (GO) terms. We classified 5,475 putative genes into Clusters of Orthologous Groups (COG) and 116 metabolic pathway maps were predicted based on the annotation. Our analyses suggested that the transcriptional specificity increases with ontogeny. For example, out of 33,216 annotated putative genes, 51 were only expressed in larvae, 63 only in pupae and 171 only in adults. Our study illustrates the importance of including samples from several developmental stages when the aim is to provide an integrative and high quality annotated transcriptome. Our results will represent an invaluable resource for those working with the ecology, evolution and pest control of C. maculatus, as well as for comparative studies of the transcriptomics and genomics of beetles more generally.
Illeghems, Koen; De Vuyst, Luc; Papalexandratou, Zoi; Weckx, Stefan
2012-01-01
This is the first report on the phylogenetic analysis of the community diversity of a single spontaneous cocoa bean box fermentation sample through a metagenomic approach involving 454 pyrosequencing. Several sequence-based and composition-based taxonomic profiling tools were used and evaluated to avoid software-dependent results and their outcome was validated by comparison with previously obtained culture-dependent and culture-independent data. Overall, this approach revealed a wider bacterial (mainly γ-Proteobacteria) and fungal diversity than previously found. Further, the use of a combination of different classification methods, in a software-independent way, helped to understand the actual composition of the microbial ecosystem under study. In addition, bacteriophage-related sequences were found. The bacterial diversity depended partially on the methods used, as composition-based methods predicted a wider diversity than sequence-based methods, and as classification methods based solely on phylogenetic marker genes predicted a more restricted diversity compared with methods that took all reads into account. The metagenomic sequencing analysis identified Hanseniaspora uvarum, Hanseniaspora opuntiae, Saccharomyces cerevisiae, Lactobacillus fermentum, and Acetobacter pasteurianus as the prevailing species. Also, the presence of occasional members of the cocoa bean fermentation process was revealed (such as Erwinia tasmaniensis, Lactobacillus brevis, Lactobacillus casei, Lactobacillus rhamnosus, Lactococcus lactis, Leuconostoc mesenteroides, and Oenococcus oeni). Furthermore, the sequence reads associated with viral communities were of a restricted diversity, dominated by Myoviridae and Siphoviridae, and reflecting Lactobacillus as the dominant host. To conclude, an accurate overview of all members of a cocoa bean fermentation process sample was revealed, indicating the superiority of metagenomic sequencing over previously used techniques.
Wu, Zhenyang; Fu, Yuhua; Cao, Jianhua; Yu, Mei; Tang, Xiaohui; Zhao, Shuhong
2014-01-01
MicroRNAs (miRNAs) play a key role in many biological processes by regulating gene expression at the post-transcriptional level. A number of miRNAs have been identified from livestock species. However, compared with other animals, such as pigs and cows, the number of miRNAs identified in goats is quite low, particularly in hair follicles. In this study, to investigate the functional roles of miRNAs in hair follicles of goats with different coat colors, we sequenced miRNAs from two hair follicle samples (white and black) using Solexa sequencing. A total of 35,604,016 reads were obtained, which included 30,878,637 clean reads (86.73%). MiRDeep2 software identified 214 miRNAs. Among them, 205 were conserved among species and nine were novel miRNAs. Furthermore, DESeq software identified six differentially expressed miRNAs. Quantitative PCR confirmed differential expression of two miRNAs, miR-10b and miR-211. KEGG pathways were analyzed using the DAVID website for the predicted target genes of the differentially expressed miRNAs. Several signaling pathways, including the Notch and MAPK pathways, may affect the process of coat color formation. Our study showed that the identified miRNAs might play an essential role in black and white follicle formation in goats. PMID:24879525
Hiscox, Lucy; Leonavičiūtė, Erika; Humby, Trevor
2014-08-01
Dyslexia is associated with difficulties in language-specific skills such as spelling, writing and reading; the difficulty in acquiring literacy skills is not a result of low intelligence or the absence of learning opportunity, but these issues will persist throughout life and could affect long-term education. Writing is a complex process involving many different functions, integrated by the working memory system; people with dyslexia have a working memory deficit, which means that concentration on writing quality may be detrimental to understanding. We confirm impaired working memory in a sample of university students with (compensated) dyslexia, and using a within-subject design with three test conditions, we show that these participants demonstrated better understanding of a piece of text if they had used automatic spelling correction software during a dictation/transcription task. We hypothesize that the use of the autocorrecting software reduced demand on working memory, by allowing word writing to be more automatic, thus enabling better processing and understanding of the content of the transcriptions and improved recall. Long-term and regular use of autocorrecting assistive software should be beneficial for people with and without dyslexia and may improve confidence, written work, academic achievement and self-esteem, which are all affected in dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Artificial Intelligence Software for Assessing Postural Stability
NASA Technical Reports Server (NTRS)
Lieberman, Erez; Forth, Katharine; Paloski, William
2013-01-01
A software package reads and analyzes pressure distributions from sensors mounted under a person's feet. Pressure data from sensors mounted in shoes, or in a platform, can be used to provide a description of postural stability (assessing competence to deficiency) and enables the determination of the person's present activity (running, walking, squatting, falling). This package has three parts: a preprocessing algorithm for reading input from pressure sensors; a Hidden Markov Model (HMM), which is used to determine the person's present activity and level of sensing-motor competence; and a suite of graphical algorithms, which allows visual representation of the person's activity and vestibular function over time.
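The package's middle stage is a Hidden Markov Model that infers the person's activity from the pressure readings. As a minimal sketch of how such an HMM decodes an activity sequence, the toy below runs Viterbi decoding over two invented states ("standing", "walking") and discretized centre-of-pressure observations; all states, observations, and probabilities here are illustrative assumptions, not values from the NASA software.

```python
# Toy HMM activity decoder (illustrative only; not the NASA package).
# States/observations/probabilities below are invented for the sketch.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = V[t][last][1]
        path.append(last)
    return path[::-1]

states = ("standing", "walking")
start_p = {"standing": 0.6, "walking": 0.4}
trans_p = {"standing": {"standing": 0.8, "walking": 0.2},
           "walking":  {"standing": 0.3, "walking": 0.7}}
# Observations: "steady" vs "oscillating" centre-of-pressure readings.
emit_p = {"standing": {"steady": 0.9, "oscillating": 0.1},
          "walking":  {"steady": 0.2, "oscillating": 0.8}}

activity = viterbi(("steady", "oscillating", "oscillating"),
                   states, start_p, trans_p, emit_p)
# activity -> ["standing", "walking", "walking"]
```

In the real system the emission model would be trained on labeled pressure-sensor data rather than hand-set as above.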
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Glass Gene V.; Evartt, David L.; Emery, Patrick J.
This manual and the accompanying software are intended to provide a step-by-step guide to conducting a meta-analytic study along with references for further reading and free high-quality software, "Meta-Stat." "Meta-Stat" is a comprehensive package designed to help in the meta-analysis of research studies in the social and behavioral sciences.…
The Source to S2K Conversion System.
1978-12-01
management system provides. As for all software production, the cost of writing this program is high, particularly considering it may be executed only...research, and finally, implement the system using disciplined, structured software engineering principles. In order to properly document how these...complete read step is required (as done by the Michigan System and EXPRESS) or software support outside the conversion system (as in CODS) is required
Software Engineering Education Directory. Software Engineering Curriculum Project
1991-05-01
1986 with a questionnaire mailed to schools selected from Peterson's Graduate Programs in Engineering and Applied Sciences 1986. We contacted schools...the publication more complete. To discuss any issues related to this report, please contact: Education Program, Software Engineering Institute...considered to be required course reading. How to Use This Section This portion of the directory is organized by state (in the U.S.), province (in
Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.
Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M
2015-01-01
The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native architecture, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native, GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
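The core of any alignment viewer is placing each read at its mapped coordinate under the reference and flagging mismatches. The sketch below is not Alview's code (Alview is written in C and parses SAM/BAM); it is a stdlib-only toy where reads are given as hypothetical (position, sequence) pairs, rendering matches as '.' and mismatches as the read base.

```python
# Minimal text "pileup" renderer (illustrative; not Alview itself).
# Reads are assumed pre-parsed into (0-based position, sequence) pairs
# that fit within the reference; real viewers decode SAM/BAM records.

def render_pileup(reference, reads):
    """Return text lines: the reference, then one line per read,
    with '.' for a match and the read base for a mismatch."""
    lines = [reference]
    for pos, seq in reads:
        row = []
        for i, base in enumerate(seq):
            row.append("." if base == reference[pos + i] else base)
        lines.append(" " * pos + "".join(row))
    return lines

ref = "ACGTACGTAC"
reads = [(0, "ACGT"), (2, "GTTC"), (4, "ACGTAC")]
for line in render_pileup(ref, reads):
    print(line)
```

The second read carries one mismatch (T against the reference A at position 4), which shows up as the only letter in its row.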
ERIC Educational Resources Information Center
Allen, Denise
1994-01-01
Reviews three educational computer software products: (1) a compact disc-read only memory (CD-ROM) bundle of five mathematics programs from the Apple Education Series; (2) "Sammy's Science House," with science activities for preschool through second grade (Edmark); and (3) "The Cat Came Back," an interactive CD-ROM game designed to build language…
A Computer Supported Teamwork Project for People with a Visual Impairment.
ERIC Educational Resources Information Center
Hale, Greg
2000-01-01
Discussion of the use of computer supported teamwork (CSTW) in team-based organizations focuses on problems that visually impaired people have reading graphical user interface software via screen reader software. Describes a project that successfully used email for CSTW, and suggests issues needing further research. (LRW)
No-Fail Software Gifts for Kids.
ERIC Educational Resources Information Center
Buckleitner, Warren
1996-01-01
Reviews children's software packages: (1) "Fun 'N Games"--nonviolent games and activities; (2) "Putt-Putt Saves the Zoo"--matching, logic games, and animal facts; (3) "Big Job"--12 logic games with video from job sites; (4) "JumpStart First Grade"--15 activities introducing typical school lessons; and (5) "Read, Write, & Type!"--progressively…
Voice Recognition Software Accuracy with Second Language Speakers of English.
ERIC Educational Resources Information Center
Coniam, D.
1999-01-01
Explores the potential of the use of voice-recognition technology with second-language speakers of English. Involves the analysis of the output produced by a small group of very competent second-language subjects reading a text into the voice recognition software Dragon Systems "Dragon NaturallySpeaking." (Author/VWL)
Trends in Literacy Software Publication and Marketing: Multicultural Themes.
ERIC Educational Resources Information Center
Balajthy, Ernest
This article provides data and discussion of multicultural theme-related issues arising from analysis of a detailed database of commercial software products targeted to reading and literacy education. The database consisted of 1152 titles, representing the offerings of 104 publishers and distributors. Of the titles, 62 were identified as having…
Nonfiction Reading Comprehension in Middle School: Exploring an Interactive Software Approach
ERIC Educational Resources Information Center
Wolff, Evelyn S.; Isecke, Harriet; Rhoads, Christopher; Madura, John P.
2013-01-01
The struggles of students in the United States to comprehend non-fiction science text are well documented. Middle school students, in particular, have minimal instruction in comprehending nonfiction and flounder on assessments. This article describes the development process of the Readorium software, an interactive web-based program being…
How do I obtain read software for data?
Atmospheric Science Data Center
2015-11-30
... product landing page. Simply locate and select the project link from the Projects Supported page for the project that you would ... page where you can access it if it is available; note that a missing tab on the product page indicates that there is no software specific to ...
Taylor, Stuart A; Charman, Susan C; Lefere, Philippe; McFarland, Elizabeth G; Paulson, Erik K; Yee, Judy; Aslam, Rizwan; Barlow, John M; Gupta, Arun; Kim, David H; Miller, Chad M; Halligan, Steve
2008-02-01
To prospectively compare the diagnostic performance and time efficiency of both second and concurrent computer-aided detection (CAD) reading paradigms for retrospectively obtained computed tomographic (CT) colonography data sets by using consensus reading (three radiologists) of colonoscopic findings as a reference standard. Ethical permission, HIPAA compliance (for U.S. institutions), and patient consent were obtained from all institutions for use of CT colonography data sets in this study. Ten radiologists each read 25 CT colonography data sets (12 men, 13 women; mean age, 61 years) containing 69 polyps (28 were 1-5 mm, 41 were ≥6 mm) by using workstations integrated with CAD software. Reading was randomized to either "second read" CAD (applied only after initial unassisted assessment) or "concurrent read" CAD (applied at the start of assessment). Data sets were reread 6 weeks later by using the opposing paradigm. Polyp sensitivity and reading times were compared by using multilevel logistic and linear regression, respectively. Receiver operating characteristic (ROC) curves were generated. Compared with the unassisted read, odds of improved polyp (≥6 mm) detection were 1.5 (95% confidence interval [CI]: 1.0, 2.2) and 1.3 (95% CI: 0.9, 1.9) by using CAD as second and concurrent reader, respectively. Detection odds by using CAD concurrently were 0.87 (95% CI: 0.59, 1.3) and 0.76 (95% CI: 0.57, 1.01) those of second read CAD, excluding and including polyps 1-5 mm, respectively. The concurrent read took 2.9 minutes (95% CI: -3.8, -1.9) less than did the second read. The mean areas under the ROC curve for the unassisted read, second read CAD, and concurrent read CAD were 0.83 (95% CI: 0.78, 0.87), 0.86 (95% CI: 0.82, 0.90), and 0.88 (95% CI: 0.83, 0.92), respectively. CAD is more time efficient when used concurrently than when used as a second reader, with similar sensitivity for polyps 6 mm or larger.
However, use of second read CAD maximizes sensitivity, particularly for smaller lesions. © RSNA, 2007.
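The comparison above rests on two simple quantities: per-paradigm sensitivity (polyps detected over polyps present) and the odds ratio of detection between paradigms. The back-of-envelope sketch below computes both; the detection counts are invented for illustration and are not the study's data.

```python
# Sensitivity and detection odds ratio between two reading paradigms.
# Counts below are hypothetical, chosen only to illustrate the arithmetic.

def sensitivity(detected, total):
    """Fraction of true lesions that were detected."""
    return detected / total

def odds_ratio(det_a, total_a, det_b, total_b):
    """Odds of detection under paradigm A relative to paradigm B."""
    odds_a = det_a / (total_a - det_a)
    odds_b = det_b / (total_b - det_b)
    return odds_a / odds_b

# Hypothetical: of 41 polyps >=6 mm, 30 found unassisted,
# 34 found with second-read CAD.
unassisted = sensitivity(30, 41)
second_read = sensitivity(34, 41)
ratio = odds_ratio(34, 41, 30, 41)   # odds of detection, CAD vs unassisted
```

A ratio above 1 means the CAD-assisted paradigm detected lesions at higher odds, matching the direction of the study's reported 1.5 odds for second-read CAD.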
Adolescents and "Autographics": Reading and Writing Coming-of-Age Graphic Novels
ERIC Educational Resources Information Center
Hughes, Janette Michelle; King, Alyson; Perkins, Peggy; Fuke, Victor
2011-01-01
Students at two different sites (a 12th-grade English class focused on workplace preparation and an alternative program for students who had been expelled from school) read graphic novels and, using ComicLife software, created their own graphic sequences called "autographics" based on their personal experiences. The authors explore how…
Reading, Writing, and Documentation and Managing the Development of User Documentation.
ERIC Educational Resources Information Center
Lindberg, Wayne; Hoffman, Terrye
1987-01-01
The first of two articles addressing the issue of user documentation for computer software discusses the need to teach users how to read documentation. The second presents a guide for writing documentation that is based on the instructional systems design model, and makes suggestions for the desktop publishing of user manuals. (CLB)
Making English Accessible: Using ELECTRONIC NETWORKS FOR INTERACTION (ENFI) in the Classroom.
ERIC Educational Resources Information Center
Peyton, Joy Kreeft; French, Martha
Electronic Networks for Interaction (ENFI), an instructional tool for teaching reading and writing using computer technology, improves the English reading and writing of deaf students at all educational levels. Chapters address these topics: (1) the origins of the technique; (2) how ENFI works in the classroom and laboratory (software, lab…
ERIC Educational Resources Information Center
Balajthy, Ernest; Reuber, Kristin; Damon, Corrine J.
A study investigated software choices of graduate-level clinicians in a university reading clinic to determine computer use and effectiveness in literacy instruction. The clinic involved students of varying ability, ages 7-12, using 24 Power Macintosh computers equipped with "ClarisWorks," "Kid Pix," "Student Writing…
Expert Systems: A Challenge for the Reading Profession.
ERIC Educational Resources Information Center
Balajthy, Ernest
Expert systems are designed to imitate the reasoning of a human expert in a content area field. Designed to be advisors, these software systems combine the content area knowledge and decision-making ability of an expert with the user's understanding and knowledge of particular circumstances. The reading diagnosis system, the RD2P System…
What's New in Software? Mastery of the Computer through Desktop Publishing.
ERIC Educational Resources Information Center
Hedley, Carolyn N.; Ellsworth, Nancy J.
1993-01-01
Offers thoughts on the phenomenon of the underuse of classroom computers. Argues that desktop publishing is one way of overcoming the computer malaise occurring in schools, using the incentive of classroom reading and writing for mastery of many aspects of computer production, including writing, illustrating, reading, and publishing. (RS)
Addressing the English Language Arts Technology Standard in a Secondary Reading Methodology Course.
ERIC Educational Resources Information Center
Merkley, Donna J.; Schmidt, Denise A.; Allen, Gayle
2001-01-01
Describes efforts to integrate technology into a reading methodology course for secondary English majors. Discusses the use of e-mail, multimedia, distance education for videoconferences, online discussion technology, subject-specific software, desktop publishing, a database management system, a concept mapping program, and the use of the World…
The Effects of ABRACADABRA on Reading Outcomes: A Meta-Analysis of Applied Field Research
ERIC Educational Resources Information Center
Abrami, Philip; Borohkovski, Eugene; Lysenko, Larysa
2015-01-01
This meta-analysis summarizes research on the effects of a comprehensive, interactive web-based software (ABRACADABRA) on the development of reading competencies among kindergarteners and elementary students. Findings from seven randomized control trials and quasi-experimental studies undertaken in a variety of contexts across Canada, Australia and Kenya…
R-WISE: A Computerized Environment for Tutoring Critical Literacy.
ERIC Educational Resources Information Center
Carlson, P.; Crevoisier, M.
This paper describes a computerized environment for teaching the conceptual patterns of critical literacy. While the full implementation of the software covers both reading and writing, this paper covers only the writing aspects of R-WISE (Reading and Writing in a Supportive Environment). R-WISE consists of a suite of computerized…
Munger, Steven C.; Raghupathy, Narayanan; Choi, Kwangbom; Simons, Allen K.; Gatti, Daniel M.; Hinerfeld, Douglas A.; Svenson, Karen L.; Keller, Mark P.; Attie, Alan D.; Hibbs, Matthew A.; Graber, Joel H.; Chesler, Elissa J.; Churchill, Gary A.
2014-01-01
Massively parallel RNA sequencing (RNA-seq) has yielded a wealth of new insights into transcriptional regulation. A first step in the analysis of RNA-seq data is the alignment of short sequence reads to a common reference genome or transcriptome. Genetic variants that distinguish individual genomes from the reference sequence can cause reads to be misaligned, resulting in biased estimates of transcript abundance. Fine-tuning of read alignment algorithms does not correct this problem. We have developed Seqnature software to construct individualized diploid genomes and transcriptomes for multiparent populations and have implemented a complete analysis pipeline that incorporates other existing software tools. We demonstrate in simulated and real data sets that alignment to individualized transcriptomes increases read mapping accuracy, improves estimation of transcript abundance, and enables the direct estimation of allele-specific expression. Moreover, when applied to expression QTL mapping we find that our individualized alignment strategy corrects false-positive linkage signals and unmasks hidden associations. We recommend the use of individualized diploid genomes over reference sequence alignment for all applications of high-throughput sequencing technology in genetically diverse populations. PMID:25236449
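The idea behind an individualized genome is easy to show in miniature: substitute a sample's known variants into the reference so that reads from that individual no longer carry systematic mismatches at variant sites. The sketch below is a conceptual illustration of that substitution step only (Seqnature itself handles diploid genomes, indels, and coordinate remapping); positions and alleles are hypothetical.

```python
# Conceptual sketch of building an individualized reference (not Seqnature's
# code): apply a sample's SNP substitutions to the reference sequence.

def individualize(reference, snps):
    """Apply SNP substitutions {0-based position: alt base} to a reference
    and return the individualized sequence."""
    seq = list(reference)
    for pos, alt in snps.items():
        seq[pos] = alt
    return "".join(seq)

reference = "ACGTACGTAC"
sample_snps = {3: "C", 7: "A"}        # hypothetical variant calls
individual = individualize(reference, sample_snps)
# individual -> "ACGCACGAAC"
```

Reads simulated from `individual` would align to it perfectly, whereas against `reference` they would carry mismatches at positions 3 and 7, which is exactly the bias the paper describes.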
Software Compensates Electronic-Nose Readings for Humidity
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
A computer program corrects for the effects of humidity on the readouts of an array of chemical sensors (an "electronic nose"). To enable the use of this program, the array must incorporate an independent humidity sensor in addition to sensors designed to detect analytes other than water vapor. The basic principle of the program was described in "Compensating for Effects of Humidity on Electronic Noses" (NPO-30615), NASA Tech Briefs, Vol. 28, No. 6 (June 2004), page 63. To recapitulate: The output of the humidity sensor is used to generate values that are subtracted from the outputs of the other sensors to correct for contributions of humidity to those readings. Hence, in principle, what remains after correction is the contribution of the analytes only. The outputs of the non-humidity sensors are then deconvolved to obtain the concentrations of the analytes. In addition, the humidity reading is retained as an analyte reading in its own right. This subtraction of the humidity background increases the ability of the software to identify such events as spills in which contaminants may be present in small concentrations and accompanied by large changes in humidity.
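The subtraction step described above can be sketched in a few lines: each analyte sensor's raw output is reduced by its (pre-calibrated) sensitivity to humidity multiplied by the humidity sensor's reading. All readings and coefficients below are invented for illustration; the flight software's actual calibration model is not reproduced here.

```python
# Humidity-background subtraction for a sensor array (illustrative sketch).
# Assumes a linear humidity contribution per sensor, with coefficients
# obtained from a prior calibration (values below are made up).

def correct_for_humidity(raw_readings, humidity_reading, humidity_coeffs):
    """Return analyte readings with the humidity contribution removed."""
    return [raw - coeff * humidity_reading
            for raw, coeff in zip(raw_readings, humidity_coeffs)]

raw = [0.82, 0.40, 0.15]    # raw outputs of three analyte sensors
humidity = 0.5              # output of the independent humidity sensor
coeffs = [0.6, 0.2, 0.1]    # each sensor's sensitivity to humidity
corrected = correct_for_humidity(raw, humidity, coeffs)
# corrected -> approximately [0.52, 0.30, 0.10]
```

Deconvolving the corrected outputs into analyte concentrations is a separate step, omitted here.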
Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki
2017-12-01
The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system showed reliably accurate bone age estimations and appeared to enhance efficiency by reducing reading times without compromising the diagnostic accuracy.
A Decision Model for Selection of Microcomputers and Operating Systems.
1984-06-01
is resulting in application software (for microcomputers) being developed almost exclusively for the IBM PC and compatible systems. NAVDAC felt that...location can be independently accessed. RAM memory is also often called read/write memory, because new information can be written into and read from...when power is lost; this is also read/write memory. Bubble memory, however, has significantly slower access times than RAM or ROM and also is not preva
Roosaare, Märt; Vaher, Mihkel; Kaplinski, Lauris; Möls, Märt; Andreson, Reidar; Lepamets, Maarja; Kõressaar, Triinu; Naaber, Paul; Kõljalg, Siiri; Remm, Maido
2017-01-01
Fast, accurate and high-throughput identification of bacterial isolates is in great demand. The present work was conducted to investigate the possibility of identifying isolates from unassembled next-generation sequencing reads using custom-made guide trees. A tool named StrainSeeker was developed that constructs a list of specific k-mers for each node of any given Newick-format tree and enables the identification of bacterial isolates in 1-2 min. It uses a novel algorithm, which analyses the observed and expected fractions of node-specific k-mers to test the presence of each node in the sample. This allows StrainSeeker to determine where the isolate branches off the guide tree and assign it to a clade whereas other tools assign each read to a reference genome. Using a dataset of 100 Escherichia coli isolates, we demonstrate that StrainSeeker can predict the clades of E. coli with 92% accuracy and correct tree branch assignment with 98% accuracy. Twenty-five thousand Illumina HiSeq reads are sufficient for identification of the strain. StrainSeeker is a software program that identifies bacterial isolates by assigning them to nodes or leaves of a custom-made guide tree. StrainSeeker's web interface and pre-computed guide trees are available at http://bioinfo.ut.ee/strainseeker. Source code is stored at GitHub: https://github.com/bioinfo-ut/StrainSeeker.
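The node test at the heart of this approach can be shown in miniature: given a node's set of node-specific k-mers, count what fraction of them appear in the sample's reads and compare that to a threshold. The toy below is an illustration of the k-mer-fraction idea only, not StrainSeeker's algorithm; the sequences, k, and threshold are invented.

```python
# Toy node-presence test via node-specific k-mers (illustrative sketch;
# StrainSeeker's actual statistical test is more involved).

def kmers(seq, k):
    """All k-length substrings of seq, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def node_present(node_kmers, reads, k, threshold=0.8):
    """True if the observed fraction of node-specific k-mers found in the
    reads meets the threshold for calling the node present."""
    observed = set()
    for read in reads:
        observed |= kmers(read, k) & node_kmers
    return len(observed) / len(node_kmers) >= threshold

node = kmers("ACGTACGTTT", 4)             # hypothetical node-specific k-mers
sample_reads = ["ACGTACGT", "CGTACGTTT"]  # hypothetical sequencing reads
present = node_present(node, sample_reads, 4)
```

Walking down the guide tree and applying such a test at each node is what lets the isolate be assigned to a clade without assembling the reads.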
BarraCUDA - a fast short read sequence aligner using graphics processing units
2012-01-01
Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU), extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computational-intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497
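BWA's alignment component, which BarraCUDA ports to the GPU, is built on backward search over a Burrows-Wheeler transform of the reference. A CPU-side toy of that core idea, exact matching only, with no suffix-array sampling, mismatch handling, or GPU code, is sketched below; the sequences are invented and the naive `occ` scan would be replaced by precomputed occurrence tables in any real index.

```python
# Toy BWT exact-match backward search (the idea underlying BWA's aligner;
# this is an illustrative sketch, not BarraCUDA or BWA code).

def bwt_index(text):
    """Build the BWT of text+'$' and the C table (count of characters
    lexicographically smaller than each character)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    bwt = "".join(r[-1] for r in rotations)
    first = "".join(sorted(text))
    C = {c: first.index(c) for c in set(text)}
    return bwt, C

def occ(bwt, c, i):
    """Occurrences of character c in bwt[:i] (naive; real indexes
    precompute this)."""
    return bwt[:i].count(c)

def count_matches(bwt, C, pattern):
    """Number of occurrences of pattern in the indexed text."""
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(bwt, c, lo)
        hi = C[c] + occ(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = bwt_index("ACGTACGT")
hits = count_matches(bwt, C, "ACG")   # "ACG" occurs twice in "ACGTACGT"
```

Each read is searched character by character from its end, narrowing a suffix-array interval; that per-read independence is what makes the workload map well onto thousands of GPU threads.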
Simulating Next-Generation Sequencing Datasets from Empirical Mutation and Sequencing Models
Stephens, Zachary D.; Hudson, Matthew E.; Mainzer, Liudmila S.; Taschuk, Morgan; Weber, Matthew R.; Iyer, Ravishankar K.
2016-01-01
An obstacle to validating and benchmarking methods for genome analysis is that there are few reference datasets available for which the “ground truth” about the mutational landscape of the sample genome is known and fully validated. Additionally, the free and public availability of real human genome datasets is incompatible with the preservation of donor privacy. In order to better analyze and understand genomic data, we need test datasets that model all variants, reflecting known biology as well as sequencing artifacts. Read simulators can fulfill this requirement, but are often criticized for limited resemblance to true data and overall inflexibility. We present NEAT (NExt-generation sequencing Analysis Toolkit), a set of tools that not only includes an easy-to-use read simulator, but also scripts to facilitate variant comparison and tool evaluation. NEAT has a wide variety of tunable parameters which can be set manually on the default model or parameterized using real datasets. The software is freely available at github.com/zstephens/neat-genreads. PMID:27893777
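The essential mechanics of a read simulator with known ground truth, as described above, can be sketched in a few lines. This is a simplified stand-in, not NEAT's implementation: it draws uniform read positions and applies a flat substitution-error rate, whereas NEAT's models are parameterized from real data:

```python
import random

def simulate_reads(genome, n_reads, read_len, error_rate, seed=0):
    """Draw uniform random reads and add substitution errors; return
    (read_sequence, true_start) pairs so the ground truth is known."""
    rng = random.Random(seed)
    bases = "ACGT"
    reads = []
    for _ in range(n_reads):
        start = rng.randrange(len(genome) - read_len + 1)
        read = list(genome[start:start + read_len])
        for i, b in enumerate(read):
            if rng.random() < error_rate:       # substitute with another base
                read[i] = rng.choice([x for x in bases if x != b])
        reads.append(("".join(read), start))
    return reads

genome = "ACGT" * 250    # a toy 1 kb reference
reads = simulate_reads(genome, n_reads=50, read_len=100, error_rate=0.01)
```

Because the true start position and the injected errors are recorded, an aligner or variant caller run on the simulated reads can be scored exactly, which is the benchmarking use case the abstract describes.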
The readability of pediatric patient education materials on the World Wide Web.
D'Alessandro, D M; Kingsley, P; Johnson-West, J
2001-07-01
Literacy is a national and international problem. Studies have shown the readability of adult and pediatric patient education materials to be too high for average adults. Materials should be written at the 8th-grade level or lower. To determine the general readability of pediatric patient education materials designed for adults on the World Wide Web (WWW). GeneralPediatrics.com (http://www.generalpediatrics.com) is a digital library serving the medical information needs of pediatric health care providers, patients, and families. Documents from 100 different authoritative Web sites designed for laypersons were evaluated using a built-in computer software readability formula (Flesch Reading Ease and Flesch-Kincaid reading levels) and hand calculation methods (Fry Formula and SMOG methods). Analysis of variance and paired t tests determined significance. Eighty-nine documents constituted the final sample; they covered a wide spectrum of pediatric topics. The overall Flesch Reading Ease score was 57.0. The overall mean Fry Formula was 12.0 (12th grade, 0 months of schooling) and SMOG was 12.2. The overall Flesch-Kincaid grade level was significantly lower (P<.0001), at a mean of 7.1, when compared with the other 2 methods. All author and institution groups had an average reading level above 10.6 by the Fry Formula and SMOG methods. Pediatric patient education materials on the WWW are not written at an appropriate reading level for the average adult. We propose that a practical reading level and how it was determined be included on all patient education materials on the WWW for general guidance in material selection. We discuss suggestions for improved readability of patient education materials.
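The readability formulas used in the study are standard closed-form functions of word, sentence, and syllable counts. A sketch of three of them follows; the example counts are invented for illustration, and real use requires a syllable counter, which is the hard part these tools automate:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease from raw counts (higher = easier to read)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid US school grade level from the same counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables, sentences):
    """SMOG grade from the count of words with 3+ syllables in the sample."""
    return 1.0430 * (30 * polysyllables / sentences) ** 0.5 + 3.1291

# hypothetical 300-word sample: 15 sentences, 450 syllables, 45 polysyllabic words
fre = flesch_reading_ease(300, 15, 450)
fkg = flesch_kincaid_grade(300, 15, 450)
```

The study's finding that Flesch-Kincaid grades ran significantly lower than Fry and SMOG grades is consistent with the formulas themselves: they weight sentence length and syllable density differently, so the same text yields different grade estimates.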
The readout system for the ArTeMis camera
NASA Astrophysics Data System (ADS)
Doumayrou, E.; Lortholary, M.; Dumaye, L.; Hamon, G.
2014-07-01
During ArTeMiS observations at the APEX telescope (Chajnantor, Chile), 5760 bolometric pixels from 20 arrays at 300mK, corresponding to 3 submillimeter focal planes at 450μm, 350μm and 200μm, have to be read out simultaneously at 40Hz. The readout system, made of electronics and software, is the full chain from the cryostat to the telescope. The readout electronics consists of cryogenic buffers at 4K (NABU), based on CMOS technology, and of warm electronic acquisition systems called BOLERO. The bolometric signal given by each pixel has to be amplified, sampled, converted, time-stamped and formatted into data packets by the BOLERO electronics. The time stamping is obtained by decoding an IRIG-B signal provided by APEX and is key to ensuring the synchronization of the data with the telescope. Specifically developed for ArTeMiS, BOLERO is an assembly of analogue and digital FPGA boards connected directly on top of the cryostat. Two detector arrays (18*16 pixels), one NABU and one BOLERO interconnected by ribbon cables constitute the unit of the electronic architecture of ArTeMiS. In total, the 20 detectors for the three focal planes are read by 10 BOLEROs. The software runs on a Linux operating system on 2 back-end computers (called BEAR), which are small and robust PCs with solid-state disks. They gather the 10 BOLERO data fluxes and reconstruct the focal-plane images. When the telescope scans the sky, the acquisitions are triggered by a specific network protocol. This interface with APEX makes it possible to synchronize the acquisition with the on-sky observations: the time-stamped data packets are sent during the scans to the APEX software that builds the observation FITS files. A graphical user interface enables the setting of the camera and the real-time display of the focal-plane images, which is essential in laboratory and commissioning phases.
The software is a combination of C++, LabVIEW and Python, used respectively for speed, powerful graphical interfacing and scripting. Commands to the camera can be sequenced in Python scripts. The paper describes the whole electronic and software readout chain designed to meet the specific requirements of ArTeMiS, together with its performance. The specific design options are explained; for example, the limited room in the Cassegrain cabin of APEX led to a quite compact design. This system was successfully used in summer 2013 for commissioning and the first scientific observations with a preliminary set of 4 detectors at 350μm.
List processing software for the LeCroy 1821 Segment Manager Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorries, T.; Moore, C.; Pordes, R.
1987-05-01
Many experiments at Fermilab now include some FASTBUS electronics in their data readout. The software reported in this paper provides general support for the LeCroy 1821 interface. The list processing device drivers allow FASTBUS data to be read out efficiently into the Fermilab Computing Department supported data acquisition systems.
Technology-Based Literature Plans for Elementary Students (Technology Links to Literacy).
ERIC Educational Resources Information Center
Wepner, Shelley B.
1991-01-01
Presents ideas for incorporating software into each guided reading phase for two realistic fiction books: Lois Lowry's "Anastasia on Her Own" and Barthe DeClements's "The Fourth Grade Wizards." Discusses how each skeletal plan uses three pieces of software to enliven students' oral and written thoughts about the books'…
Polisher (conflicting versions 2.0.8 on IM Form, 1.0 on abstract)
DOE Office of Scientific and Technical Information (OSTI.GOV)
2008-09-18
Polisher is a software package designed to facilitate the error correction of an assembled genome using Illumina read data. The software addresses substandard regions by automatically correcting consensus errors and/or suggesting primer walking reactions to improve the quality of the bases. This is done by performing the following: ...
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to user application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
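The first algorithm's AND-mask interception can be illustrated with a small simulation. This sketch is an assumption-laden model of the technique, not the flight code: the wrapper, mask values, and check are all hypothetical, and a real implementation would instrument the device-read path in the SUT itself:

```python
def read_device(raw_value, error_mask=0xFFFF):
    """Device-read wrapper instrumented for BIT testing: a normal read
    uses the all-ones mask, so data passes through unchanged; a test
    clears mask bits to simulate stuck-low data lines."""
    return raw_value & error_mask

def bit_check(value, expected):
    """A BIT routine's pass/fail criterion on the value read back."""
    return value == expected

# normal operation: the mask is all ones, the BIT check passes
ok = bit_check(read_device(0x5A5A), 0x5A5A)
# injected fault: clearing the upper byte yields a value the BIT
# routine does not expect, so the check fails as intended
fault = bit_check(read_device(0x5A5A, error_mask=0x00FF), 0x5A5A)
```

The point of the technique is that the instrumentation (one AND per read) is small and permanent, so the same binary can be exercised with and without injected faults.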
ERIC Educational Resources Information Center
Balajthy, Ernest
Intended for reading and language arts teachers at all educational levels, this guide presents information to be used by teachers in constructing their own computer assisted educational software using the BASIC programming language and Apple computers. Part 1 provides an overview of the components of traditional tutorial and drill-and-practice…
Using ClassDojo to Help with Classroom Management during Guided Reading
ERIC Educational Resources Information Center
Chiarelli, MaryAnne; Szabo, Susan; Williams, Susan
2015-01-01
This study examined the use of a free behavioral management software program to see if it was successful to help first grade students recognize and self-monitor their behaviors while working in centers during teacher directed guided reading time. The study found that ClassDojo had a positive impact on these first grade students' behaviors and…
An evaluation of the Intel 2920 digital signal processing integrated circuit
NASA Technical Reports Server (NTRS)
Heller, J.
1981-01-01
The circuit consists of a digital-to-analog converter, accumulator, read/write memory and UV-erasable read-only memory. The circuit can convert an analog signal to a digital representation, perform mathematical operations on the digital signal and subsequently convert the digital signal to an analog output. Development software tailored for programming the 2920 is presented.
EBooks and Accommodations: Is This the Future of Print Accommodation?
ERIC Educational Resources Information Center
Cavanaugh, Terence
2002-01-01
This article explains the three components of eBooks: an eBook file, software to read the eBook, and a hardware device to read it on. The use of eBooks for students with special needs, the advantages of eBooks, built in accommodations, and creating accommodations are discussed. EBook resources are included. (Contains references.) (CR)
How Will the Ed-Tech Industry Shape Student Reading?
ERIC Educational Resources Information Center
Watters, Audrey
2014-01-01
The promise is that education technologies will reshape the ways in which we teach and learn, the ways in which we read and write and communicate. Indeed, new hardware and new software are often marketed to schools and libraries with language that stresses their transformative and innovative potential, even when, upon closer inspection, it may…
ERIC Educational Resources Information Center
Diamantes, Thomas
2007-01-01
This paper discusses the critical relation between professor, student and technology in the process of encouraging graduate students to read required textbook sections. It discusses how using online management software, graduate students are required to submit weekly chapter summaries to the professor by 5:00 pm on Fridays. In addition to the…
Safikhani, Zhaleh; Sadeghi, Mehdi; Pezeshk, Hamid; Eslahchi, Changiz
2013-01-01
Recent advances in the sequencing technologies have provided a handful of RNA-seq datasets for transcriptome analysis. However, reconstruction of full-length isoforms and estimation of the expression level of transcripts with a low cost are challenging tasks. We propose a novel de novo method named SSP that incorporates interval integer linear programming to resolve alternatively spliced isoforms and reconstruct the whole transcriptome from short reads. Experimental results show that SSP is fast and precise in determining different alternatively spliced isoforms along with the estimation of reconstructed transcript abundances. The SSP software package is available at http://www.bioinf.cs.ipm.ir/software/ssp. © 2013.
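The flavor of the integer-programming step, choosing a set of isoforms that explains the observed splice junctions, can be conveyed with a brute-force stand-in. This is not SSP's interval ILP (which also models abundances and read coverage); it is a deliberately simplified set-cover example with invented junction and isoform names:

```python
from itertools import combinations

def min_isoform_cover(observed_junctions, candidates):
    """Brute-force stand-in for the integer program: pick the smallest
    set of candidate isoforms whose junctions cover every observed one."""
    ids = list(candidates)
    for k in range(1, len(ids) + 1):
        for subset in combinations(ids, k):
            covered = set().union(*(candidates[i] for i in subset))
            if observed_junctions <= covered:
                return set(subset)
    return None

# toy splice junctions observed in the reads, and candidate isoforms
observed = {"e1-e2", "e2-e3", "e1-e3"}
candidates = {
    "iso_A": {"e1-e2", "e2-e3"},   # exon-inclusion isoform
    "iso_B": {"e1-e3"},            # exon-skipping isoform
    "iso_C": {"e1-e2"},            # redundant partial isoform
}
chosen = min_isoform_cover(observed, candidates)
```

A real ILP formulation would replace the exhaustive search with binary indicator variables and a solver, and would weight isoforms by how well their predicted coverage matches the read data.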
National Survey of Patients’ Bill of Rights Statutes
Jacob, Dan M.; Hochhauser, Mark; Parker, Ruth M.
2009-01-01
BACKGROUND Despite vigorous national debate between 1999–2001 the federal patients’ bill of rights (PBOR) was not enacted. However, states have enacted legislation and the Joint Commission defined an accreditation standard to present patients with their rights. Because such initiatives can be undermined by overly complex language, we surveyed the readability of hospital PBOR documents as well as texts mandated by state law. METHODS State Web sites and codes were searched to identify PBOR statutes for general patient populations. The rights addressed were compared with the 12 themes presented in the American Hospital Association’s (AHA) PBOR text of 2002. In addition, we obtained PBOR texts from a sample of hospitals in each state. Readability was evaluated using Prose, a software program which reports an average of eight readability formulas. RESULTS Of 23 states with a PBOR statute for the general public, all establish a grievance policy, four protect a private right of action, and one stipulates fines for violations. These laws address an average of 7.4 of the 12 AHA themes. Nine states’ statutes specify PBOR text for distribution to patients. These documents have an average readability of 15th grade (range, 11.6, New York, to 17.0, Minnesota). PBOR documents from 240 US hospitals have an average readability of 14th grade (range, 8.2 to 17.0). CONCLUSIONS While the average U.S. adult reads at an 8th grade reading level, an advanced college reading level is routinely required to read PBOR documents. Patients are not likely to learn about their rights from documents they cannot read. PMID:19189192
ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised within several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of CMT commands used for build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on CMT commands optimisation in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS.
The use of parallelism, caching and code optimisation reduced software build time and environment setup time several-fold, increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
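Package-level build parallelism of the kind described, building independent packages at the same time, reduces to scheduling over the dependency graph. The sketch below (not CMT's code; the package names are invented) groups packages into batches whose members have no unmet dependencies and can therefore be built concurrently:

```python
def build_batches(deps):
    """Group packages into batches that can be built in parallel:
    a package joins a batch once all of its dependencies are built."""
    remaining = {pkg: set(d) for pkg, d in deps.items()}
    built, batches = set(), []
    while remaining:
        ready = sorted(p for p, d in remaining.items() if d <= built)
        if not ready:
            raise ValueError("dependency cycle detected")
        batches.append(ready)
        built.update(ready)
        for p in ready:
            del remaining[p]
    return batches

# hypothetical package graph: everything ultimately depends on Core
deps = {
    "Core": set(),
    "Event": {"Core"},
    "Tracking": {"Core"},
    "Analysis": {"Event", "Tracking"},
}
batches = build_batches(deps)
```

Launching up to NUMBER-OF-PROCESSORS build commands within each batch, as CMT does by default, keeps all cores busy without ever building a package before its dependencies.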
DiffSplice: the genome-wide detection of differential splicing events with RNA-seq
Hu, Yin; Huang, Yan; Du, Ying; Orellana, Christian F.; Singh, Darshan; Johnson, Amy R.; Monroy, Anaïs; Kuan, Pei-Fen; Hammond, Scott M.; Makowski, Liza; Randell, Scott H.; Chiang, Derek Y.; Hayes, D. Neil; Jones, Corbin; Liu, Yufeng; Prins, Jan F.; Liu, Jinze
2013-01-01
The RNA transcriptome varies in response to cellular differentiation as well as environmental factors, and can be characterized by the diversity and abundance of transcript isoforms. Differential transcription analysis, the detection of differences between the transcriptomes of different cells, may improve understanding of cell differentiation and development and enable the identification of biomarkers that classify disease types. The availability of high-throughput short-read RNA sequencing technologies provides in-depth sampling of the transcriptome, making it possible to accurately detect the differences between transcriptomes. In this article, we present a new method for the detection and visualization of differential transcription. Our approach does not depend on transcript or gene annotations. It also circumvents the need for full transcript inference and quantification, which is a challenging problem because of short read lengths, as well as various sampling biases. Instead, our method takes a divide-and-conquer approach to localize the difference between transcriptomes in the form of alternative splicing modules (ASMs), where transcript isoforms diverge. Our approach starts with the identification of ASMs from the splice graph, constructed directly from the exons and introns predicted from RNA-seq read alignments. The abundance of alternative splicing isoforms residing in each ASM is estimated for each sample and is compared across sample groups. A non-parametric statistical test is applied to each ASM to detect significant differential transcription with a controlled false discovery rate. The sensitivity and specificity of the method have been assessed using simulated data sets and compared with other state-of-the-art approaches. 
Experimental validation using qRT-PCR confirmed a selected set of genes that are differentially expressed in a lung differentiation study and a breast cancer data set, demonstrating the utility of the approach applied on experimental biological data sets. The software of DiffSplice is available at http://www.netlab.uky.edu/p/bioinfo/DiffSplice. PMID:23155066
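The notion of an alternative splicing module, a region of the splice graph where isoform paths diverge and later re-converge, can be sketched on a toy cassette-exon graph. This is an illustrative simplification, not DiffSplice's implementation; the exon names and the naive path enumeration are assumptions for the example:

```python
def divergence_nodes(graph):
    """Exons where transcript isoforms diverge (out-degree > 1)."""
    return sorted(n for n, succ in graph.items() if len(succ) > 1)

def asm_paths(graph, start, end):
    """Enumerate the alternative paths between a divergence exon and
    the exon where the isoforms re-converge (one path per isoform)."""
    if start == end:
        return [[end]]
    paths = []
    for nxt in graph.get(start, []):
        for tail in asm_paths(graph, nxt, end):
            paths.append([start] + tail)
    return paths

# splice graph of a cassette exon: e1 -> e2 -> e3 (inclusion)
# and e1 -> e3 (skipping)
graph = {"e1": ["e2", "e3"], "e2": ["e3"], "e3": []}
paths = asm_paths(graph, "e1", "e3")
```

In the method described above, the relative read support of each such path is estimated per sample, and a non-parametric test compares those proportions across sample groups to call differential transcription within the module.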
An ontology based trust verification of software license agreement
NASA Astrophysics Data System (ADS)
Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo
2017-08-01
When users install or download software, they are presented with a large document stating their rights and obligations, which few have the patience to read or understand. This can make users distrust the software. In this paper, we propose an ontology-based verification of Software License Agreements. First, this work proposes an ontology model for the domain of Software License Agreements. The domain ontology is constructed by the proposed methodology according to copyright laws and 30 software license agreements. The License Ontology can act as part of a generalized copyright-law knowledge model, and can also serve as a visualization of software licenses. Based on this ontology, a software-license-oriented text summarization approach is proposed, whose performance shows that it can improve the accuracy of summarizing software licenses. Based on the summarization, the underlying purpose of the software license can be explicitly explored for trust verification.
Cuffney, Thomas F.
2003-01-01
The Invertebrate Data Analysis System (IDAS) software provides an accurate, consistent, and efficient mechanism for analyzing invertebrate data collected as part of the National Water-Quality Assessment Program and stored in the Biological Transactional Database (Bio-TDB). The IDAS software is a stand-alone program for personal computers that run Microsoft (MS) Windows. It allows users to read data downloaded from Bio-TDB and stored either as MS Excel or MS Access files. The program consists of five modules. The Edit Data module allows the user to subset, combine, delete, and summarize community data. The Data Preparation module allows the user to select the type(s) of sample(s) to process, calculate densities, delete taxa based on laboratory processing notes, combine life stages or keep them separate, select a lowest taxonomic level for analysis, delete rare taxa, and resolve taxonomic ambiguities. The Calculate Community Metrics module allows the user to calculate over 130 community metrics, including metrics based on organism tolerances and functional feeding groups. The Calculate Diversities and Similarities module allows the user to calculate nine diversity and eight similarity indices. The Data Export module allows the user to export data to other software packages and produce tables of community data that can be imported into spreadsheet and word-processing programs. Though the IDAS program was developed to process invertebrate data downloaded from USGS databases, it will work with other data sets that are converted to the USGS (Bio-TDB) format. Consequently, the data manipulation, analysis, and export procedures provided by the IDAS program can be used by anyone involved in using benthic macroinvertebrates in applied or basic research.
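The abstract does not name the nine diversity and eight similarity indices, but two widely used examples of each class can be sketched to show the kind of computation the module performs. The index choices and taxon names below are assumptions for illustration, not a statement of what IDAS implements:

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def jaccard_similarity(taxa_a, taxa_b):
    """Jaccard similarity between two samples: shared taxa / total taxa."""
    a, b = set(taxa_a), set(taxa_b)
    return len(a & b) / len(a | b)

h = shannon_diversity([10, 10, 10, 10])     # perfectly even 4-taxon community
j = jaccard_similarity({"Baetis", "Chironomus"},
                       {"Baetis", "Hydropsyche"})
```

For a perfectly even community of S taxa, H' equals ln S, which is a convenient sanity check when validating this kind of community-metric code.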
Bolaños, Federico; LeDue, Jeff M; Murphy, Timothy H
2017-01-30
Automation of animal experimentation improves consistency and reduces the potential for error, while decreasing animal stress and increasing well-being. Radio frequency identification (RFID) tagging can identify individual mice in group-housing environments, enabling animal-specific tracking of physiological parameters. We describe a simple protocol to RFID-tag and detect mice. RFID tags were injected sub-cutaneously after brief isoflurane anesthesia and do not require surgical steps such as suturing or incisions. We employ glass-encapsulated 125kHz tags that can be read within 30.2±2.4mm of the antenna. A Raspberry Pi single-board computer and tag reader enable automated logging, and cross-platform support is possible through Python. We provide sample software written in Python to provide a flexible and cost-effective system for logging the weights of multiple mice in relation to pre-defined targets. The sample software can serve as the basis of any behavioral or physiological task where users will need to identify and track specific animals. Recently, we have applied this system of tagging to automated mouse brain imaging within home-cages. We provide a cost-effective solution employing open-source software to facilitate adoption in applications such as automated imaging or tracking individual animal weights during tasks where food or water restriction is employed as motivation for a specific behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
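The weight-logging idea described above can be sketched as follows. This is not the authors' sample software: the class, tag IDs, and target scheme are hypothetical, and a real system would receive the tag ID and weight from the serial-attached reader and scale rather than from function arguments:

```python
import time

class WeightLogger:
    """Log per-animal weights keyed by RFID tag and flag animals that
    fall below a pre-defined target (e.g. a minimum weight enforced
    under food or water restriction)."""
    def __init__(self, targets):
        self.targets = dict(targets)   # tag id -> minimum allowed weight (g)
        self.log = []                  # (timestamp, tag, weight) records

    def record(self, tag, weight, timestamp=None):
        """Append a reading; return True if the animal meets its target."""
        ts = time.time() if timestamp is None else timestamp
        self.log.append((ts, tag, weight))
        return weight >= self.targets.get(tag, 0.0)

# hypothetical tag IDs and per-animal minimum weights
logger = WeightLogger({"00412A7F": 22.0, "00412B03": 24.5})
ok = logger.record("00412A7F", 23.1, timestamp=0)    # above target
low = logger.record("00412B03", 23.8, timestamp=1)   # below target
```

Keying every record by tag ID is what makes group housing workable: any animal can step onto the scale in any order, and the software still attributes each reading to the right individual.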
Usability Evaluation of Multimedia Courseware (MEL-SindD)
NASA Astrophysics Data System (ADS)
Yussof, Rahmah Lob; Badioze Zaman, Halimah
Constructive evaluations of any software are needed to ensure its effectiveness and usability. This assessment of the multimedia courseware is part of the researcher's study of the development and usability of the early reading software for students with Down Syndrome (MEL-SindD). This paper discusses the usability assessment of this courseware, the methods used for the evaluation, as well as suitable approaches that can be deployed to evaluate the courseware's effectiveness for disabled children.
NASA Technical Reports Server (NTRS)
Lyon, R. J. P. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Ground measured spectral signatures of wavelength bands matching ERTS MSS were collected using a radiometer at several Californian and Nevadan sites, and directly compared with similar data from ERTS CCTs. The comparison was tested at the highest possible spatial resolution for ERTS, using deconvoluted MSS data, and contrasted with that of ground measured spectra, originally from 1 meter squares. In the mobile traverses of the grassland sites, these one meter fields of view were integrated into eighty meter transects along the five km track across four major rock/soil types. Suitable software was developed to read the MSS CCT tapes, to shadeprint individual bands with user-determined greyscale stretching. Four new algorithms for unsupervised and supervised, normalized and unnormalized clustering were developed, into a program termed STANSORT. Parallel software allowed the field data to be calibrated, and by using concurrently continuously collected, upward- and downward-viewing, 4 band radiometers, bidirectional reflectances could be calculated.
VCFR: A package to manipulate and visualize variant call format data in R
USDA-ARS?s Scientific Manuscript database
Software to call single nucleotide polymorphisms or related genetic variants has converged on the variant call format (vcf) as the output format of choice. This has created a need for tools to work with vcf files. While an increasing number of software tools exist to read vcf data, many of them only ex...
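For readers unfamiliar with the format these tools consume, one data line of a VCF file carries 8 fixed tab-separated fields. The sketch below parses such a line; it is a minimal illustration of the format (ignoring the header and any per-sample genotype columns), not vcfR's API, and the example record is invented:

```python
def parse_vcf_record(line):
    """Split one data line of a VCF file into a dict of its 8 fixed
    fields; the semicolon-separated INFO field is further split into
    key=value pairs (flag keys map to True)."""
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt, qual, flt, info = fields[:8]
    info_dict = {}
    for item in info.split(";"):
        key, _, value = item.partition("=")
        info_dict[key] = value if value else True
    return {"CHROM": chrom, "POS": int(pos), "ID": vid, "REF": ref,
            "ALT": alt.split(","), "QUAL": qual, "FILTER": flt,
            "INFO": info_dict}

rec = parse_vcf_record("1\t1014143\trs786201005\tC\tT\t.\tPASS\tDP=23;DB")
```

Real VCF tooling also has to handle the meta-information header lines (starting with `##`), multi-allelic ALT entries, and the genotype columns, which is precisely the gap packages like the one described here aim to fill.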
ERIC Educational Resources Information Center
Kartal, Günizi; Terziyan, Treysi
2016-01-01
The major goal of this study was to develop a game-like software application for phonological awareness training and to evaluate its role in improving phonological awareness skills at the kindergarten level, with the intention to eventually help reading acquisition in Turkish. The participants of the study came from two kindergarten classrooms in…
How To Use the SilverPlatter Software To Search the ERIC CD ROM.
ERIC Educational Resources Information Center
Merrill, Paul F.
This manual provides detailed instructions for using SilverPlatter software to search the ERIC CD ROM (Compact Disk Read Only Memory), a large bibliographic database relating to education which contains reference information on numerous journal articles from over 750 journals cited in the "Current Index to Journals in Education" (CIJE),…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conlin, Jeremy
2017-03-15
This software is code related to reading/writing/manipulating nuclear data in the Generalized Nuclear Data (GND) format, a new format for sharing nuclear data among institutions. In addition to the software and its documentation, notes and documentation from the WPEC Subgroup 43 will be included. WPEC Subgroup 43 is an international committee charged with creating the API for the GND format.
Hammond Workforce 2000: A Three-Year Project. October 1989 to September 1992.
ERIC Educational Resources Information Center
Meyers, Arthur S.; Somerville, Deborah J.
A 3-year Library Services and Construction Act grant project from 1989-1992 provided for adult learning centers, equipped with Apple IIGS computers and software at each location of the Hammond Public Library (Indiana). User-friendly, job-based software to strengthen reading, writing, mathematics, spelling, and grammar skills, as well as video and…
The Impact of Assistive Technology on Curriculum Accommodation for a Braille-Reading Student
ERIC Educational Resources Information Center
Farnsworth, Charles R.; Luckner, John L.
2008-01-01
Over 5 months, the authors evaluated the efficacy of electronic assistive technology (the BrailleNote mPower BT-32 notetaker and Tiger Cub Jr. embosser) and associated software components in creating curriculum materials for a middle school Braille-reading student. The authors collected data at the beginning and end of the study from parents,…
Promoting Reading: Using eBooks with Gifted and Advanced Readers
ERIC Educational Resources Information Center
Weber, Christine L.; Cavanaugh, Terence W.
2006-01-01
eBooks are textual documents that have been converted and "published" in an electronic format and are displayed on eBook readers, devices, or computers using eBook software programs. This new form of book is a relatively recent addition to book styles and offers students, teachers, and schools an additional tool for the teaching of reading and the…
ERIC Educational Resources Information Center
Thomeer, Marcus L.; Smith, Rachael A.; Lopata, Christopher; Volker, Martin A.; Lipinski, Alanna M.; Rodgers, Jonathan D.; McDonald, Christin A.; Lee, Gloria K.
2015-01-01
This randomized controlled trial evaluated the efficacy of a computer software (i.e., "Mind Reading") and in vivo rehearsal treatment on the emotion decoding and encoding skills, autism symptoms, and social skills of 43 children, ages 7-12 years with high-functioning autism spectrum disorder (HFASD). Children in treatment (n = 22)…
Some Improvements in Utilization of Flash Memory Devices
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James; Ott, William E.
2009-01-01
Two developments improve the utilization of flash memory devices in the face of the following limitations: (1) a flash write element (page) differs in size from a flash erase element (block), (2) a block must be erased before it is rewritten, (3) the lifetime of a flash memory is typically limited to about 1,000,000 erases, (4) as many as 2 percent of the blocks of a given device may fail before the expected end of its life, and (5) to ensure reliability of reading and writing, power must not be interrupted during minimum specified reading and writing times. The first development comprises interrelated software components that regulate reading, writing, and erasure operations to minimize migration of data and unevenness in wear; perform erasures during idle times; quickly make erased blocks available for writing; detect and report failed blocks; maintain the overall state of a flash memory to satisfy real-time performance requirements; and detect and initialize a new flash memory device. The second development is a combination of hardware and software that senses the failure of a main power supply and draws power from a capacitive storage circuit designed to hold enough energy to sustain operation until reading or writing is completed.
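The wear-evenness goal of the first development can be illustrated with a toy allocator that always hands out the least-erased usable block and retires blocks that reach their erase limit. This is a schematic model, not the described flight software; the class and limits are hypothetical:

```python
class FlashAllocator:
    """Pick the least-erased available block for each write so that
    wear spreads evenly, and retire blocks that hit the erase limit."""
    def __init__(self, n_blocks, max_erases=1_000_000):
        self.erase_counts = [0] * n_blocks
        self.failed = set()
        self.max_erases = max_erases

    def allocate(self):
        """Return the index of the least-worn usable block."""
        candidates = [(c, i) for i, c in enumerate(self.erase_counts)
                      if i not in self.failed]
        if not candidates:
            raise RuntimeError("no usable blocks left")
        _, block = min(candidates)      # lowest erase count wins ties by index
        return block

    def erase(self, block):
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= self.max_erases:
            self.failed.add(block)      # detect and report worn-out blocks

alloc = FlashAllocator(4)
first = alloc.allocate()     # all counts equal: lowest index is chosen
alloc.erase(first)
second = alloc.allocate()    # the just-erased block is now more worn
```

Performing the erases during idle time, as the abstract describes, keeps this allocation fast on the critical path: a pre-erased block is always ready when a write arrives.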
How Will We React to the Discovery of Extraterrestrial Life?
Kwon, Jung Yul; Bercovici, Hannah L; Cunningham, Katja; Varnum, Michael E W
2017-01-01
How will humanity react to the discovery of extraterrestrial life? Speculation on this topic abounds, but empirical research is practically non-existent. We report the results of three empirical studies assessing psychological reactions to the discovery of extraterrestrial life using the Linguistic Inquiry and Word Count (LIWC) text analysis software. We examined language use in media coverage of past discovery announcements of this nature, with a focus on extraterrestrial microbial life (Pilot Study). A large online sample ( N = 501) was asked to write about their own and humanity's reaction to a hypothetical announcement of such a discovery (Study 1), and an independent, large online sample ( N = 256) was asked to read and respond to a newspaper story about the claim that fossilized extraterrestrial microbial life had been found in a meteorite of Martian origin (Study 2). Across these studies, we found that reactions were significantly more positive than negative, and more reward vs. risk oriented. A mini-meta-analysis revealed large overall effect sizes (positive vs. negative affect language: g = 0.98; reward vs. risk language: g = 0.81). We also found that people's forecasts of their own reactions showed a greater positivity bias than their forecasts of humanity's reactions (Study 1), and that responses to reading an actual announcement of the discovery of extraterrestrial microbial life showed a greater positivity bias than responses to reading an actual announcement of the creation of man-made synthetic life (Study 2). Taken together, this work suggests that our reactions to a future confirmed discovery of microbial extraterrestrial life are likely to be fairly positive.
Radio Frequency Identification (RFID) Based Employee Attendance Management System
NASA Astrophysics Data System (ADS)
Maramis, G. D. P.; Rompas, P. T. D.
2018-02-01
Manually recording the attendance of all employees has produced problems such as poor data accuracy and reduced staff performance efficiency. The objective of this research is to design and develop RFID attendance software integrated with a database system. The RFID attendance system was developed using several main components: tags, used as a replacement for ID cards, and a reader device that reads the information related to employee attendance. The result of this project is RFID attendance software that is integrated with the database and stores the data of every employee. The system has a maximum reading range of 2 cm with a read success probability of 1, and requires a minimum interval of 2 seconds between readings in order to achieve optimal functionality. By using the system, the discipline of the employees and the performance of the staff can be improved.
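The 2-second minimum interval between readings is essentially a debounce rule: a second read of the same tag inside the window is treated as a duplicate of the same swipe. A sketch under that assumption (class and method names are hypothetical, not from the paper):

```python
# Sketch of the 2-second minimum-interval rule between accepted reads of the
# same RFID tag. The interval comes from the abstract; the class structure
# and names are illustrative assumptions.
class AttendanceLogger:
    MIN_INTERVAL = 2.0  # seconds between accepted reads of one tag

    def __init__(self):
        self.last_read = {}   # tag id -> timestamp of last accepted read
        self.log = []         # accepted (tag, timestamp) records

    def on_read(self, tag_id, timestamp):
        """Accept a read only if enough time has passed since the last one."""
        last = self.last_read.get(tag_id)
        if last is not None and timestamp - last < self.MIN_INTERVAL:
            return False  # too soon: duplicate of the same swipe
        self.last_read[tag_id] = timestamp
        self.log.append((tag_id, timestamp))
        return True
```

Accepted records would then be written to the employee attendance database.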
IRAC test report. Gallium doped silicon band 2: Read noise and dark current
NASA Technical Reports Server (NTRS)
Lamb, Gerald; Shu, Peter; Mather, John; Ewin, Audrey; Bowser, Jeffrey
1987-01-01
A direct readout infrared detector array, a candidate for the Space Infrared Telescope Facility (SIRTF) Infrared Array Camera (IRAC), has been tested. The array has a detector surface of gallium doped silicon, bump bonded to a 58x62 pixel MOSFET multiplexer on a separate chip. Although this chip and system do not meet all the SIRTF requirements, the critically important read noise is within a factor of 3 of the requirement. Significant accomplishments of this study include: (1) development of a low noise correlated double sampling readout system with a readout noise of 127 to 164 electrons (based on the detector integrator capacitance of 0.1 pF); (2) measurement of the readout noise of the detector itself, ranging from 123 to 214 electrons with bias only (best to worst pixel), and 256 to 424 electrons with full clocking in normal operation at 5.4 K where dark current is small. Thirty percent smaller read noises are obtained at a temperature of 15K; (3) measurement of the detector response versus integration time, showing significant nonlinear behavior for large signals, well below the saturation level; and (4) development of a custom computer interface and suitable software for collection, analysis and display of data.
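Correlated double sampling, as used in the readout system above, subtracts each pixel's reset-level sample from its post-integration sample so that common offsets cancel; the signal charge in electrons then follows from N = C·V/e with the 0.1 pF integration capacitance quoted in the abstract. The sketch below uses made-up sample voltages for illustration.

```python
# Sketch of correlated double sampling (CDS): each pixel's reset-level read
# is subtracted from its post-integration read, cancelling the common offset.
# Conversion to electrons uses N = C * V / e with the 0.1 pF capacitance
# quoted in the abstract; any voltages fed in here are illustrative.
E_CHARGE = 1.602e-19   # electron charge, coulombs
C_INT = 0.1e-12        # integration capacitance, farads (from the abstract)

def cds_electrons(reset_frame, signal_frame):
    """Per-pixel CDS difference, converted from volts to electrons."""
    return [(s - r) * C_INT / E_CHARGE
            for r, s in zip(reset_frame, signal_frame)]
```

At this capacitance a 0.32 mV CDS difference corresponds to roughly 200 electrons, the order of magnitude of the read noise figures quoted above.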
ERIC Educational Resources Information Center
Fives, Allyn
2016-01-01
This study explored the association between reading self-belief and reading achievement among a representative sample of nine year old children in the Republic of Ireland. Results from analysis of variance and simple effects analysis showed a positive linear association between reading achievement and "attitude to reading." The…
A-Book: A Feedback-Based Adaptive System to Enhance Meta-Cognitive Skills during Reading.
Guerra, Ernesto; Mellado, Guido
2017-01-01
In the digital era, tech devices (hardware and software) are increasingly within hand's reach. Yet, implementing information and communication technologies for educational contexts that have robust and long-lasting effects on student learning outcomes is still a challenge. We propose that any such system must (a) be theoretically motivated and designed to tackle specific cognitive skills (e.g., inference making) supporting a given cognitive task (e.g., reading comprehension) and (b) be able to identify and adapt to the user's profile. In the present study, we implemented a feedback-based adaptive system called A-book (assisted-reading book) and tested it in a sample of 4th, 5th, and 6th graders. To assess our hypotheses, we contrasted three experimental assisted-reading conditions: one that supported meta-cognitive skills and adapted to the user profile (adaptive condition), one that supported meta-cognitive skills but did not adapt to the user profile (training condition) and a control condition. The results provide initial support for our proposal; participants in the adaptive condition improved their accuracy scores on inference making questions over time, outperforming both the training and control groups. There was no evidence, however, of significant improvements on other tested meta-cognitive skills (i.e., text structure knowledge, comprehension monitoring). We discuss the practical implications of using the A-book for the enhancement of meta-cognitive skills in school contexts, as well as its current limitations and future developments that could improve the system.
Development of an automated MODS plate reader to detect early growth of Mycobacterium tuberculosis.
Comina, G; Mendoza, D; Velazco, A; Coronel, J; Sheen, P; Gilman, R H; Moore, D A J; Zimic, M
2011-06-01
In this work, an automated microscopic observation drug susceptibility (MODS) plate reader has been developed. The reader automatically handles MODS plates and, after autofocusing, acquires digital images of the characteristic microscopic cording structures of Mycobacterium tuberculosis, which are the identifying feature used in the MODS technique to detect tuberculosis and multidrug-resistant tuberculosis. In conventional MODS, trained technicians manually move the MODS plate on the stage of an inverted microscope while trying to locate and focus upon the characteristic microscopic cording colonies. In centres with high tuberculosis diagnostic demand, sufficient time may not be available to adequately examine all cultures. An automated reader would reduce labour time and the handling of M. tuberculosis cultures by laboratory personnel. Two hundred MODS culture images (100 from tuberculosis positive and 100 from tuberculosis negative sputum samples confirmed by a standard MODS reading using a commercial microscope) were acquired randomly using the automated MODS plate reader. A specialist analysed these digital images with the help of a personal computer and designated them as M. tuberculosis present or absent. The specialist considered four images insufficiently clear to permit a definitive reading. The readings from the 196 valid images showed 100% agreement with the conventional nonautomated standard reading. The automated MODS plate reader combined with open-source MODS pattern recognition software provides a novel platform for high throughput automated tuberculosis diagnosis. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by the radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through the application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit advanced encryption standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. 
There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and transmission of information to a central database server, take place over a secured Internet connection. The information stored in the central database server is shown on the web page, which users can view on the Internet. A dedicated and secured web and database server (https) is used to provide information security.
The Cervical Microbiome over 7 Years and a Comparison of Methodologies for Its Characterization
Smith, Benjamin C.; McAndrew, Thomas; Chen, Zigui; Harari, Ariana; Barris, David M.; Viswanathan, Shankar; Rodriguez, Ana Cecilia; Castle, Phillip; Herrero, Rolando; Schiffman, Mark; Burk, Robert D.
2012-01-01
Background The rapidly expanding field of microbiome studies offers investigators a large choice of methods for each step in the process of determining the microorganisms in a sample. The human cervicovaginal microbiome affects female reproductive health, susceptibility to and natural history of many sexually transmitted infections, including human papillomavirus (HPV). At present, long-term behavior of the cervical microbiome in early sexual life is poorly understood. Methods The V6 and V6–V9 regions of the 16S ribosomal RNA gene were amplified from DNA isolated from exfoliated cervical cells. Specimens from 10 women participating in the Natural History Study of HPV in Guanacaste, Costa Rica were sampled successively over a period of 5–7 years. We sequenced amplicons using 3 different platforms (Sanger, Roche 454, and Illumina HiSeq 2000) and analyzed sequences using pipelines based on 3 different classification algorithms (usearch, RDP Classifier, and pplacer). Results Usearch and pplacer provided consistent microbiome classifications for all sequencing methods, whereas RDP Classifier deviated significantly when characterizing Illumina reads. Comparing across sequencing platforms indicated 7%–41% of the reads were reclassified, while comparing across software pipelines reclassified up to 32% of the reads. Variability in classification was shown not to be due to a difference in read lengths. Six cervical microbiome community types were observed and are characterized by a predominance of either G. vaginalis or Lactobacillus spp. Over the 5–7 year period, subjects displayed fluctuation between community types. A PERMANOVA analysis on pairwise Kantorovich-Rubinstein distances between the microbiota of all samples yielded an F-test ratio of 2.86 (p<0.01), indicating a significant difference comparing within and between subjects’ microbiota. 
Conclusions Amplification and sequencing methods affected the characterization of the microbiome more than classification algorithms. Pplacer and usearch performed consistently with all sequencing methods. The analyses identified 6 community types consistent with those previously reported. The long-term behavior of the cervical microbiome indicated that fluctuations were subject dependent. PMID:22792313
Vernick, Kenneth D.
2017-01-01
Metavisitor is a software package that allows biologists and clinicians without specialized bioinformatics expertise to detect and assemble viral genomes from deep sequence datasets. The package is composed of a set of modular bioinformatic tools and workflows that are implemented in the Galaxy framework. Using the graphical Galaxy workflow editor, users with minimal computational skills can use existing Metavisitor workflows or adapt them to suit specific needs by adding or modifying analysis modules. Metavisitor works with DNA, RNA or small RNA sequencing data over a range of read lengths and can use a combination of de novo and guided approaches to assemble genomes from sequencing reads. We show that the software has the potential for quick diagnosis as well as discovery of viruses from a vast array of organisms. Importantly, we provide here executable Metavisitor use cases, which increase the accessibility and transparency of the software, ultimately enabling biologists or clinicians to focus on biological or medical questions. PMID:28045932
AirShow 1.0 CFD Software Users' Guide
NASA Technical Reports Server (NTRS)
Mohler, Stanley R., Jr.
2005-01-01
AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.
Accurate estimation of short read mapping quality for next-generation genome sequencing
Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas
2012-01-01
Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging the researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
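Mapping qualities are Phred-scaled: Q = -10·log10(P(mapping is wrong)), so Q = 20 means a 1% error probability. Recalibration in the spirit of LoQuM replaces the aligner's Q with one derived from a logistic model over read and alignment features. In the sketch below the feature set mirrors the abstract, but the weights are invented for illustration, not learned from simulated reads as in the paper.

```python
import math

# Phred-scaled mapping quality: Q = -10 * log10(P(mapping is wrong)).
def phred_to_prob(q):
    return 10 ** (-q / 10)

def prob_to_phred(p):
    return -10 * math.log10(p)

# Sketch of logistic recalibration in the spirit of LoQuM. The features
# (mismatches, mean base quality, number of candidate mappings) follow the
# abstract, but these weights are invented for illustration.
WEIGHTS = {"bias": 4.0, "mismatches": -0.8, "mean_base_q": 0.05, "n_mappings": -1.2}

def recalibrated_quality(features):
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in features.items())
    p_correct = 1 / (1 + math.exp(-z))       # logistic model output
    return prob_to_phred(1 - p_correct)      # back to the Phred scale
```

A clean unique mapping scores higher than a multi-mapping read with many mismatches, which is the behaviour a calibrated quality should show.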
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by the Keyence VK software, custom analysis (there is no off-the-shelf way to read the file), reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data directly in MATLAB, binary output at the beginning of the height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image, read the laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file (linear in the low range, linear in the high range), gamma correction for vk4 files, computing the gamma intensity correction, and observations.
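Parsing a fixed-layout binary file of this kind comes down to unpacking little-endian integers at known byte offsets and then reading the data block they point to. The slides do this in MATLAB; the sketch below shows the same idea with Python's struct module. The layout used here (a 4-byte magic, a 32-bit offset to a block holding a count and 16-bit height samples) is invented for illustration and is NOT the real Keyence vk4 format.

```python
import struct

# Sketch of offset-table binary parsing in the style described in the
# slides. The layout (magic, 32-bit little-endian offset, count, 16-bit
# samples) is a made-up example, not the actual vk4 specification.
def read_height_samples(data):
    offset, = struct.unpack_from("<I", data, 4)      # offset stored after magic
    count, = struct.unpack_from("<I", data, offset)  # number of samples
    return list(struct.unpack_from(f"<{count}H", data, offset + 4))
```

Discovering the real offsets, as the slides describe, is the reverse-engineering part; once they are known, the unpacking itself is mechanical.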
cFinder: definition and quantification of multiple haplotypes in a mixed sample.
Niklas, Norbert; Hafenscher, Julia; Barna, Agnes; Wiesinger, Karin; Pröll, Johannes; Dreiseitl, Stephan; Preuner-Stix, Sandra; Valent, Peter; Lion, Thomas; Gabriel, Christian
2015-09-07
Next-generation sequencing allows for determining the genetic composition of a mixed sample. For instance, when performing resistance testing for BCR-ABL1 it is necessary to identify clones and define compound mutations; together with an exact quantification, this may complement diagnosis and therapy decisions with additional information. This applies not only to oncological questions but also to the determination of viral, bacterial or fungal infections. Retrieving multiple haplotypes (more than two) and proportion information from such data with conventional software is difficult, cumbersome and demands multiple manual steps. Therefore, we developed a tool called cFinder that is capable of automatic detection of haplotypes and their accurate quantification within one sample. BCR-ABL1 samples containing multiple clones were used for testing, and our cFinder could identify all previously found clones together with their abundance, and even refined some results. Additionally, reads were simulated using GemSIM with multiple haplotypes; the detection was very close to linear (R² = 0.96). Our aim is not to deduce haploblocks via statistics, but to characterize one sample's composition precisely. As a result, cFinder reports the connections of variants (haplotypes) with their read count and relative occurrence (percentage). Download is available at http://sourceforge.net/projects/cfinder/. cFinder is implemented as an efficient algorithm that can be run on a low-performance desktop computer. Furthermore, it considers paired-end information (if available) and is generally open to any current next-generation sequencing technology and alignment strategy. To our knowledge, this is the first software that enables researchers without extensive bioinformatic support to designate multiple haplotypes and determine how they contribute to a sample's composition.
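The core of this kind of haplotype quantification can be sketched as follows: reduce each read to the tuple of bases it shows at the variant positions of interest, count identical tuples as one haplotype, and report each with its read count and percentage. This is a simplified illustration of the idea, not cFinder's actual algorithm (which also handles paired-end linkage and alignment details).

```python
from collections import Counter

# Simplified sketch of haplotype counting: each read is reduced to the
# tuple of bases at the variant positions, and identical tuples are one
# haplotype. Positions and reads below are illustrative.
def quantify_haplotypes(reads, variant_positions):
    """Return {haplotype: (read_count, percentage)} over all usable reads."""
    usable = [tuple(read[p] for p in variant_positions)
              for read in reads
              if all(p < len(read) for p in variant_positions)]
    counts = Counter(usable)
    total = len(usable)
    return {hap: (n, 100 * n / total) for hap, n in counts.items()}
```

For a BCR-ABL1 sample, a haplotype tuple carrying two resistance alleles at once would correspond to a compound mutation in a single clone.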
ERIC Educational Resources Information Center
Shapiro, Norma
This module on owning and operating a software design company is one of 36 in a series on entrepreneurship. The introduction tells the student what topics will be covered and suggests other modules to read in related occupations. Each unit includes student goals, a case study, and a discussion of the unit subject matter. Learning activities are…
ERIC Educational Resources Information Center
Drummond, Kathryn; Chinen, Marjorie; Duncan, Teresa Garcia; Miller, H. Ray; Fryer, Lindsay; Zmach, Courtney; Culp, Katherine
2011-01-01
"Thinking Reader" is a software program for students in Grades 5-8 that incorporates elements commonly identified in policy reports as being key components of effective adolescent literacy instruction. This evaluation of the impact of "Thinking Reader" use by Grade 6 students focused on two confirmatory research questions about…
ERIC Educational Resources Information Center
Lacava, Paul G.; Rankin, Ana; Mahlios, Emily; Cook, Katie; Simpson, Richard L.
2010-01-01
Many students with Autism Spectrum Disorders (ASD) have delays learning to recognize emotions. Social behavior is also challenging, including initiating interactions, responding to others, developing peer relationships, and so forth. In this single case design study we investigated the relationship between use of computer software ("Mind Reading:…
Orthographic Learning and the Role of Text-to-Speech Software in Dutch Disabled Readers
ERIC Educational Resources Information Center
Staels, Eva; Van den Broeck, Wim
2015-01-01
In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating Share's…
ERIC Educational Resources Information Center
Poock, Melanie M.
1998-01-01
Describes Accelerated Reader (AR), a computer software program that promotes reading; discusses AR hardware requirements; explains how it is used for book selection and testing in schools; assesses the program's strengths and weaknesses; and describes how Grant and Madison Elementary Schools (Muscatine, Iowa) have used the program effectively.…
ERIC Educational Resources Information Center
Fox, Mary Murphy
2012-01-01
The current study investigated Theory of Mind in young adults with autism. The young adults with autism spectrum disorder (ASD) consisted of four students between the ages of 18 and 19 from an on-campus program for students with autism located at Marywood University in Northeastern Pennsylvania. It was hypothesized that "Mind Reading",…
A software to digital image processing to be used in the voxel phantom development.
Vieira, J W; Lima, F R A
2009-11-15
Anthropomorphic models used in computational dosimetry, also denominated phantoms, are based on digital images recorded from scanning of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformations of image formats, compaction of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A computational dosimetry researcher will rarely find all of these capabilities in a single software package, and this gap almost always slows the pace of research or forces the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in an exposure computational model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in Dissertations and Theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses Portuguese in its implementation and interfaces. This paper presents the second version of the DIP, whose main changes are a more formal organization of menus and menu items, and a menu for digital image segmentation. Currently, the DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. The DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images from a given geometry that can be a human body or other volume of interest. It can also read any type of computational image and make conversions. 
When the task involves only an output image, the image is saved as a JPEG file with the Windows defaults; when it involves an image stack, the output binary file is named SGI (Simulações Gráficas Interativas, Interactive Graphic Simulations), an acronym already used in other publications of the GDN/CNPq.
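Reading a stack of axial slices from a raw binary file of this kind amounts to interpreting a flat run of voxel bytes as (slices, rows, cols). The sketch below assumes a header-free file of 8-bit voxels; the actual SGI file layout used by the DIP is not specified in the abstract, so this layout is an illustrative assumption.

```python
import os

# Sketch of loading a voxel stack from a raw binary file, in the spirit of
# the DIP/SGI files described above. Assumes a header-free run of 8-bit
# voxels in (slice, row, col) order; this is NOT the documented SGI layout.
def load_voxel_stack(path, slices, rows, cols):
    with open(path, "rb") as f:
        raw = f.read(slices * rows * cols)
    if len(raw) != slices * rows * cols:
        raise ValueError("file shorter than the requested dimensions")
    # voxel (z, y, x) lives at flat index z*rows*cols + y*cols + x
    return [[list(raw[z * rows * cols + y * cols:
                      z * rows * cols + y * cols + cols])
             for y in range(rows)]
            for z in range(slices)]
```

Each returned slice is then a 2-D matrix ready for the sampling, enhancement or segmentation steps the DIP menus provide.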
NASA Technical Reports Server (NTRS)
De Luca, Gianluca; De Luca, Carlo J.; Bergman, Per
2004-01-01
A portable electronic apparatus records electromyographic (EMG) signals in as many as 16 channels at a sampling rate of 1,024 Hz in each channel. The apparatus (see figure) includes 16 differential EMG electrodes (each electrode corresponding to one channel) with cables and attachment hardware, reference electrodes, an input/output-and-power-adapter unit, a 16-bit analog-to-digital converter, and a hand-held computer that contains a removable 256-MB flash memory card. When all 16 EMG electrodes are in use, full-bandwidth data can be recorded in each channel for as long as 8 hours. The apparatus is powered by a battery and is small enough that it can be carried in a waist pouch. The computer is equipped with a small screen that can be used to display the incoming signals on each channel. Amplitude and time adjustments of this display can be made easily by use of touch buttons on the screen. The user can also set up a data-acquisition schedule to conform to experimental protocols or to manage battery energy and memory efficiently. Once the EMG data have been recorded, the flash memory card is removed from the EMG apparatus and placed in a flash-memory-card-reading external drive unit connected to a personal computer (PC). The PC can then read the data recorded in the 16 channels. Preferably, before further analysis, the data should be stored in the hard drive of the PC. The data files are opened and viewed on the PC by use of special-purpose software. The software for operation of the apparatus resides in a random-access memory (RAM), with backup power supplied by a small internal lithium cell. A backup copy of this software resides on the flash memory card. In the event of loss of both main and backup battery power and consequent loss of this software, the backup copy can be used to restore the RAM copy after power has been restored. Accessories for this device are also available. These include goniometers, accelerometers, foot switches, and force gauges.
LC-MSsim – a simulation software for liquid chromatography mass spectrometry data
Schulz-Trieglaff, Ole; Pfeifer, Nico; Gröpl, Clemens; Kohlbacher, Oliver; Reinert, Knut
2008-01-01
Background Mass Spectrometry coupled to Liquid Chromatography (LC-MS) is commonly used to analyze the protein content of biological samples in large scale studies. The data resulting from an LC-MS experiment is huge, highly complex and noisy. Accordingly, it has sparked new developments in Bioinformatics, especially in the fields of algorithm development, statistics and software engineering. In a quantitative label-free mass spectrometry experiment, crucial steps are the detection of peptide features in the mass spectra and the alignment of samples by correcting for shifts in retention time. At the moment, it is difficult to compare the plethora of algorithms for these tasks. So far, curated benchmark data exists only for peptide identification algorithms but no data that represents a ground truth for the evaluation of feature detection, alignment and filtering algorithms. Results We present LC-MSsim, a simulation software for LC-ESI-MS experiments. It simulates ESI spectra on the MS level. It reads a list of proteins from a FASTA file and digests the protein mixture using a user-defined enzyme. The software creates an LC-MS data set using a predictor for the retention time of the peptides and a model for peak shapes and elution profiles of the mass spectral peaks. Our software also offers the possibility to add contaminants, to change the background noise level and includes a model for the detectability of peptides in mass spectra. After the simulation, LC-MSsim writes the simulated data to mzData, a public XML format. The software also stores the positions (monoisotopic m/z and retention time) and ion counts of the simulated ions in separate files. Conclusion LC-MSsim generates simulated LC-MS data sets and incorporates models for peak shapes and contaminations. Algorithm developers can match the results of feature detection and alignment algorithms against the simulated ion lists and meaningful error rates can be computed. 
We anticipate that LC-MSsim will be useful to the wider community to perform benchmark studies and comparisons between computational tools. PMID:18842122
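A minimal model of the elution profiles LC-MSsim simulates is a Gaussian intensity curve centred on the peptide's predicted retention time. The sketch below uses that common simplification; LC-MSsim's actual peak model is richer, and the parameters here are illustrative assumptions.

```python
import math

# Minimal sketch of a simulated chromatographic elution profile: ion
# intensity follows a Gaussian around the predicted retention time. The
# width and ion-count parameters below are illustrative, not LC-MSsim's.
def elution_profile(rt_predicted, width, total_ion_count, times):
    """Return the Gaussian elution intensity at each scan time."""
    norm = total_ion_count / (width * math.sqrt(2 * math.pi))
    return [norm * math.exp(-((t - rt_predicted) ** 2) / (2 * width ** 2))
            for t in times]
```

A simulator then samples such profiles per peptide, adds contaminants and background noise, and writes the resulting spectra, which is what makes the simulated ion lists usable as a ground truth for feature-detection benchmarks.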
Generic Space Science Visualization in 2D/3D using SDDAS
NASA Astrophysics Data System (ADS)
Mukherjee, J.; Murphy, Z. B.; Gonzalez, C. A.; Muller, M.; Ybarra, S.
2017-12-01
The Southwest Data Display and Analysis System (SDDAS) is a flexible multi-mission/multi-instrument software system intended to support space physics data analysis, and has been in active development for over 20 years. For the Magnetospheric Multiscale (MMS), Juno, Cluster, and Mars Express missions, we have modified these generic tools for visualizing data in two and three dimensions. The SDDAS software is open source and makes use of various other open source packages, including VTK and Qwt. The software offers interactive plotting as well as Python and Lua modules to modify the data before plotting. In principle, by writing a Lua or Python module to read the data, any data could be used. Currently, the software can natively read data in IDFS, CEF, CDF, FITS, SEG-Y, ASCII, and XLS formats. We have integrated the software with other Python packages such as SPICE and SpacePy. Included with the visualization software are a database application and other utilities for managing data, which can retrieve data from the Cluster Active Archive and the Space Physics Data Facility at Goddard, as well as from other local archives. Line plots, spectrograms, geographic plots, volume plots, strip charts, etc. are just some of the types of plots one can generate with SDDAS. Furthermore, due to its design, output is not limited strictly to visualization, as SDDAS can also be used to generate stand-alone IDL or Python visualization code. Lastly, SDDAS has been successfully used as a backend for several web-based analysis systems as well.
2013-01-01
Background Besides the development of comprehensive tools for high-throughput 16S ribosomal RNA amplicon sequence analysis, there exists a growing need for protocols emphasizing alternative phylogenetic markers such as those representing eukaryotic organisms. Results Here we introduce CloVR-ITS, an automated pipeline for comparative analysis of internal transcribed spacer (ITS) pyrosequences amplified from metagenomic DNA isolates and representing fungal species. This pipeline performs a variety of steps similar to those commonly used for 16S rRNA amplicon sequence analysis, including preprocessing for quality, chimera detection, clustering of sequences into operational taxonomic units (OTUs), taxonomic assignment (at class, order, family, genus, and species levels) and statistical analysis of sample groups of interest based on user-provided information. Using ITS amplicon pyrosequencing data from a previous human gastric fluid study, we demonstrate the utility of CloVR-ITS for fungal microbiota analysis and provide runtime and cost examples, including analysis of extremely large datasets on the cloud. We show that the largest fractions of reads from the stomach fluid samples were assigned to Dothideomycetes, Saccharomycetes, Agaricomycetes and Sordariomycetes but that all samples were dominated by sequences that could not be taxonomically classified. Representatives of the Candida genus were identified in all samples, most notably C. quercitrusa, while sequence reads assigned to the Aspergillus genus were only identified in a subset of samples. CloVR-ITS is made available as a pre-installed, automated, and portable software pipeline for cloud-friendly execution as part of the CloVR virtual machine package (http://clovr.org). 
Conclusion The CloVR-ITS pipeline provides fungal microbiota analysis that can be complementary to bacterial 16S rRNA and total metagenome sequence analysis allowing for more comprehensive studies of environmental and host-associated microbial communities. PMID:24451270
2014-01-01
Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic sequencing data have emerged; however, these approaches often suffer from issues of specificity and efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps, including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611
Raster-scanning serial protein crystallography using micro- and nano-focused synchrotron beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coquelle, Nicolas; CNRS, IBS, 38044 Grenoble; CEA, IBS, 38044 Grenoble
A raster-scanning serial protein crystallography approach is presented that consumes as little as ∼200–700 nl of sedimented crystals. New serial data pre-analysis software, NanoPeakCell, is introduced. High-resolution structural information was obtained from lysozyme microcrystals (20 µm in the largest dimension) using raster-scanning serial protein crystallography on micro- and nano-focused beamlines at the ESRF. Data were collected at room temperature (RT) from crystals sandwiched between two silicon nitride wafers, thereby preventing their drying, while limiting background scattering and sample consumption. In order to identify crystal hits, new multi-processing and GUI-driven Python-based pre-analysis software was developed, named NanoPeakCell, that is able to read data from a variety of crystallographic image formats. Further data processing was carried out using CrystFEL, and the resultant structures were refined to 1.7 Å resolution. The data demonstrate the feasibility of RT raster-scanning serial micro- and nano-protein crystallography at synchrotrons and validate it as an alternative approach for the collection of high-resolution structural data from micro-sized crystals. Advantages of the proposed approach are its thriftiness, its handling-free nature, the reduced amount of sample required, the adjustable hit rate, the high indexing rate and the minimization of background scattering.
Zheng, Qi; Grice, Elizabeth A
2016-10-01
Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost's algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.
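The idea of assigning a posterior-based mapping quality to an ambiguously mapped read can be illustrated with a minimal sketch. This is not AlignerBoost's actual model; the score-to-likelihood mapping and the cap of 60 are assumptions chosen for illustration:

```python
# Illustrative sketch of a Bayesian-style mapping quality (not AlignerBoost's
# algorithm): treat each candidate alignment score as a log-scaled likelihood
# 10^(s/10), normalize over all candidates, and convert the best candidate's
# posterior into a Phred-scaled quality, capped at 60.
import math

def mapping_quality(scores):
    likes = [10 ** (s / 10.0) for s in scores]
    p_best = max(likes) / sum(likes)
    err = max(1.0 - p_best, 1e-10)       # probability the best hit is wrong
    return min(int(round(-10.0 * math.log10(err))), 60)

print(mapping_quality([90, 60]))   # one clearly best hit -> quality 30
print(mapping_quality([90, 90]))   # two equally good hits -> quality 3
```

The key behavior this captures is the one the abstract describes: reads with several near-equal candidate placements (typical of repetitive regions) receive low quality and can be filtered, while near-unique hits keep high quality.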
Shiotani, Akiko; Honda, Keisuke; Kawakami, Makiko; Kimura, Yoshiki; Yamanaka, Yoshiyuki; Fujita, Minoru; Matsumoto, Hiroshi; Tarumi, Ken-ichi; Manabe, Noriaki; Haruma, Ken
2012-01-01
The aim was to investigate the clinical utility of RAPID Access 6.5 Quickview software and to evaluate whether preview of the capsule endoscopy video by a trained nurse could detect significant lesions accurately compared with endoscopists. As reading capsule endoscopy is time consuming, one possible cost-effective strategy could be the use of trained nonphysicians or newly available software to preread and identify potentially important capsule images. The 100 capsule images of a variety of significant lesions from 87 patients were investigated. The minimum percentages for settings of sensitivity that could pick up the selected images and the detection rate for significant lesions by a well-trained nurse, two endoscopists with limited experience in reading, and one well-trained physician were examined. The frequency of the selected lesions picked up by Quickview mode using percentages for sensitivity settings of 5%, 15%, 25%, and 35% were 61%, 74%, 93%, and 98%, respectively. The percentages for sensitivity significantly correlated (r=0.78, P<0.001) with the reading time. The detection rate by the nurse or the well-trained physician was significantly higher than that by the physician with limited capsule experience (87% and 84.1% vs. 62.7%; P<0.01). The clinical use of Quickview at 25% did not significantly improve the detection rate. Quickview mode can reduce reading time but has an unacceptably high miss rate for potentially important lesions. Use of a trained nonphysician assistant can reduce physician's time and improve diagnostic yield.
Atmospheric Science Data Center
2016-08-22
MISBR MISR Browse Data: Color browse image of the Ellipsoid product for each camera, resampled to 2.2 km resolution.
Atmospheric Science Data Center
2018-04-19
... Earthdata Search parameters: average aerosol optical depth (MISR Aerosol/Land product, daily).
Chaplin, J C; Russell, N A; Krasnogor, N
2012-07-01
In this paper we detail experimental methods to implement registers, logic gates and logic circuits using populations of photochromic molecules exposed to sequences of light pulses. Photochromic molecules are molecules with two or more stable states that can be switched reversibly between states by illuminating with appropriate wavelengths of radiation. Registers are implemented by using the concentration of molecules in each state in a given sample to represent an integer value. The register's value can then be read using the intensity of a fluorescence signal from the sample. Logic gates have been implemented using a register with inputs in the form of light pulses to implement 1-input/1-output and 2-input/1-output logic gates. A proof-of-concept logic circuit is also demonstrated; coupled with the software workflow, it illustrates the transition from a circuit design to the corresponding sequence of light pulses. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
TriageTools: tools for partitioning and prioritizing analysis of high-throughput sequencing data.
Fimereli, Danai; Detours, Vincent; Konopka, Tomasz
2013-04-01
High-throughput sequencing is becoming a popular research tool but carries with it considerable costs in terms of computation time, data storage and bandwidth. Meanwhile, some research applications focusing on individual genes or pathways do not necessitate processing of a full sequencing dataset. Thus, it is desirable to partition a large dataset into smaller, manageable, but relevant pieces. We present a toolkit for partitioning raw sequencing data that includes a method for extracting reads that are likely to map onto pre-defined regions of interest. We show the method can be used to extract information about genes of interest from DNA or RNA sequencing samples in a fraction of the time and disk space required to process and store a full dataset. We report speedup factors between 2.6 and 96, depending on settings and samples used. The software is available at http://www.sourceforge.net/projects/triagetools/.
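The extraction step, keeping only reads likely to map onto a predefined region of interest, can be illustrated with a minimal k-mer prefilter (a sketch under assumed parameters, not TriageTools code; k and the sharing threshold are invented):

```python
# Minimal sketch of read triage by k-mer sharing (not TriageTools itself):
# index all k-mers of a region of interest, then keep only reads that share
# at least `min_shared` k-mers with that index. k=8 is an assumed toy value.

def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def triage(reads, region, k=8, min_shared=1):
    index = kmers(region, k)
    return [r for r in reads if len(kmers(r, k) & index) >= min_shared]

region = "ACGTACGTGGTTCCAAGGTT"
reads = ["ACGTACGTGGTT",        # overlaps the region
         "TTTTTTTTTTTT"]        # unrelated sequence
print(triage(reads, region))    # keeps only the first read
```

Because the filter touches each read once and never aligns it, the surviving subset can then be processed with a fraction of the time and disk space of the full dataset, which is the speedup the abstract reports.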
Towards understanding software: 15 years in the SEL
NASA Technical Reports Server (NTRS)
Mcgarry, Frank; Pajerski, Rose
1990-01-01
For 15 years, the Software Engineering Laboratory (SEL) at GSFC has been carrying out studies and experiments for the purpose of understanding, assessing, and improving software, and software processes within a production software environment. The SEL comprises three major organizations: (1) the GSFC Flight Dynamics Division; (2) the University of Maryland Computer Science Department; and (3) the Computer Sciences Corporation Flight Dynamics Technology Group. These organizations have jointly carried out several hundred software studies, producing hundreds of reports, papers, and documents: all describing some aspect of the software engineering technology that has undergone analysis in the flight dynamics environment. The studies range from small controlled experiments (such as analyzing the effectiveness of code reading versus functional testing) to large, multiple-project studies (such as assessing the impacts of Ada on a production environment). The key findings that NASA feels have laid the foundation for ongoing and future software development and research activities are summarized.
Kalina, Tomas; Flores-Montero, Juan; Lecrevisse, Quentin; Pedreira, Carlos E; van der Velden, Vincent H J; Novakova, Michaela; Mejstrikova, Ester; Hrusak, Ondrej; Böttcher, Sebastian; Karsch, Dennis; Sędek, Łukasz; Trinquand, Amelie; Boeckx, Nancy; Caetano, Joana; Asnafi, Vahid; Lucio, Paulo; Lima, Margarida; Helena Santos, Ana; Bonaccorso, Paola; van der Sluijs-Gelling, Alita J; Langerak, Anton W; Martin-Ayuso, Marta; Szczepański, Tomasz; van Dongen, Jacques J M; Orfao, Alberto
2015-02-01
Flow cytometric immunophenotyping has become essential for accurate diagnosis, classification, and disease monitoring in hemato-oncology. The EuroFlow Consortium has established a fully standardized "all-in-one" pipeline consisting of standardized instrument settings, reagent panels, and sample preparation protocols and software for data analysis and disease classification. For its reproducible implementation, parallel development of a quality assurance (QA) program was required. Here, we report on the results of four consecutive annual rounds of the novel external QA EuroFlow program. The novel QA scheme aimed at monitoring the whole flow cytometric analysis process (cytometer setting, sample preparation, acquisition and analysis) by reading the median fluorescence intensities (MedFI) of defined lymphocyte subsets. Each QA participant applied the predefined reagents' panel on blood cells of local healthy donors. A uniform gating strategy was applied to define lymphocyte subsets and to read MedFI values per marker. The MedFI values were compared with reference data and deviations from reference values were quantified using performance score metrics. In four annual QA rounds, we analyzed 123 blood samples from local healthy donors on 14 different instruments in 11 laboratories from nine European countries. The immunophenotype of defined cellular subsets appeared sufficiently standardized to permit unified (software) data analysis. The coefficient of variation of MedFI for 7 of 11 markers repeatedly stayed below 30%, and average MedFI in each QA round ranged from 86 to 125% of the overall median. Calculation of performance scores was instrumental to pinpoint standardization failures and their causes. Overall, the new EuroFlow QA system for the first time allowed quantification of the technical variation that is introduced in the measurement of fluorescence intensities in a multicentric setting over an extended period of time. 
EuroFlow QA is a proficiency test specific for laboratories that use standardized EuroFlow protocols. It may be used to complement, but not replace, established proficiency tests. © 2014 International Society for Advancement of Cytometry.
The XBabelPhish MAGE-ML and XML translator.
Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A
2008-01-18
MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.
NASA Astrophysics Data System (ADS)
Xuan, Chuang; Oda, Hirokuni
2015-11-01
The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
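The convolution effect that deconvolution reverses can be illustrated with a minimal forward model (a toy sketch, not UDECON code; the magnetization signal and the Gaussian sensor-response curve are invented):

```python
# Toy forward model of SRM pass-through smoothing (not UDECON code): the
# measured signal is the true magnetization convolved with the sensor-response
# curve, here an invented normalized Gaussian. Deconvolution inverts this.
import math

def convolve_same(signal, kernel):
    """'Same'-length discrete convolution with a centered kernel."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += w * signal[k]
        out.append(acc)
    return out

signal = [0.0] * 50
for i in range(20, 25):              # a short, sharply magnetized interval
    signal[i] = 1.0
kernel = [math.exp(-0.5 * ((j - 7) / 2.0) ** 2) for j in range(15)]
kernel = [w / sum(kernel) for w in kernel]   # normalized sensor response
measured = convolve_same(signal, kernel)
print(max(measured) < max(signal))   # smoothing lowers the sharp peak
```

The smoothed `measured` series is what the magnetometer records; restoring `signal` from it is the ill-posed inverse problem for which UDECON tunes smoothness and position corrections.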
Holistic Approaches to Reading (The Printout).
ERIC Educational Resources Information Center
Balajthy, Ernest
1989-01-01
Presents eight guidelines to consider when using computers for language instruction, emphasizing computer use in a social and purposeful context. Suggests computer software which adheres to these guidelines. (MM)
... recorded versions of any book, even textbooks. Computer software is also available that "reads" printed material aloud. Ask your parent, teacher, or learning disability services coordinator how to get these services if you ...
NASA Astrophysics Data System (ADS)
Neiles, Kelly Y.
There is great concern in the scientific community that students in the United States, when compared with other countries, are falling behind in their scientific achievement. Increasing students' reading comprehension of scientific text may be one of the components involved in students' science achievement. To investigate students' reading comprehension, this quantitative study examined the effects of different reader characteristics, namely, students' logical reasoning ability, factual chemistry knowledge, working memory capacity, and schema of the chemistry concepts, on reading comprehension of a chemistry text. Students' reading comprehension was measured through their ability to encode the text, access the meanings of words (lexical access), make bridging and elaborative inferences, and integrate the text with their existing schemas to make a lasting mental representation of the text (situational model). Students completed a series of tasks that measured the reader characteristic and reading comprehension variables. Some of the variables were measured using new technologies and software to investigate different cognitive processes. These technologies and software included eye tracking to investigate students' lexical accessing and a Pathfinder program to investigate students' schema of the chemistry concepts. The results from this study were analyzed using canonical correlation and regression analysis. The canonical correlation analysis allows for the ten variables described previously to be included in one multivariate analysis. Results indicate that the relationship between the reader characteristic variables and the reading comprehension variables is significant. The resulting canonical function accounts for a greater amount of variance in students' responses than any individual variable. 
Regression analysis was used to further investigate which reader characteristic variables accounted for the differences in students' responses for each reading comprehension variable. The results from this regression analysis indicated that the two schema measures (measured by the Pathfinder program) accounted for the greatest amount of variance in four of the reading comprehension variables (encoding the text, bridging and elaborative inferences, and delayed recall of a general summary). This research suggests that providing students with background information on chemistry concepts prior to having them read the text may result in better understanding and more effective incorporation of the chemistry concepts into their schema.
PEOPLE IN PHYSICS: Interview with Scott Durow, Software Engineer, Oxford
NASA Astrophysics Data System (ADS)
Burton, Conducted by Paul
1998-05-01
Scott Durow was educated at Bootham School, York. He studied Physics, Mathematics and Chemistry to A-level and went on to Nottingham University to read Medical Physics. After graduating from Nottingham he embarked on his present career as a Software Engineer based in Oxford. He is a musician in his spare time, as a member of a band and playing the French horn.
Product Definition Data Interface (PDDI) Product Specification
1991-07-01
syntax of the language gives a precise specification of the data without interpretation of it. M - Constituent Read Block. CSECT - Control Section, the...to conform to the PDDI Access Software's internal data representation so that it may be further processed. JCL - Job Control Language - IBM language...software development and life cycle phases. QUALITY CONTROL - The planned and systematic application of all actions (management/technical) necessary to
ERIC Educational Resources Information Center
Baker, Fiona S.
2015-01-01
This study explores the expectations and early and subsequent realities of text-to-speech software for 24 nonnative-English-speaking college students who were experiencing reading difficulties in their freshman year of college. The study took place over two semesters in one academic year (from September to June) at a community college on the…
Alternatives for Developing User Documentation for Applications Software
1991-09-01
style that is designed to match adult reading behaviors, using reader-based writing techniques, developing effective graphics, creating reference aids...involves research, analysis, design, and testing. The writer must have a solid understanding of the technical aspects of the document being prepared, good...
ERIC Educational Resources Information Center
Wood, Eileen; Anderson, Alissa; Piquette-Tomei, Noella; Savage, Robert; Mueller, Julie
2011-01-01
Support requests were documented for 10 teachers (4 kindergarten, 4 grade one, and 2 grade one/two teachers) who received just-in-time instructional support over a 2 1/2 month period while implementing a novel reading software program as part of their literacy instruction. In-class observations were made of each instructional session. Analysis of…
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of k-mers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous k-mer may be frequently observed if it has few nucleotide differences with valid k-mers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of k-mers from their observed frequencies by analyzing the misread relationships among observed k-mers. We also propose a method to estimate the threshold useful for validating k-mers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". 
We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
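The basic k-mer frequency idea, counting observed k-mers and flagging reads that contain k-mers below a validation threshold, can be sketched as follows (a simplification that ignores the paper's repeat-aware model; k and the threshold are illustrative toy values):

```python
# Sketch of threshold-based k-mer error detection (a simplification of the
# paper's repeat-aware model): count all k-mers across the reads, then flag
# any read containing a k-mer rarer than an assumed threshold.
from collections import Counter

def kmer_counts(reads, k=4):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def suspect_reads(reads, k=4, threshold=2):
    counts = kmer_counts(reads, k)
    return [r for r in reads
            if any(counts[r[i:i + k]] < threshold
                   for i in range(len(r) - k + 1))]

reads = ["ACGTAC", "ACGTAC", "ACGTAG"]   # last read has a likely error
print(suspect_reads(reads))              # flags only the odd read out
```

The paper's contribution is precisely where this naive scheme fails: in repeat-rich genomes an erroneous k-mer can still be frequent, so the authors instead infer genomic k-mer frequencies and a data-driven threshold.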
Planetary Atmosphere Dynamics and Radiative Transfer
NASA Technical Reports Server (NTRS)
Atkinson, David H.
1996-01-01
This research program has dealt with two projects in the field of planetary atmosphere dynamics and radiative energy transfer, one theoretical and one experimental. The first project, in radiative energy transfer, incorporated the capability to isolate and quantify the contribution of individual atmospheric components to the Venus radiative balance and thermal structure to greatly improve the current understanding of the radiative processes occurring within the Venus atmosphere. This is possible by varying the mixing ratios of each gas species, and the location, number density and aerosol size distributions of the clouds. This project was a continuation of the work initiated under a 1992 University Consortium Agreement. Under the just completed grant, work has continued on the use of a convolution-based algorithm that provided the capability to calculate the k coefficients of a gas mixture at different temperatures, pressures and spectral intervals from the separate k-distributions of the individual gas species. The second primary goal of this research dealt with the Doppler wind retrieval for the successful Galileo Jupiter probe mission in December 1995. In anticipation of the arrival of Galileo at Jupiter, software development continued to read the radioscience and probe/orbiter trajectory data provided by the Galileo project and required for Jupiter zonal wind measurements. Sample experiment radioscience data records and probe/orbiter trajectory data files provided by the Galileo Radioscience and Navigation teams at the Jet Propulsion Laboratory, respectively, were used for the first phase of the software development. The software to read the necessary data records was completed in 1995. The procedure by which the wind retrieval takes place begins with initial consistency checks of the raw data, preliminary data reductions, wind recoveries, iterative reconstruction of the probe descent profile, and refined wind recoveries. 
At each stage of the wind recovery consistency is checked and maintained between the orbiter navigational data, the radioscience data, and the probe descent profile derived by the Atmospheric Instrument Team. Preliminary results show that the zonal winds at Jupiter increase with depth to approximately 150 m/s.
DSPSR: Digital Signal Processing Software for Pulsar Astronomy
NASA Astrophysics Data System (ADS)
van Straten, W.; Bailes, M.
2010-10-01
DSPSR, written primarily in C++, is an open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. The library implements an extensive range of modular algorithms for use in coherent dedispersion, filterbank formation, pulse folding, and other tasks. The software is installed and compiled using the standard GNU configure and make system, and is able to read astronomical data in 18 different file formats, including FITS, S2, CPSR, CPSR2, PuMa, PuMa2, WAPP, ASP, and Mark5.
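As background to the dedispersion step mentioned above, the frequency-dependent dispersion delay that such software removes can be computed directly (a standard cold-plasma formula, not DSPSR code; the DM and band edges are illustrative values):

```python
# Background sketch (not DSPSR code): the cold-plasma dispersion delay that
# dedispersion removes. With frequencies in MHz and the dispersion measure DM
# in pc cm^-3, the standard dispersion constant is about 4.149e3 s MHz^2.

def dispersion_delay_s(dm, f_lo_mhz, f_hi_mhz):
    """Extra arrival delay (seconds) of the low frequency vs the high one."""
    return 4.149e3 * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# An illustrative DM of 50 across a 1280-1530 MHz band delays the band
# bottom by roughly 38 ms, many pulse periods for a millisecond pulsar.
print(dispersion_delay_s(50.0, 1280.0, 1530.0))
```

Coherent dedispersion, as implemented in DSPSR, goes further than this channel-by-channel correction: it deconvolves the dispersion phase response from the raw voltage data, removing smearing within each channel as well.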
Lovett, M W
1984-05-01
Children referred with specific reading dysfunction were subtyped as accuracy disabled or rate disabled according to criteria developed from an information processing model of reading skill. Multiple measures of oral and written language development were compared for two subtyped samples matched on age, sex, and IQ. The two samples were comparable in reading fluency, reading comprehension, word knowledge, and word retrieval functions. Accuracy disabled readers demonstrated inferior decoding and spelling skills. The accuracy disabled sample proved deficient in their understanding of oral language structure and in their ability to associate unfamiliar pseudowords and novel symbols in a task designed to simulate some of the learning involved in initial reading acquisition. It was suggested that these two samples of disabled readers may be best described with respect to their relative standing along a theoretical continuum of normal reading development.
HSA: a heuristic splice alignment tool.
Bu, Jingde; Chi, Xuebin; Jin, Zhong
2013-01-01
RNA-Seq methodology is a revolutionary transcriptomics sequencing technology and a representative application of Next-Generation Sequencing (NGS). With the high-throughput sequencing of RNA-Seq, we can acquire much richer information, such as differential expression and novel splice variants, from deep sequence analysis and data mining. However, the short read length poses a great challenge to alignment, especially when the reads span two or more exons. A two-step heuristic splice alignment tool was developed in this investigation. First, raw reads are mapped to the reference with an unspliced aligner, BWA; second, reads that remain unmapped are split into three equal short reads (seeds), each seed is aligned to the reference, hits are filtered, possible split positions of the read are searched for, and hits are extended to a complete match. Compared with other splice alignment tools such as SOAPsplice and TopHat2, HSA performs better in call rate and efficiency, though its results are somewhat less accurate. HSA is an effective spliced aligner for RNA-Seq read mapping, available at https://github.com/vlcc/HSA.
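The seed-splitting step described above can be sketched as follows (a toy exact-match illustration, not the HSA implementation; the reference and read are invented, and a real aligner would tolerate mismatches and extend hits across the junction):

```python
# Toy illustration of HSA's seed strategy (not the actual implementation):
# split an initially unmapped read into three short seeds, locate each seed
# in the reference by exact matching, and report candidate positions. Real
# spliced alignment then extends and joins hits across the splice junction.

def split_seeds(read):
    third = len(read) // 3           # remainder goes to the last seed
    return [read[:third], read[third:2 * third], read[2 * third:]]

def seed_hits(read, reference):
    hits = {}
    for seed in split_seeds(read):
        pos = reference.find(seed)
        if pos != -1:
            hits[seed] = pos
    return hits

reference = "AAAACGTTTTTTTTTTGGCAAAA"   # "exon1 ... intron ... exon2"
read = "ACGTGGCA"                       # spans the two "exons"
print(seed_hits(read, reference))
```

Seeds landing far apart in the reference, as here, are exactly the signal that the read straddles a splice junction, which is where unspliced aligners like BWA fail on their own.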
Analysis of a mammography teaching program based on an affordance design model.
Luo, Ping; Eikman, Edward A; Kealy, William; Qian, Wei
2006-12-01
The wide use of computer technology in education, particularly in mammogram reading, calls for evaluation of e-learning. Existing media-comparison studies, learner attitude evaluations, and performance tests are problematic. Based on an affordance design model, this study examined an existing e-learning program on mammogram reading. The selection criteria included content relatedness, representativeness, e-learning orientation, image quality, program completeness, and accessibility. A case study was conducted to examine the affordance features, functions, and presentations of the selected software. Data collection and analysis methods included interviews, protocol-based document analysis, and usability tests and inspection. Descriptive statistics were also calculated. The examination of PBE showed that this educational software provides several tools. The learner can use these tools to optimize displays, scan images, compare different projections, mark regions of interest, construct a descriptive report, assess one's learning outcomes, and compare one's decisions with the experts' decisions. Further, PBE provides resources for the learner to construct knowledge and skills, including a categorized image library, a term-searching function, and some teaching links. In addition, users found it easy to navigate and carry out tasks. Users also reacted positively toward PBE's navigation system, instructional aids, layout, pace and flow of information, graphics, and other presentation design. The software provides learners with cognitive tools, supporting their perceptual problem-solving processes and extending their capabilities. Learners can internalize the mental models in mammogram reading through multiple perceptual triangulations, sensitization of related features, semantic description of mammogram findings, and expert-guided semantic report construction.
The design of these cognitive tools and the software interface matches the findings and principles in human learning and instructional design. Working with PBE's case-based simulations and categorized gallery, learners can enrich and transfer their experience to their jobs.
Train the Trainer. Facilitator Guide Sample. Basic Blueprint Reading (Chapter One).
ERIC Educational Resources Information Center
Saint Louis Community Coll., MO.
This publication consists of three sections: facilitator's guide--train the trainer, facilitator's guide sample--Basic Blueprint Reading (Chapter 1), and participant's guide sample--basic blueprint reading (chapter 1). Section I addresses why the trainer should learn new classroom techniques; lecturing versus facilitating; learning styles…
Intraoperative visualization and assessment of electromagnetic tracking error
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor
2015-03-01
Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation, in a plug-and-play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, which is pivot calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into a thin-plate B-spline transform and visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess the reproducibility of the method, both with and without ferromagnetic objects placed in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate the potential for visualizing electromagnetic tracking error in real time in intraoperative environments in feasibility clinical trials of image-guided interventions.
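The error metric described above, the distance between the electromagnetically and optically tracked tip positions, can be sketched as below. This assumes the two readings are already time-synchronized and expressed in a common coordinate frame (which the pivot calibration and registration steps provide in practice); the function names are illustrative, not part of SlicerIGT.

```python
import math

def tracking_error(em_tip, optical_tip):
    """Euclidean distance between EM and optical (ground-truth) tip positions."""
    return math.dist(em_tip, optical_tip)

def error_field(samples):
    """Map each ground-truth location to its tracking error: the scattered
    input that a thin-plate-spline interpolation would then smooth for
    real-time visualization."""
    return [(opt, tracking_error(em, opt)) for em, opt in samples]
```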
LLCEDATA and LLCECALC for Windows version 1.0, Volume 1: User's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFadden, J.G.
LLCEDATA and LLCECALC for Windows are user-friendly computer software programs that work together to determine the proper waste designation, handling, and disposition requirements for Long Length Contaminated Equipment (LLCE). LLCEDATA reads from a variety of databases to produce an equipment data file (EDF) that represents a snapshot of both the LLCE and the tank it originates from. LLCECALC reads the EDF and a gamma assay (AV2) file that is produced by the Flexible Receiver Gamma Energy Analysis System. LLCECALC performs corrections to the AV2 file as it is being read and characterizes the LLCE. Both programs produce a variety of reports, including a characterization report and a status report. The status report documents each action taken by the user, LLCEDATA, and LLCECALC. Documentation for LLCEDATA and LLCECALC for Windows is available in three volumes. Volume 1 is a user's manual, which is intended as a quick reference for both LLCEDATA and LLCECALC. Volume 2 is a technical manual, and Volume 3 is a software verification and validation document.
Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.
Song, Li; Florea, Liliana
2015-01-01
Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
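The k-mer trust idea described above can be illustrated with a toy corrector. This is not Rcorrector's algorithm: Rcorrector searches paths in a De Bruijn graph and computes a position-local threshold, whereas the sketch below uses one global floor and single-base substitutions, purely to show how untrusted k-mers flag likely errors.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer occurring in the input reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, floor=2):
    """Where a k-mer is untrusted (count < floor), apply the single-base
    substitution that yields the highest-count k-mer, if any improves it."""
    read = list(read)
    for i in range(len(read) - k + 1):
        kmer = "".join(read[i:i + k])
        if counts[kmer] >= floor:
            continue
        best, best_count, pos = None, counts[kmer], 0
        for j in range(k):
            for base in "ACGT":
                cand = kmer[:j] + base + kmer[j + 1:]
                if counts[cand] > best_count:
                    best, best_count, pos = cand, counts[cand], j
        if best is not None:
            read[i + pos] = best[pos]
    return "".join(read)
```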
Enhancing reading performance through action video games: the role of visual attention span.
Antzaka, A; Lallier, M; Meyer, S; Diard, J; Carreiras, M; Valdois, S
2017-11-06
Recent studies reported that Action Video Game (AVG) training improves not only certain attentional components, but also reading fluency in children with dyslexia. We aimed to investigate the shared attentional components of AVG playing and reading by studying whether the Visual Attention (VA) span, a component of visual attention that has previously been linked to both reading development and dyslexia, is improved in frequent players of AVGs. Thirty-six French fluent adult readers, matched on chronological age and text reading proficiency, formed two groups: frequent AVG players and non-players. Participants performed behavioural tasks measuring the VA span and a challenging reading task (reading of briefly presented pseudo-words). AVG players performed better on both tasks, and performance on the two tasks was correlated. These results further support the transfer of the attentional benefits of playing AVGs to reading, and indicate that the VA span could be a core component mediating this transfer. The correlation between VA span and pseudo-word reading also supports the involvement of the VA span even in adult reading. Future studies could combine VA span training with defining features of AVGs in order to build a new generation of remediation software.
ERIC Educational Resources Information Center
Goff, Deborah A.; Pratt, Chris; Ong, Ben
2005-01-01
The primary aim of the current study was to identify the strongest independent predictors of reading comprehension using word reading, language and memory variables in a normal sample of 180 children in grades 3-5, with a range of word reading skills. It was hypothesized that orthographic processing, receptive vocabulary and verbal working memory…
ERIC Educational Resources Information Center
Moody, Kristie
2017-01-01
The purpose of this study was to examine the effectiveness of a reading curriculum on high school students' achievement, confidence in reading and teachers' perceptions. A paired samples t test was used to compare students' pretest and posttest scores on the reading curriculum. Two independent samples t tests were run to compare the control and…
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
Budavari, Tamas; Langmead, Ben; Wheelan, Sarah J.; Salzberg, Steven L.; Szalay, Alexander S.
2015-01-01
When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc's reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license. PMID:25780763
Zheng, Qi; Grice, Elizabeth A.
2016-01-01
Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost. PMID:27706155
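One common way to turn the competing alignment scores of an ambiguously mapped read into a Phred-scaled mapping quality, in the spirit of the posterior-probability framework described above, is sketched below. The exponential scoring model and the cap of 60 are conventional simplifications for illustration, not AlignerBoost's actual Bayesian algorithm.

```python
import math

def mapping_quality(scores, scale=0.1):
    """Posterior probability that the best-scoring location is the true one,
    assuming P(location) is proportional to exp(scale * alignment_score),
    reported on the Phred scale: MAPQ = -10 * log10(1 - p_best)."""
    weights = [math.exp(scale * s) for s in scores]
    p_best = max(weights) / sum(weights)
    if p_best >= 1.0:
        return 60  # cap for effectively unique hits, as aligners commonly do
    return min(60, round(-10 * math.log10(1 - p_best)))
```

A read with two equally good candidate locations gets p_best = 0.5 and hence MAPQ of about 3, matching the intuition that such a read is essentially a coin flip.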
Knowledge-Based Software Development Tools
1993-09-01
GREEN, C., AND WESTFOLD, S. Knowledge-based programming self-applied. In Machine Intelligence 10, J. E. Hayes, D. Michie, and Y. Pao, Eds., Wiley... Technical Report KES.U.84.2, Kestrel Institute, April 1984. [18] KORF, R. E. Toward a model of representation changes. Artificial Intelligence 14, 1... Artificial Intelligence 27, 1 (February 1985), 43-96. Reprinted in Readings in Artificial Intelligence and Software Engineering, C. Rich and R. Waters
Multichannel Networked Phasemeter Readout and Analysis
NASA Technical Reports Server (NTRS)
Edmonds, Karina
2008-01-01
Netmeter software reads a data stream from up to 250 networked phasemeters, synchronizes the data, saves the reduced data to disk (after applying a low-pass filter), and provides a Web server interface for remote control. Unlike older phasemeter software that requires a special, real-time operating system, this program can run on any general-purpose computer. It needs only about five percent of the CPU (central processing unit) to process 20 channels, and it adds built-in data logging and network-based GUIs (graphical user interfaces) implemented in Scalable Vector Graphics (SVG). Netmeter runs on Linux and Windows. It displays the instantaneous displacements measured by several phasemeters at a user-selectable rate, up to 1 kHz. The program monitors the measurement and reference channel frequencies. For ease of use, status levels in Netmeter are color coded: green for normal operation, yellow for network errors, and red for optical misalignment problems. Netmeter includes user-selectable filters up to 4 k samples, and user-selectable averaging windows (after filtering). Before filtering, the program saves raw data to disk using a burst-write technique.
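The abstract does not specify Netmeter's filter design, but the filter-then-average data reduction it describes can be illustrated with the simplest possible low-pass filter, a moving average over the incoming samples. The function below is a generic illustration, not Netmeter code.

```python
def moving_average(samples, window):
    """Simple moving-average low-pass filter over a stream of samples,
    illustrating the filtering stage of a filter-then-average reduction."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out
```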
NASA Astrophysics Data System (ADS)
Yussup, F.; Ibrahim, M. M.; Haris, M. F.; Soh, S. C.; Hasim, H.; Azman, A.; Razalim, F. A. A.; Yapp, R.; Ramli, A. A. M.
2016-01-01
With the growth of technology, many devices and equipment can be connected to the network and the internet to enable online data acquisition for real-time data monitoring and control from monitoring devices located at remote sites. The Centralized Radiation Monitoring System (CRMS) is a system that enables the area radiation level at various locations in the Malaysian Nuclear Agency (Nuklear Malaysia) to be monitored centrally using a web browser. The Local Area Network (LAN) in Nuclear Malaysia is utilized in CRMS as a communication medium for data acquisition of the area radiation levels from radiation detectors. The development of the system involved device configuration, wiring, network and hardware installation, and software and web development. This paper describes the software upgrade on the system server that is responsible for acquiring and recording the area radiation readings from the detectors. The recorded readings are retrieved by a web program to be displayed on a website. Besides the main feature, centrally acquiring the area radiation levels in Nuclear Malaysia, the upgrade adds new features such as a uniform time interval for data recording and exporting, a warning system, and dose triggering.
Reading and Comprehension Levels in a Sample of Urban, Low-Income Persons
ERIC Educational Resources Information Center
Delgado, Cheryl; Weitzel, Marilyn
2013-01-01
Objective: Because health literacy is related to healthcare outcomes, this study looked at reading and comprehension levels in a sample of urban, low-income persons. Design: This was a descriptive exploration of reading comprehension levels, controlled for medical problems that could impact on vision and therefore ability to read. Setting: Ninety…
Ning, Yi; Li, Yan-Ling; Zhou, Guo-Ying; Yang, Lu-Cun; Xu, Wen-Hua
2016-04-01
High throughput sequencing technology, also called Next Generation Sequencing (NGS), can sequence hundreds of thousands of sequences from different samples at the same time. In the present study, culture-independent high throughput sequencing was applied to sequence the fungal internal transcribed spacer 1 (ITS1) region in metagenomic DNA from the root of Sinopodophyllum hexandrum. After quality control, 22,565 reads remained. Cluster similarity analysis based on 97% sequence similarity yielded 517 OTUs for the three samples (LD1, LD2 and LD3). All fungi identified from the reads of the OTUs, using the RDP classifier software at a 0.8 classification threshold, were classified into 13 classes, 35 orders, 44 families, and 55 genera. Among these, Tetracladium was the dominant genus in all samples (35.49%, 68.55% and 12.96%). The Shannon diversity indices and Simpson indices of the endophytic fungi in the samples ranged from 1.75-2.92 and 0.11-0.32, respectively. This is the first application of high throughput sequencing to analyze the community composition and diversity of endophytic fungi in this medicinal plant, and the results showed high diversity and high community composition complexity of endophytic fungi in the root of S. hexandrum. It also demonstrates that high throughput sequencing has great advantages for analyzing the community composition and diversity of endophytes in plants. Copyright© by the Chinese Pharmaceutical Association.
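The Shannon and Simpson indices reported above are standard diversity measures computed from OTU read counts, as sketched below (textbook formulas, not the authors' code; note that Simpson's index is sometimes reported as 1 - D or 1/D instead of D, so small D here means high diversity).

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTU proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson's index D = sum(p_i^2); smaller D means higher diversity."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)
```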
Calabria, Andrea; Spinozzi, Giulio; Benedicenti, Fabrizio; Tenderini, Erika; Montini, Eugenio
2015-01-01
Many biological laboratories that deal with genomic samples are facing the problem of sample tracking, both for pure laboratory management and for efficiency. Our laboratory exploits PCR techniques and Next Generation Sequencing (NGS) methods to perform high-throughput integration site monitoring in different clinical trials and scientific projects. Because of the huge number of samples that we process every year, which result in hundreds of millions of sequencing reads, we need to standardize data management and tracking systems, building up a scalable and flexible structure with web-based interfaces, usually called a Laboratory Information Management System (LIMS). We started by collecting end-users' requirements, composed of desired functionalities of the system and Graphical User Interfaces (GUI), and then we evaluated available tools that could address our requirements, spanning from pure LIMS to Content Management Systems (CMS) up to enterprise information systems. Our analysis identified ADempiere ERP, an open source Enterprise Resource Planning system written in Java J2EE, as the best software; it also natively implements some highly desirable technological features, such as the high usability and modularity that grant high use-case flexibility and software scalability for custom solutions. We extended and customized ADempiere ERP to fulfil LIMS requirements and we developed adLIMS. It has been validated by our end-users, verifying functionalities and GUIs through test cases for PCR samples and pre-sequencing data, and it is currently in use in our laboratories. adLIMS implements authorization and authentication policies, allowing management of multiple users and definition of roles that enable specific permissions, operations and data views for each user. For example, adLIMS allows creating sample sheets from stored data using available exporting operations.
This simplicity and process standardization may avoid manual errors and information backtracking, features that are not guaranteed when tracking records in files or spreadsheets. adLIMS aims to combine sample tracking and data reporting features with higher accessibility and usability of GUIs, thus allowing time to be saved on repetitive laboratory tasks and reducing errors with respect to manual data collection methods. Moreover, adLIMS implements automated data entry, exploiting sample data multiplexing and parallel/transactional processing. adLIMS is natively extensible to cope with laboratory automation through platform-dependent API interfaces, and could be extended to genomic facilities due to its ERP functionalities.
DOEDEF Software System, Version 2. 2: Operational instructions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meirans, L.
The DOEDEF (Department of Energy Data Exchange Format) Software System is a collection of software routines written to facilitate the manipulation of IGES (Initial Graphics Exchange Specification) data. Typically, the IGES data has been produced by the IGES processors for a Computer-Aided Design (CAD) system, and the data manipulations are user-defined "flavoring" operations. The DOEDEF Software System is used in conjunction with the RIM (Relational Information Management) DBMS from Boeing Computer Services (Version 7, UD18 or higher). The three major pieces of the software system are: Parser, which reads an ASCII IGES file and converts it to the RIM database equivalent; Kernel, which provides the user with IGES-oriented interface routines to the database; and Filewriter, which writes the RIM database to an IGES file.
Vision Impairment and Blindness
... books can make life easier. There are also devices to help those with no vision, like text-reading software and braille books. The sooner vision loss or eye disease is found and treated, the greater your ...
X-MATE: a flexible system for mapping short read data
Pearson, John V.; Cloonan, Nicole; Grimmond, Sean M.
2011-01-01
Summary: Accurate and complete mapping of short-read sequencing to a reference genome greatly enhances the discovery of biological results and improves statistical predictions. We recently presented RNA-MATE, a pipeline for the recursive mapping of RNA-Seq datasets. With the rapid increase in genome re-sequencing projects, progression of available mapping software and the evolution of file formats, we now present X-MATE, an updated version of RNA-MATE, capable of mapping both RNA-Seq and DNA datasets and with improved performance, output file formats, configuration files, and flexibility in core mapping software. Availability: Executables, source code, junction libraries, test data and results and the user manual are available from http://grimmond.imb.uq.edu.au/X-MATE/. Contact: n.cloonan@uq.edu.au; s.grimmond@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:21216778
NASA Astrophysics Data System (ADS)
Baraúna, R. A.; Graças, D. A.; Ramos, R. T.; Carneiro, A. R.; Lopes, T. S.; Lima, A. R.; Zahlouth, R. L.; Pellizari, V. H.; Silva, A.
2013-05-01
Methanosarcina mazei is a strictly anaerobic methanogen from the Methanosarcinales order. This species is known for its broad catabolic range among methanogens and is widespread throughout diverse environments. The draft genome of a strain cultivated from the sediment of the Tucuruí hydroelectric power station, the fourth largest hydroelectric dam in the world, is described here. Approximately 80% of methane is produced by biogenic sources, such as methanogenic archaea of the M. mazei species. Although the methanogenesis pathway is well known, some aspects of the core genome, genome evolution and shared genes are still unclear. A sediment sample from the Tucuruí hydropower station reservoir was inoculated in mineral medium supplemented with acetate and methanol. This medium was maintained in an H2:CO2 (80:20) atmosphere to enrich and cultivate M. mazei. The enrichment was conducted at 30°C under standard anaerobic conditions. After several molecular and cellular analyses, total DNA was extracted from a non-pure culture of M. mazei, amplified using phi29 DNA polymerase (BioLabs) and finally used as a source template for genome sequencing. The draft genome was obtained after two rounds of sequencing. First, the genome was sequenced using a SOLiD System V3 with a mate-paired library, which yielded 24,405,103 and 24,399,268 reads (50 bp) for the R3 and F3 tags, respectively. The second round of sequencing was performed using the SOLiD 5500 XL platform with a mate-paired library, resulting in a total of 113,588,848 reads (60 bp) for each tag (F3 and R3). All reads obtained by this procedure were filtered using Quality Assessment software, whereby reads with an average quality score below Phred 20 were removed. Velvet and Edena were used to assemble the reads, and Simplifier was used to remove the redundant sequences. After this, a total of 16,811 contigs were obtained. The M. mazei GO1 (AE008384) genome was used to map the contigs and generate the scaffolds.
We used the Graphical Contig Analyzer for All Sequencing Platforms software (G4ALL; http://g4all.sourceforge.net/) to manually curate and generate the genome scaffold with gaps. The resultant gaps were manually closed using CLC Genomics Workbench software. M. mazei TUC01 genome contained 3,420,400 bp with a GC content of 42.47% distributed over 3 scaffolds that were annotated by RAST. A total of 2,959 coding DNA sequences (CDS) were predicted. The genome of M. mazei TUC01 (accession number: CP003077) will provide valuable information about the ecology of Methanosarcinales order and more accurate information about the methanogenesis pathway observed in the Neotropics. SPONSOR: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Agência Nacional de Energia Elétrica (ANEEL); Centrais Elétricas do Norte do Brasil (Eletronorte).
dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms.
Puritz, Jonathan B; Hollenbeck, Christopher M; Gold, John R
2014-01-01
Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both paired-end reads from RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage, because dDocent quality-trims reads instead of filtering them and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com.
2014-04-01
synchronization primitives based on preset templates can result in over-synchronization if unchecked, possibly creating deadlock situations. Further... inputs rather than enforcing synchronization with a global clock. MRICDF models software as a network of communicating actors. Four primitive actors... control wants to send an interrupt or not. Since this is a shared buffer, a semaphore mechanism is assumed to synchronize the reads and writes of this buffer. The
1976-11-01
system. b. Read different program configurations to reconfigure the software during flight. c. Write Digital Integrated Test System (DITS) results... associated with a Minor Cycle Event must be Unlatched. The sole difference between a Latched and an Unlatched Condition is that upon the Scheduling... Table. Furthermore, the block of pointers for one Minor Cycle may be wholly contained within the block of pointers for a different Minor Cycle. For
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored procedures written in Informix SPL (Stored Procedure Language). The new software is also more flexible because the ensemble-creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
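The table-driven grouping idea can be sketched outside a database as follows. The rule names and image fields below are hypothetical, and a plain dictionary stands in for the rule tables and stored procedures used at the SSC; the point is only that the grouping keys live in data, so rules can change without changing code.

```python
from collections import defaultdict

# Hypothetical ensemble-creation rules: which image fields define a group.
# In the real system these would be rows in a database table.
ENSEMBLE_RULES = {
    "flat_field": ("channel", "exposure_time"),
    "mosaic": ("target", "channel"),
}

def build_ensembles(images, rule_name):
    """Group image records into ensembles keyed by the rule's fields,
    mimicking a table-driven stored procedure."""
    keys = ENSEMBLE_RULES[rule_name]
    ensembles = defaultdict(list)
    for img in images:
        ensembles[tuple(img[k] for k in keys)].append(img["id"])
    return dict(ensembles)
```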
UPmag: MATLAB software for viewing and processing u channel or other pass-through paleomagnetic data
NASA Astrophysics Data System (ADS)
Xuan, Chuang; Channell, James E. T.
2009-10-01
With the development of pass-through cryogenic magnetometers and the u channel sampling method, large volumes of paleomagnetic data can be accumulated within a short time period. It is often critical to visualize and process these data in "real time" as measurements proceed, so that the measurement plan can be dictated accordingly. We introduce new MATLAB™ software (UPmag) that is designed for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u channel samples or core sections. UPmag comprises three MATLAB™ graphic user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer as well as to correct detected flux jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays vector component plots and stepwise intensity plots for any position along the u channel sample. UDIR can also display data on equal area stereographic projections and draw virtual geomagnetic poles on various map projections. UINT provides a convenient platform to evaluate relative paleointensity (RPI) estimates using the *.int files that can be exported from UVIEW. Two methods are used for RPI estimation: the calculated slopes of the best fit line between the NRM and the respective normalizer (using paired demagnetization data for both parameters) and the averages of the NRM/normalizer ratios. Linear correlation coefficients (of slopes) and standard deviations (of ratios) can be calculated simultaneously to monitor the quality of the RPI estimates. All resulting data and plots from UPmag can be exported into various file formats. UPmag software, data format files, and test data can be downloaded from http://earthref.org/cgi-bin/er.cgi?s=erda.cgi?n=985.
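The two RPI estimators that UINT is described as using can be sketched with NumPy. This is a minimal illustration, not UPmag's MATLAB code, and the arrays below are synthetic stand-ins for real u-channel measurements:

```python
import numpy as np

# (1) Slope of the best-fit line between NRM and the normalizer across paired
#     demagnetization steps, with the linear correlation coefficient as a
#     quality monitor.
# (2) Mean of the NRM/normalizer ratios, with its standard deviation.

def rpi_slope(nrm, norm):
    slope, intercept = np.polyfit(norm, nrm, 1)
    r = np.corrcoef(norm, nrm)[0, 1]   # quality monitor for the fit
    return slope, r

def rpi_ratio(nrm, norm):
    ratios = nrm / norm
    return ratios.mean(), ratios.std()

norm = np.array([1.0, 2.0, 3.0, 4.0])  # normalizer at four demag steps
nrm = 0.5 * norm                       # perfectly proportional NRM
slope, r = rpi_slope(nrm, norm)
mean_ratio, sd = rpi_ratio(nrm, norm)
```

For this ideal case both estimators agree (slope = mean ratio = 0.5) with a correlation of 1 and a ratio standard deviation of 0; real data diverge, which is why UINT reports both quality statistics.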
ERIC Educational Resources Information Center
Library Computing, 1985
1985-01-01
Special supplement to "Library Journal" and "School Library Journal" covers topics of interest to school, public, academic, and special libraries planning for automation: microcomputer use, readings in automation, online searching, databases of microcomputer software, public access to microcomputers, circulation, creating a…
Little, Callie W.
2015-01-01
The present study is an examination of the genetic and environmental effects on the associations among reading fluency, spelling and earlier reading comprehension on a later reading comprehension outcome (FCAT) in a combined sample of 3rd and 4th grade students using data from the 2011-2012 school year of the Florida Twin project on Reading (Taylor et al., 2013). A genetically sensitive model was applied to the data with results indicating a common genetic component among all four measures, along with shared and non-shared environmental influences common between reading fluency, spelling and FCAT. PMID:26770052
Reading in Class & out of Class: An Experience Sampling Method Study
ERIC Educational Resources Information Center
Shumow, Lee; Schmidt, Jennifer A.; Kackar, Hayal
2008-01-01
This study described and compared the reading of sixth and eighth grade students both in and out of school using a unique data set collected with the Experience Sampling Method (ESM). On average, students read forty minutes a day out of class and seventeen minutes a day in class indicating that reading is a common leisure practice for…
Genetic and Environmental Influences on Writing and their Relations to Language and Reading
Olson, Richard K.; Hulslander, Jacqueline; Christopher, Micaela; Keenan, Janice M.; Wadsworth, Sally J.; Willcutt, Erik G.; Pennington, Bruce F.; DeFries, John C.
2011-01-01
Identical and fraternal twins (N = 540, age 8 to 18 years) were tested on three different measures of writing (Woodcock-Johnson III Tests of Achievement-Writing Samples and Writing Fluency; Handwriting Copy from the Group Diagnostic Reading and Aptitude Achievement Tests), three different language skills (Phonological Awareness, Rapid Naming, and Vocabulary), and three different reading skills (Word Recognition, Spelling, and Reading Comprehension). Substantial genetic influence was found on two of the writing measures, Writing Samples and Handwriting Copy, and all of the language and reading measures. Shared environment influences were generally not significant, except for vocabulary. Non-shared environment estimates, including measurement error, were significant for all variables. Genetic influences among the writing measures were significantly correlated (highest between the speeded measures Writing Fluency and Handwriting Copy), but there were also significant independent genetic influences between Copy and Samples and between Fluency and Samples. Genetic influences on writing were significantly correlated with genetic influences on all of the language and reading skills, but significant independent genetic influences were also found for Copy and Samples, whose genetic correlations were significantly less than 1.0 with the reading and language skills. The genetic correlations varied significantly in strength depending on the overlap between the writing, language, and reading task demands. We discuss implications of our results for education, limitations of the study, and new directions for research on writing and its relations to language and reading. PMID:21842316
ERIC Educational Resources Information Center
Luoni, Chiara; Balottin, Umberto; Zaccagnino, Maria; Brembilla, Laura; Livetti, Giulia; Termine, Cristiano
2015-01-01
Attention-deficit/hyperactivity disorder (ADHD) often co-occurs with reading disability. A cross-sectional study in an Italian-speaking, nonclinical sample was conducted in an attempt to document the existence of an early association between reading difficulties (RD) and ADHD behaviours. We recruited a sample of 369 children in their first year at…
Differential expression analysis for RNAseq using Poisson mixed models
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny
2017-01-01
Abstract Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
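The two variance sources the model separates can be illustrated with a short simulation. This is not MACAU itself, just a sketch under assumed parameters: counts are Poisson with a log-normal rate that combines one shared ("relatedness") effect and one independent over-dispersion effect per sample:

```python
import numpy as np

# Simulate over-dispersed, non-independent counts: a pure Poisson model would
# give variance roughly equal to the mean; the two random effects inflate it.
rng = np.random.default_rng(0)
n = 2000
mu = np.log(10.0)                      # baseline log-mean expression
shared = rng.normal(0, 0.5)            # one effect shared by all samples
indep = rng.normal(0, 0.5, size=n)     # independent over-dispersion per sample
lam = np.exp(mu + shared + indep)
counts = rng.poisson(lam)

# Variance-to-mean ratio; ~1 for Poisson, >1 under over-dispersion.
overdispersion = counts.var() / counts.mean()
```

Fitting a model that ignores the shared term would misattribute its variance, which is the failure mode the abstract's two-random-effects model is designed to avoid.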
Links between Early Oral Narrative and Decoding Skills and Later Reading in a New Zealand Sample
ERIC Educational Resources Information Center
Schaughency, Elizabeth; Suggate, Sebastian; Reese, Elaine
2017-01-01
We examined earlier oral narrative and decoding and later reading in two samples spanning the first four years of reading instruction. The Year 1 sample (n = 44) was initially assessed after one year of instruction (M = 6; 1 years) and followed through their third year (M = 8; 1 years); the Year 2 sample (n = 34) assessed after two years of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
2002-08-19
Utility tariffs vary significantly from utility to utility. Each utility has its own rates and sets of rules by which bills are calculated. The Bill Calculator reconstructs the tariff based on these rules, stored in data tables, and accesses the appropriate charges for a given energy consumption and demand. The software reconstructs the tariff logic from the rules stored in data tables. Charges are tallied as the logic is reconstructed. This is essentially an accounting program. The main limitation is the time to search for each tariff element, which is currently an O(N) search. Also, since the Bill Calculator first stores all tariffs in an array and then reads the array to reconstruct a specific tariff, the memory limitations of a particular system would limit the number of tariffs that could be handled. This tool allows a user to calculate a bill from any sampled utility without prior knowledge of the tariff logic or structure. The peculiarities of the tariff logic are stored in data tables and managed by the Bill Calculator software. This version of the software is implemented as a VB module that operates within Microsoft Excel. Input data tables are stored in Excel worksheets. In this version the Bill Calculator functions can be accessed through Excel as user-defined worksheet functions. Bill Calculator can calculate approximately 50,000 bills in less than 30 minutes.
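A minimal sketch of the table-driven billing idea follows, in Python rather than the tool's actual VB module. The tariff schema and all rate values are invented for the example; real tariff tables are far richer:

```python
# Hypothetical tariff stored as data: a fixed customer charge, a demand
# charge per kW, and tiered energy rates. The bill is reconstructed by
# walking these rows, mirroring the accounting-program structure described.
TARIFF = {
    "customer_charge": 10.00,          # fixed $/month
    "demand_rate": 8.50,               # $/kW of peak demand
    "energy_tiers": [                  # (upper bound in kWh, $/kWh)
        (500, 0.10),
        (float("inf"), 0.08),
    ],
}

def calculate_bill(tariff, kwh, peak_kw):
    total = tariff["customer_charge"] + tariff["demand_rate"] * peak_kw
    remaining, lower = kwh, 0.0
    for upper, rate in tariff["energy_tiers"]:
        block = min(remaining, upper - lower)  # kWh billed in this tier
        total += block * rate
        remaining -= block
        lower = upper
        if remaining <= 0:
            break
    return round(total, 2)

# 800 kWh: 500 kWh at $0.10 plus 300 kWh at $0.08, plus fixed and demand charges.
bill = calculate_bill(TARIFF, kwh=800, peak_kw=20)
```

Because the tariff is pure data, swapping in another utility's rules requires no code change, which is the point of the design.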
Atmospheric Science Data Center
2013-03-19
Read software is available for most data products from the project data tables.
78 FR 75362 - Notice of Issuance of Final Determination Concerning Docave Computer Software
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-11
... in whole or in part of materials from another country or instrumentality, it has been substantially... programming of a foreign PROM (Programmable Read-Only Memory chip) in the United States substantially...
Mapping RNA-seq Reads with STAR
Dobin, Alexander; Gingeras, Thomas R.
2015-01-01
Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, signal visualization, and so forth. In this unit we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is Open Source software that can be run on Unix, Linux or Mac OS X systems. PMID:26334920
Advanced Mail Systems Scanner Technology. Executive Summary and Appendixes A-E.
1980-10-01
data base. 6. Perform color acquisition studies. 7. Investigate address and bar code reading. MASS MEMORY TECHNOLOGY 1. Collect performance data on...area of the 1728-by-2200 ICAS image memory and to transmit the data to any of the three color memories of the Comtal. Function table information can...for printing color images. The software allows the transmission of data from the ICAS frame-store memory via the MCU to the Dicomed. Software test
SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.
Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B
2016-02-04
Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. 
This software will enable microbiologists with limited bioinformatics experience to analyze their data and integrate next generation sequencing (NGS) technologies into the classroom. The SPARTA software and tutorial are available at sparta.readthedocs.org.
Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.
Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume
2015-09-14
Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method for compressing data produced by high-throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which allows higher compression rates to be obtained without losing information pertinent to downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq, and metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole-genome sequencing dataset, LEON divided the original file size by more than 20. LEON is open source software, distributed under the GNU Affero GPL license, available for download at http://gatb.inria.fr/software/leon/.
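The path-encoding idea is compact enough to show end to end. This is a toy illustration of the principle, not LEON's implementation: a plain Python set stands in for the Bloom filter, and a read is stored as its anchoring k-mer plus the base chosen at each bifurcation of the k-mer graph:

```python
K = 4

def kmers(seq):
    """Set of all k-mers in a sequence (stand-in for LEON's Bloom filter)."""
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

def encode(read, graph):
    anchor, node, choices = read[:K], read[:K], []
    for base in read[K:]:
        nexts = [b for b in "ACGT" if node[1:] + b in graph]
        if len(nexts) > 1:            # bifurcation: memorize the branch taken
            choices.append(base)
        node = node[1:] + base        # unambiguous steps cost nothing to store
    return anchor, choices

def decode(anchor, choices, graph, length):
    seq, node, it = anchor, anchor, iter(choices)
    while len(seq) < length:
        nexts = [b for b in "ACGT" if node[1:] + b in graph]
        base = next(it) if len(nexts) > 1 else nexts[0]
        seq += base
        node = node[1:] + base
    return seq

graph = kmers("ACGTACGTTACG")
read = "ACGTACGTTACG"
anchor, choices = encode(read, graph)
restored = decode(anchor, choices, graph, len(read))
```

Here a 12-base read is stored as a 4-base anchor plus two bifurcation choices; on real data most graph steps are unambiguous, which is where the compression comes from (LEON's probabilistic Bloom filter trades a small false-positive rate for far less memory than an exact set).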
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language MATLAB. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to that obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
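The core Fourier idea can be demonstrated on synthetic data: a quasi-regular cell mosaic concentrates spectral energy at a spatial frequency equal to the reciprocal of the mean cell spacing. This simplified NumPy sketch uses a cosine grid in place of a specular-microscope image and omits the enhancement steps of the real analysis:

```python
import numpy as np

N = 256                      # image is N x N pixels
spacing = 16                 # synthetic "cell" spacing in pixels
x = np.arange(N)
img = (np.cos(2 * np.pi * x / spacing)[:, None]
       * np.cos(2 * np.pi * x / spacing)[None, :])

spec = np.abs(np.fft.fft2(img))
spec[0, 0] = 0.0             # ignore the DC term
fy, fx = np.unravel_index(np.argmax(spec), spec.shape)  # dominant peak
freq = min(fx, N - fx) / N   # cycles per pixel (fold the mirrored frequency)
estimated_spacing = 1.0 / freq
density = 1.0 / estimated_spacing ** 2   # cells per square pixel (toy units)
```

Recovering the known 16-pixel spacing from the spectral peak is the whole trick; on real images the peak is a smeared ring, so the actual software analyzes the ring's mean radius and converts pixel units to cells/mm² via the microscope's magnification.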
A Model Critical Reading Lesson for Secondary High-Risk Students.
ERIC Educational Resources Information Center
Haney, Gail; Thistlethwaite, Linda
1991-01-01
This article defines critical reading, discusses associated frameworks, and lists considerations for choosing topics and reading materials. A sample critical reading lesson using a "mapping" approach with a reading on euthanasia demonstrates guiding secondary learning-disabled students in critical reading. (DB)
Improving the readability of online foot and ankle patient education materials.
Sheppard, Evan D; Hyde, Zane; Florence, Mason N; McGwin, Gerald; Kirchner, John S; Ponce, Brent A
2014-12-01
Previous studies have shown the need for improving the readability of many patient education materials to increase patient comprehension. This study's purpose was to determine the readability of foot and ankle patient education materials and to determine the extent readability can be improved. We hypothesized that the reading levels would be above the recommended guidelines and that decreasing the sentence length would also decrease the reading level of these patient educational materials. Patient education materials from online public sources were collected. The readability of these articles was assessed by a readability software program. The detailed instructions provided by the National Institutes of Health (NIH) were then used as a guideline for performing edits to help improve the readability of selected articles. The most quantitative guideline, lowering all sentences to less than 15 words, was chosen to show the effect of following the NIH recommendations. The reading levels of the sampled articles were above the sixth to seventh grade recommendations of the NIH. The MedlinePlus website, which is a part of the NIH website, had the lowest reading level (8.1). The articles edited had an average reduction of 1.41 grade levels, with the lowest reduction in the Medline articles of 0.65. Providing detailed instructions to the authors writing these patient education articles and implementing editing techniques based on previous recommendations could lead to an improvement in the readability of patient education materials. This study provides authors of patient education materials with simple editing techniques that will allow for the improvement in the readability of online patient educational materials. The improvement in readability will provide patients with more comprehendible education materials that can strengthen patient awareness of medical problems and treatments. © The Author(s) 2014.
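Readability grading of the kind described can be sketched with the Flesch-Kincaid grade-level formula, a standard metric (the study does not specify which formula its software used, so this is an assumption): grade = 0.39·(words/sentences) + 11.8·(syllables/words) − 15.59, here with a rough vowel-group syllable heuristic:

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

long_sentence = ("The pathophysiology of the calcaneus is characterized "
                 "by degenerative alterations.")
short_version = "The heel bone wears down. This causes pain."
```

Splitting long sentences and preferring short words lowers the computed grade, which is the mechanism behind the NIH editing guideline the authors applied.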
Hulse-Kemp, Amanda M; Maheshwari, Shamoni; Stoffel, Kevin; Hill, Theresa A; Jaffe, David; Williams, Stephen R; Weisenfeld, Neil; Ramakrishnan, Srividya; Kumar, Vijay; Shah, Preyas; Schatz, Michael C; Church, Deanna M; Van Deynze, Allen
2018-01-01
Linked-Read sequencing technology has recently been employed successfully for de novo assembly of human genomes; however, the utility of this technology for complex plant genomes is unproven. We evaluated the technology for this purpose by sequencing the 3.5-gigabase (Gb) diploid pepper (Capsicum annuum) genome with a single Linked-Read library. Plant genomes, including pepper, are characterized by long, highly similar repetitive sequences. Accordingly, significant effort is used to ensure that the sequenced plant is highly homozygous and the resulting assembly is a haploid consensus. With a phased assembly approach, we targeted a heterozygous F1 derived from a wide cross to assess the ability to derive both haplotypes and characterize a pungency gene with a large insertion/deletion. The Supernova software generated a highly ordered, more contiguous sequence assembly than all currently available C. annuum reference genomes. Over 83% of the final assembly was anchored and oriented using four publicly available de novo linkage maps. A comparison of the annotation of conserved eukaryotic genes indicated the completeness of assembly. The validity of the phased assembly is further demonstrated with the complete recovery of both 2.5-Kb insertion/deletion haplotypes of the PUN1 locus in the F1 sample that represents pungent and nonpungent peppers, as well as nearly full recovery of the BUSCO2 gene set within each of the two haplotypes. The most contiguous pepper genome assembly to date has been generated, which demonstrates that Linked-Read library technology provides a tool to de novo assemble complex, highly repetitive, heterozygous plant genomes. This technology can provide an opportunity to cost-effectively develop high-quality genome assemblies for other complex plants and compare structural and gene differences through accurate haplotype reconstruction.
Nakato, Ryuichiro; Itoh, Takehiko; Shirahige, Katsuhiko
2013-07-01
Chromatin immunoprecipitation with high-throughput sequencing (ChIP-seq) can identify genomic regions that bind proteins involved in various chromosomal functions. Although the development of next-generation sequencers offers the technology needed to identify these protein-binding sites, the analysis can be computationally challenging because sequencing data sometimes consist of >100 million reads/sample. Herein, we describe a cost-effective and time-efficient protocol that is generally applicable to ChIP-seq analysis; this protocol uses a novel peak-calling program termed DROMPA to identify peaks and an additional program, parse2wig, to preprocess read-map files. This two-step procedure drastically reduces computational time and memory requirements compared with other programs. DROMPA enables the identification of protein localization sites in repetitive sequences and efficiently identifies both broad and sharp protein localization peaks. Specifically, DROMPA outputs a protein-binding profile map in pdf or png format, which can be easily manipulated by users who have a limited background in bioinformatics. © 2013 The Authors Genes to Cells © 2013 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
The Early Stage of Neutron Tomography for Cultural Heritage Study in Thailand
NASA Astrophysics Data System (ADS)
Khaweerat, S.; Ratanatongchai, W.; Wonglee, S.; Schillinger, B.
In parallel with the upgrade of the neutron imaging facility at TRR-1/M1 since 2015, experience with image-processing software has led to the implementation of neutron tomography (NT). The current setup provides a thermal neutron flux of 1.08×10⁶ cm⁻²s⁻¹ at the exposure position. In general, the sample was fixed on a plate at the top of a rotary stage controlled by LabVIEW 2009 Version 9.0.1. The incremental step can be adjusted from 0.45 to 7.2 degrees. A 16-bit CCD camera fitted with a Nikkor 50 mm f/1.2 lens was used to record light from a 6LiF/ZnS (green) neutron converter screen. The exposure time for each shot was 60 seconds, resulting in an acquisition time of approximately three hours for a complete rotation of the sample. Afterwards, the batch of two-dimensional neutron images of the sample was read into the reconstruction and visualization software, Octopus Reconstruction 8.8 and Octopus Visualization 2.0, respectively. The results revealed that system alignment is critical: the stability of a heavy sample must be maintained at every angle of rotation. A previous alignment showed instability of the supporting plane while tilting the sample, indicating that the sample stage should be replaced. Even though NT is a lengthy process and involves large data processing, it offers an opportunity to understand the features of an object in more detail than neutron radiography. Digital NT also allows us to separate inner features that appear superimposed in radiography by cross-sectioning the 3D data set of an object without destruction. As a result, NT is a significant tool for revealing hidden information in the inner structure of cultural heritage objects, providing great benefits in archaeological study, conservation, and authenticity investigation.
NASA Astrophysics Data System (ADS)
Yussup, N.; Ibrahim, M. M.; Rahman, N. A. A.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.; Lombigit, L.; Azman, A.; Omar, S. A.
2018-01-01
Most of the procedures in the neutron activation analysis (NAA) process established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s have been performed manually. These manual procedures, carried out by the NAA laboratory personnel, are time consuming and inefficient, especially the sample counting and measurement process: the sample needs to be changed and the measurement software needs to be set up for every one-hour counting period, and both procedures are performed manually for every sample. Hence, an automatic sample changer (ASC) system, consisting of hardware and software, was developed to automate the sample counting process for up to 30 samples consecutively. This paper describes the ASC control software for the NAA process, which is designed and developed to control the ASC hardware and call the GammaVision software for sample measurement. The software is developed using the National Instruments LabVIEW development package.
Reading strategies in Spanish developmental dyslexics.
Suárez-Coalla, Paz; Cuetos, Fernando
2012-07-01
Cross-linguistic studies suggest that the orthographic system determines the reading performance of dyslexic children. In opaque orthographies, the fundamental feature of developmental dyslexia is difficulty in reading accuracy, whereas slower reading speed is more common in transparent orthographies. The aim of the current study was to examine the extent to which different variables of words affect reaction times and articulation times in developmental dyslexics. A group of 19 developmental dyslexics of different ages and an age-matched group of 19 children without reading disabilities completed a word naming task. The children were asked to read 100 nouns that differed in length, frequency, age of acquisition, imageability, and orthographic neighborhood. The stimuli were presented on a laptop computer, and the responses were recorded using DMDX software. We conducted analyses of mixed-effects models to determine which variables influenced reading times in dyslexic children. We found that word naming skills in dyslexic children are affected predominantly by length, while in non-dyslexic children the principal variable is the age of acquisition, a lexical variable. These findings suggest that Spanish-speaking developmental dyslexics use a sublexical procedure for reading words, which is reflected in slower speed when reading long words. In contrast, normal children use a lexical strategy, which is frequently observed in readers of opaque languages.
Association Between Television Viewing and Parent-Child Reading in the Early Home Environment.
Khan, Kiren S; Purtell, Kelly M; Logan, Jessica; Ansari, Arya; Justice, Laura M
2017-09-01
This study examines whether there is an association between time spent by preschoolers in parent-child shared book reading versus TV viewing in two distinct samples. Data were used from both the preschool wave of the Early Childhood Longitudinal Study Cohort, a nationally representative sample of 4-year-olds (N = 8900), as well as a low-income, rural sample of children enrolled in the Preschool Experience in Rural Classrooms study (N = 407). Information regarding frequency of shared book reading and daily TV consumption was obtained through caregiver report. A regression approach was used to estimate how the frequency of parent-child book reading accounted for variance in TV consumption. Estimated marginal mean values were then compared for the amount of TV viewed by children who were reported as being read to daily, frequently, occasionally, and not at all. Parent-child book reading was negatively associated with the amount of TV viewed by children in both samples. Specifically, television consumption was significantly lower for children who were read to daily as compared to those who were read to occasionally. This inverse association was not moderated by contextual factors including maternal education, household size, and composition, or time spent in nonparental care. This study provides empirical support for an inverse association between TV viewing and parent-child book reading activities. Implications for policy and practice are discussed.
STORMSeq: an open-source, user-friendly pipeline for processing personal genomics data in the cloud.
Karczewski, Konrad J; Fernald, Guy Haskin; Martin, Alicia R; Snyder, Michael; Tatonetti, Nicholas P; Dudley, Joel T
2014-01-01
The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5-10 hours to process a full exome sequence and $30 and 3-8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yussup, F., E-mail: nolida@nm.gov.my; Ibrahim, M. M., E-mail: maslina-i@nm.gov.my; Soh, S. C.
With the growth of technology, many devices and equipment can be connected to the network and internet to enable online data acquisition for real-time data monitoring and control from monitoring devices located at remote sites. Centralized radiation monitoring system (CRMS) is a system that enables the area radiation level at various locations in Malaysian Nuclear Agency (Nuklear Malaysia) to be monitored centrally by using a web browser. The Local Area Network (LAN) in Nuclear Malaysia is utilized in CRMS as a communication medium for data acquisition of the area radiation levels from radiation detectors. The development of the system involves device configuration, wiring, network and hardware installation, and software and web development. This paper describes the software upgrade on the system server that is responsible for acquiring and recording the area radiation readings from the detectors. The recorded readings are called in a web program to be displayed on a website. Besides the main feature, which is acquiring the area radiation levels in Nuclear Malaysia centrally, the upgrade adds new features such as a uniform time interval for data recording and exporting, a warning system, and dose triggering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosler, Peter
Stride Search provides a flexible tool for detecting storms or other extreme climate events in high-resolution climate data sets saved on uniform latitude-longitude grids in standard NetCDF format. Users provide the software a quantitative description of a meteorological event they are interested in; the software searches a data set for locations in space and time that meet the user's description. In its first stage, Stride Search performs a spatial search of the data set at each timestep by dividing a search domain into circular sectors of constant geodesic radius. Data from a netCDF file are read into memory for each circular search sector. If the data meet or exceed a set of storm identification criteria (defined by the user), a storm is recorded to a linked list. Finally, the linked list is examined, duplicate detections of the same storm are removed, and the results are written to an output file. The first stage's output file is read by a second program that builds storm tracks. Additional identification criteria may be applied at this stage to further classify storms. Storm tracks are the software's ultimate output, and routines are provided for formatting that output for various external software libraries for plotting and tabulating data.
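The first-stage detection loop described above (a criterion check per location, then removal of duplicate detections of the same storm) can be sketched as follows. This is a minimal illustration, not Stride Search's actual interface: the point format, the pressure criterion, and the sector radius are all hypothetical assumptions.

```python
import math

def geodesic_distance_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_central = (math.sin(p1) * math.sin(p2)
                   + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return radius_km * math.acos(max(-1.0, min(1.0, cos_central)))

def detect_storms(points, criterion, sector_radius_km=500.0):
    """Stage one, reduced to its core: keep grid points meeting the
    criterion, then drop duplicate detections that fall within one
    sector radius of a stronger (here, lower-pressure) detection."""
    hits = sorted((p for p in points if criterion(p)),
                  key=lambda p: p["value"])  # strongest (lowest) first
    storms = []
    for h in hits:
        if all(geodesic_distance_km(h["lat"], h["lon"],
                                    s["lat"], s["lon"]) > sector_radius_km
               for s in storms):
            storms.append(h)
    return storms

# Tiny synthetic "grid": sea-level pressure (hPa) at three points.
grid = [
    {"lat": 15.0, "lon": -40.0, "value": 990.0},   # deep low
    {"lat": 15.5, "lon": -40.5, "value": 992.0},   # same system, weaker
    {"lat": 30.0, "lon": -70.0, "value": 1015.0},  # quiet region
]
lows = detect_storms(grid, criterion=lambda p: p["value"] < 1000.0)
```

The two nearby low-pressure points collapse into a single detection, mimicking the duplicate-removal pass over the linked list.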
de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D
2013-05-24
Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
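Reading molecular geometry out of a CML-compliant XML file, as a reader like Avogadro's does conceptually, can be sketched with the standard library alone. The CML fragment below is hand-written for illustration and is not actual NWChem/FoX output, though the atomArray/atom structure and attribute names follow the CML schema.

```python
import xml.etree.ElementTree as ET

# Minimal hand-written CML fragment for a water molecule; real NWChem
# output is far richer, but the atomArray/atom layout is the same idea.
CML = """<molecule xmlns="http://www.xml-cml.org/schema" id="water">
  <atomArray>
    <atom id="a1" elementType="O" x3="0.000" y3="0.000" z3="0.117"/>
    <atom id="a2" elementType="H" x3="0.000" y3="0.757" z3="-0.467"/>
    <atom id="a3" elementType="H" x3="0.000" y3="-0.757" z3="-0.467"/>
  </atomArray>
</molecule>"""

NS = {"cml": "http://www.xml-cml.org/schema"}

def read_geometry(cml_text):
    """Return (element, x, y, z) tuples from a CML atomArray."""
    root = ET.fromstring(cml_text)
    atoms = []
    for atom in root.findall(".//cml:atomArray/cml:atom", NS):
        atoms.append((atom.get("elementType"),
                      float(atom.get("x3")),
                      float(atom.get("y3")),
                      float(atom.get("z3"))))
    return atoms

geometry = read_geometry(CML)
```

The namespace mapping passed to `findall` is required because CML declares a default XML namespace.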
NASA Astrophysics Data System (ADS)
Schwartz, Richard A.; Zarro, D.; Csillaghy, A.; Dennis, B.; Tolbert, A. K.; Etesi, L.
2009-05-01
We report on our activities to integrate VSO search and retrieval capabilities into standard data access, display, and analysis tools. In addition to its standard Web-based search form, the VSO provides an Interactive Data Language (IDL) client (vso_search) that is available through the Solar Software (SSW) package. We have incorporated this client into an IDL-widget interface program (show_synop) that allows for more simplified searching and downloading of VSO datasets directly into a user's IDL data analysis environment. In particular, we have provided the capability to read VSO datasets into a general purpose IDL package (plotman) that can display different datatypes (lightcurves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. Currently, the show_synop tool supports access to ground-based and space-based (SOHO, STEREO, and Hinode) observations, and has the capability to include new datasets as they become available. A user encounters two major hurdles when using the VSO: (1) Instrument-specific software (such as level-0 file readers and data-prepping procedures) may not be available in the user's local SSW distribution. (2) Recent calibration files (such as flat-fields) are not automatically distributed with the analysis software. To address these issues, we have developed a dedicated server (prepserver) that incorporates all the latest instrument-specific software libraries and calibration files. The prepserver uses an IDL-Java bridge to read and implement data processing requests from a client and return a processed data file that can be readily displayed with the show_synop/plotman package. The advantage of the prepserver is that the user is only required to install the general branch (gen) of the SSW tree, and is freed from the more onerous task of installing instrument-specific libraries and calibration files. 
We will demonstrate how the prepserver can be used to read, process, and overlay SOHO/EIT, TRACE, SECCHI/EUVI, and RHESSI images.
Sinonasal microbiome sampling: a comparison of techniques.
Bassiouni, Ahmed; Cleland, Edward John; Psaltis, Alkis James; Vreugde, Sarah; Wormald, Peter-John
2015-01-01
The role of the sino-nasal microbiome in CRS remains unclear. We hypothesized that the bacteria within mucosal-associated biofilms may be different from the more superficial-lying, free-floating bacteria in the sinuses and that this may impact on the microbiome results obtained. This study investigates whether there is a significant difference in the microbiota of a sinonasal mucosal tissue sample versus a swab sample. Cross-sectional study with paired design. Mucosal biopsy and swab samples were obtained intra-operatively from the ethmoid sinuses of 6 patients with CRS. Extracted DNA was sequenced on a Roche-454 sequencer using 16S-rRNA gene targeted primers. Data were analyzed using QIIME 1.8 software package. At a maximum subsampling depth of 1,100 reads, the mean observed species richness was 33.3 species (30.6 for swab, versus 36 for mucosa; p > 0.05). There was no significant difference in phylogenetic and non-phylogenetic alpha diversity metrics (Faith's PD_Whole_Tree and Shannon's index) between the two sampling methods (p > 0.05). The type of sample also had no significant effect on phylogenetic and non-phylogenetic beta diversity metrics (Unifrac and Bray-Curtis; p > 0.05). We observed no significant difference between the microbiota of mucosal tissue and swab samples. This suggests that less invasive swab samples are representative of the sinonasal mucosa microbiome and can be used for future sinonasal microbiome studies.
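Shannon's index, one of the non-phylogenetic alpha diversity metrics compared above, is straightforward to compute from OTU counts. A minimal sketch, using the natural-log base (software such as QIIME may use a different log base, so absolute values differ); the paired count vectors are hypothetical:

```python
import math

def shannon_index(counts):
    """Shannon's diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical OTU count vectors for one paired swab/mucosa sample.
swab = [500, 300, 200, 100]
mucosa = [480, 320, 190, 110]

h_swab = shannon_index(swab)
h_mucosa = shannon_index(mucosa)
```

An even community of k taxa attains the maximum value ln(k), which is a useful sanity check on the implementation.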
Atmospheric Science Data Center
2014-08-18
... they are leaving the NASA domain and are subject to the privacy and security policies of the owners/sponsors of the outside web ... sites. Read software is available for most data products from the project data tables . Any data not in ...
Atmospheric Science Data Center
2014-04-25
AirMISR WISCONSIN 2000 Project Title: AirMISR Discipline: ... Platform: ER-2 Spatial Coverage: Wisconsin (35.92, 43.79)(-97.94, -90.23) Spatial Resolution: ... Order Data Readme Files: Readme Wisconsin Read Software Files : IDL Code ...
ERIC Educational Resources Information Center
Speece, Deborah L.; Ritchey, Kristen D.
2005-01-01
The purpose of this study was to examine the development of oral reading fluency in a sample of first-grade children. Using growth curve analysis, models of growth were identified for a combined sample of at-risk (AR) and not-at-risk (NAR) children, and predictors of growth were identified for the longitudinal AR sample in first and second grade.…
The Software Engineering Laboratory: An operational software experience factory
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Caldiera, Gianluigi; Mcgarry, Frank; Pajerski, Rose; Page, Gerald; Waligora, Sharon
1992-01-01
For 15 years, the Software Engineering Laboratory (SEL) has been carrying out studies and experiments for the purpose of understanding, assessing, and improving software and software processes within a production software development environment at NASA/GSFC. The SEL comprises three major organizations: (1) NASA/GSFC, Flight Dynamics Division; (2) University of Maryland, Department of Computer Science; and (3) Computer Sciences Corporation, Flight Dynamics Technology Group. These organizations have jointly carried out several hundred software studies, producing hundreds of reports, papers, and documents, all of which describe some aspect of the software engineering technology that was analyzed in the flight dynamics environment at NASA. The studies range from small, controlled experiments (such as analyzing the effectiveness of code reading versus that of functional testing) to large, multiple project studies (such as assessing the impacts of Ada on a production environment). The organization's driving goal is to improve the software process continually, so that sustained improvement may be observed in the resulting products. This paper discusses the SEL as a functioning example of an operational software experience factory and summarizes the characteristics of and major lessons learned from 15 years of SEL operations.
2014-01-01
Background RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high throughput sequencers. Results We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module “miRNA identification” includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module “mRNA identification” includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module “Target screening” provides expression profiling analyses and graphic visualization. The module “Self-testing” offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program’s functionality. Conclusions eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory. PMID:24593312
Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang
2014-03-05
RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high throughput sequencers. We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.
Examining the Effects of Skill Level and Reading Modality on Reading Comprehension
ERIC Educational Resources Information Center
Dickens, Rachel H.; Meisinger, Elizabeth B.
2016-01-01
The purpose of this study was to examine the effects of reading skill and reading modality (oral versus silent) on reading comprehension. A normative sample of sixth-grade students (N = 74) read texts aloud and silently and then answered questions about what they read. Skill in word reading fluency was assessed by the Test of Word Reading…
ERIC Educational Resources Information Center
Schaffner, Ellen; Schiefele, Ulrich; Ulferts, Hannah
2013-01-01
This study examined the role of reading amount as a mediator of the effects of intrinsic and extrinsic reading motivation on higher order reading comprehension (comprised of paragraph-and passage-level comprehension) in a sample of 159 fifth-grade elementary students. A positive association between intrinsic reading motivation and reading amount…
Heritability of high reading ability and its interaction with parental education.
Friend, Angela; DeFries, John C; Olson, Richard K; Pennington, Bruce; Harlaar, Nicole; Byrne, Brian; Samuelsson, Stefan; Willcutt, Erik G; Wadsworth, Sally J; Corley, Robin; Keenan, Janice M
2009-07-01
Moderation of the level of genetic influence on children's high reading ability by environmental influences associated with parental education was explored in two independent samples of identical and fraternal twins from the United States and Great Britain. For both samples, the heritability of high reading performance increased significantly with lower levels of parental education. Thus, resilience (high reading ability despite lower environmental support) is more strongly influenced by genotype than is high reading ability with higher environmental support. This result provides a coherent account when considered alongside results of previous research showing that heritability for low reading ability decreased with lower levels of parental education.
Anatomy of a hash-based long read sequence mapping algorithm for next generation DNA sequencing.
Misra, Sanchit; Agrawal, Ankit; Liao, Wei-keng; Choudhary, Alok
2011-01-15
Recently, a number of programs have been proposed for mapping short reads to a reference genome. Many of them are heavily optimized for short-read mapping and hence are very efficient for shorter queries, but that makes them inefficient or not applicable for reads longer than 200 bp. However, many sequencers are already generating longer reads and more are expected to follow. For long read sequence mapping, there are limited options; BLAT, SSAHA2, FANGS and BWA-SW are among the popular ones. However, resequencing and personalized medicine need much faster software to map these long sequencing reads to a reference genome to identify SNPs or rare transcripts. We present AGILE (AliGnIng Long rEads), a hash table based high-throughput sequence mapping algorithm for longer 454 reads that uses diagonal multiple seed-match criteria, customized q-gram filtering and a dynamic incremental search approach among other heuristics to optimize every step of the mapping process. In our experiments, we observe that AGILE is more accurate than BLAT, and comparable to BWA-SW and SSAHA2. For practical error rates (< 5%) and read lengths (200-1000 bp), AGILE is significantly faster than BLAT, SSAHA2 and BWA-SW. Even for the other cases, AGILE is comparable to BWA-SW and several times faster than BLAT and SSAHA2. http://www.ece.northwestern.edu/~smi539/agile.html.
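The q-gram filtering step mentioned above rests on the q-gram lemma: an alignment of a length-m read with at most e errors must preserve at least m - q + 1 - q*e of its q-grams. The sketch below shows the generic set-based form of this idea, not AGILE's customized filter; the parameter values are illustrative.

```python
def qgrams(seq, q):
    """Set of all length-q substrings of seq."""
    return {seq[i:i + q] for i in range(len(seq) - q + 1)}

def passes_qgram_filter(read, window, q=11, max_errors=2):
    """q-gram lemma: a reference window that shares fewer than
    m - q + 1 - q*e q-grams with a length-m read cannot host an
    alignment with at most e errors, so it can be skipped before
    running an expensive alignment."""
    m = len(read)
    threshold = m - q + 1 - q * max_errors
    if threshold <= 0:
        return True  # the filter is uninformative for short reads
    shared = len(qgrams(read, q) & qgrams(window, q))
    return shared >= threshold
```

Candidate windows failing the test are discarded cheaply; only survivors reach the costly dynamic-programming stage.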
Mueller, Shane T.; Esposito, Alena G.
2015-01-01
We describe the Bivalent Shape Task (BST), software using the Psychology Experiment Building Language (PEBL), for testing of cognitive interference and the ability to suppress interference. The test is available via the GNU Public License, Version 3 (GPLv3), is freely modifiable, and has been tested on both children and adults and found to provide a simple and fast non-verbal measure of cognitive interference and suppression that requires no reading. PMID:26702358
1993-12-01
The purpose of this thesis is to develop a high-level model to create self-adapting software which teaches learning... stimulating and demanding. The power of the system model described herein is that it can vary as needed by the individual student. The system will
Quality of patient education materials for rehabilitation after neurological surgery.
Agarwal, Nitin; Sarris, Christina; Hansberry, David R; Lin, Matthew J; Barrese, James C; Prestigiacomo, Charles J
2013-01-01
To evaluate the quality of online patient education materials for rehabilitation following neurological surgery. Materials were obtained from the National Institute of Neurological Disorders and Stroke (NINDS), U.S. National Library of Medicine (NLM), American Occupational Therapy Association (AOTA), and the American Academy of Orthopaedic Surgeons (AAOS). After removing unnecessary formatting, the readability of each site was assessed using the Flesch Reading Ease and Flesch-Kincaid Grade Level evaluations with Microsoft Office Word software. The average values of the Flesch Reading Ease and Flesch-Kincaid Grade Level were 41.5 and 11.8, respectively, which are well outside the recommended reading levels for the average American. Moreover, no online section was written below a ninth grade reading level. Evaluations of several websites from the NINDS, NLM, AOTA, and AAOS demonstrated that their reading levels were higher than that of the average American. Improved readability might be beneficial for patient education. Ultimately, increased patient comprehension may correlate to positive clinical outcomes.
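The two readability metrics used in the study are closed-form formulas over sentence, word, and syllable counts. A sketch with a crude vowel-group syllable heuristic follows; Microsoft Word's exact counting rules are not public, so scores from this sketch will differ somewhat from those reported.

```python
import re

VOWEL_GROUPS = re.compile(r"[aeiouy]+", re.IGNORECASE)

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping a trailing silent
    'e'; real tools use pronunciation dictionaries, so treat this as
    an approximation only."""
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(VOWEL_GROUPS.findall(word)))

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / sentences   # average sentence length (words)
    asw = syllables / len(words)   # average syllables per word
    reading_ease = 206.835 - 1.015 * asl - 84.6 * asw
    grade_level = 0.39 * asl + 11.8 * asw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores("The cat sat on the mat. It was warm.")
```

Short sentences of monosyllables score near the top of the Reading Ease scale and below first-grade level, while the study's materials averaged 41.5 and grade 11.8.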
GateKeeper: a new hardware architecture for accelerating pre-alignment in DNA short read mapping.
Alser, Mohammed; Hassan, Hasan; Xin, Hongyi; Ergin, Oguz; Mutlu, Onur; Alkan, Can
2017-11-01
High throughput DNA sequencing (HTS) technologies generate an excessive number of small DNA segments -called short reads- that cause significant computational burden. To analyze the entire genome, each of the billions of short reads must be mapped to a reference genome based on the similarity between a read and 'candidate' locations in that reference genome. The similarity measurement, called alignment, formulated as an approximate string matching problem, is the computational bottleneck because: (i) it is implemented using quadratic-time dynamic programming algorithms and (ii) the majority of candidate locations in the reference genome do not align with a given read due to high dissimilarity. Calculating the alignment of such incorrect candidate locations consumes an overwhelming majority of a modern read mapper's execution time. Therefore, it is crucial to develop a fast and effective filter that can detect incorrect candidate locations and eliminate them before invoking computationally costly alignment algorithms. We propose GateKeeper, a new hardware accelerator that functions as a pre-alignment step that quickly filters out most incorrect candidate locations. GateKeeper is the first design to accelerate pre-alignment using Field-Programmable Gate Arrays (FPGAs), which can perform pre-alignment much faster than software. When implemented on a single FPGA chip, GateKeeper maintains high accuracy (on average >96%) while providing, on average, 90-fold and 130-fold speedup over the state-of-the-art software pre-alignment techniques, Adjacency Filter and Shifted Hamming Distance (SHD), respectively. The addition of GateKeeper as a pre-alignment step can reduce the verification time of the mrFAST mapper by a factor of 10. https://github.com/BilkentCompGen/GateKeeper. mohammedalser@bilkent.edu.tr or onur.mutlu@inf.ethz.ch or calkan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. 
All rights reserved. For Permissions, please email: journals.permissions@oup.com
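The filtering idea behind Shifted Hamming Distance can be illustrated in software: a position can only be a genuine mismatch if it mismatches under every shift within the error budget, so ANDing per-shift mismatch masks and counting the survivors gives a cheap accept/reject test. The sketch below is a simplified scalar reimplementation of that idea for illustration, not the bit-parallel FPGA design.

```python
def mismatch_mask(read, ref, shift):
    """1 where read[i] != ref[i + shift]; out-of-range positions count
    as matches so edge effects do not inflate the mismatch count."""
    mask = []
    for i in range(len(read)):
        j = i + shift
        mask.append(1 if 0 <= j < len(ref) and read[i] != ref[j] else 0)
    return mask

def shd_filter(read, ref, max_errors):
    """Simplified Shifted Hamming Distance pre-alignment test: AND the
    mismatch masks for every shift in [-e, +e] and accept the candidate
    only if the surviving mismatch count is at most e."""
    combined = [1] * len(read)
    for shift in range(-max_errors, max_errors + 1):
        m = mismatch_mask(read, ref, shift)
        combined = [a & b for a, b in zip(combined, m)]
    return sum(combined) <= max_errors
```

Candidate locations rejected here never reach the quadratic-time alignment, which is exactly where the reported speedups come from.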
LoRTE: Detecting transposon-induced genomic variants using low coverage PacBio long read sequences.
Disdero, Eric; Filée, Jonathan
2017-01-01
Population genomic analysis of transposable elements has greatly benefited from recent advances in sequencing technologies. However, the short size of the reads and the propensity of transposable elements to nest in highly repeated regions of genomes limit the efficiency of bioinformatic tools when Illumina or 454 technologies are used. Fortunately, long read sequencing technologies generating read lengths that may span the entire length of full transposons are now available. However, existing TE population genomics software was not designed to handle long reads, and the development of new dedicated tools is needed. LoRTE is the first tool able to use PacBio long read sequences to identify transposon deletions and insertions between a reference genome and the genomes of different strains or populations. Tested against simulated and genuine Drosophila melanogaster PacBio datasets, LoRTE appears to be a reliable and broadly applicable tool to study the dynamics and evolutionary impact of transposable elements using low coverage, long read sequences. LoRTE is an efficient and accurate tool to identify structural genomic variants caused by TE insertion or deletion. LoRTE is available for download at http://www.egce.cnrs-gif.fr/?p=6422.
Barrick, Jeffrey E; Colburn, Geoffrey; Deatherage, Daniel E; Traverse, Charles C; Strand, Matthew D; Borges, Jordan J; Knoester, David B; Reba, Aaron; Meyer, Austin G
2014-11-29
Mutations that alter chromosomal structure play critical roles in evolution and disease, including in the origin of new lifestyles and pathogenic traits in microbes. Large-scale rearrangements in genomes are often mediated by recombination events involving new or existing copies of mobile genetic elements, recently duplicated genes, or other repetitive sequences. Most current software programs for predicting structural variation from short-read DNA resequencing data are intended primarily for use on human genomes. They typically disregard information in reads mapping to repeat sequences, and significant post-processing and manual examination of their output is often required to rule out false-positive predictions and precisely describe mutational events. We have implemented an algorithm for identifying structural variation from DNA resequencing data as part of the breseq computational pipeline for predicting mutations in haploid microbial genomes. Our method evaluates the support for new sequence junctions present in a clonal sample from split-read alignments to a reference genome, including matches to repeat sequences. Then, it uses a statistical model of read coverage evenness to accept or reject these predictions. Finally, breseq combines predictions of new junctions and deleted chromosomal regions to output biologically relevant descriptions of mutations and their effects on genes. We demonstrate the performance of breseq on simulated Escherichia coli genomes with deletions generating unique breakpoint sequences, new insertions of mobile genetic elements, and deletions mediated by mobile elements. Then, we reanalyze data from an E. coli K-12 mutation accumulation evolution experiment in which structural variation was not previously identified. Transposon insertions and large-scale chromosomal changes detected by breseq account for ~25% of spontaneous mutations in this strain. 
In all cases, we find that breseq is able to reliably predict structural variation with modest read-depth coverage of the reference genome (>40-fold). Using breseq to predict structural variation should be useful for studies of microbial epidemiology, experimental evolution, synthetic biology, and genetics when a reference genome for a closely related strain is available. In these cases, breseq can discover mutations that may be responsible for important or unintended changes in genomes that might otherwise go undetected.
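The coverage side of this kind of structural-variant calling can be illustrated simply: contiguous stretches of near-zero read depth over the reference are candidate deletions. The sketch below is a toy version of that idea only; it stands in for, and should not be confused with, breseq's statistical model of read coverage evenness and its junction evidence.

```python
def call_deletions(coverage, min_len=3, max_cov=0):
    """Flag contiguous reference regions whose read depth stays at or
    below max_cov for at least min_len positions. Returns half-open
    (start, end) intervals in reference coordinates."""
    deletions, start = [], None
    for pos, cov in enumerate(coverage):
        if cov <= max_cov:
            if start is None:
                start = pos
        else:
            if start is not None and pos - start >= min_len:
                deletions.append((start, pos))
            start = None
    if start is not None and len(coverage) - start >= min_len:
        deletions.append((start, len(coverage)))
    return deletions

# Hypothetical per-base depth across a 10 bp reference slice.
depth = [12, 11, 13, 0, 0, 0, 0, 10, 12, 9]
dels = call_deletions(depth)
```

A real caller would additionally require new-junction support at the breakpoints before reporting the event.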
ERIC Educational Resources Information Center
Grigorenko, Elena L.; Ngorosho, Damaris; Jukes, Matthew; Bundy, Donald
2006-01-01
In this article, we discuss two characteristics of the majority of current behaviour- and molecular-genetic studies of reading ability and disability, specifically, the ascertainment strategies and the populations from which samples are selected. In the context of this discussion, we present data that we collected on a sample of Swahili-speaking…
ERIC Educational Resources Information Center
Connolly, Bruce, Comp.
1986-01-01
This first installment of four-part "Online/Database Laserdisk Directory" reports on aspects of laserdisks including: product name; product description; company name; compatibility information; type of laserdisk (compact disc read-only-memory, videodisk); software used; interface with magnetic media capability; conditions of usage;…
Sight-Word Practice in a Flash!
ERIC Educational Resources Information Center
Erwin, Robin W., Jr.
2016-01-01
For learners who need sight-word practice, including young students and struggling readers, digital flash cards may promote automatic word recognition when used as a supplemental activity to regular reading instruction. A novel use of common presentation software efficiently supports this practice strategy.
ERIC Educational Resources Information Center
Wise, Justin C.; Sevcik, Rose A.; Morris, Robin D.; Lovett, Maureen W.; Wolf, Maryanne; Kuhn, Melanie; Meisinger, Beth; Schwanenflugel, Paula
2010-01-01
Purpose: The purpose of this study was to examine whether different measures of oral reading fluency relate differentially to reading comprehension performance in two samples of second-grade students: (a) students who evidenced difficulties with nonsense-word oral reading fluency, real-word oral reading fluency, and oral reading fluency of…
MICCA: a complete and accurate software for taxonomic profiling of metagenomic data.
Albanese, Davide; Fontana, Paolo; De Filippo, Carlotta; Cavalieri, Duccio; Donati, Claudio
2015-05-19
The introduction of high throughput sequencing technologies has triggered an increase in the number of studies in which the microbiota of environmental and human samples is characterized through the sequencing of selected marker genes. While experimental protocols have undergone a process of standardization that makes them accessible to a large community of scientists, standard and robust data analysis pipelines are still lacking. Here we introduce MICCA, a software pipeline for the processing of amplicon metagenomic datasets that efficiently combines quality filtering, clustering of Operational Taxonomic Units (OTUs), taxonomy assignment and phylogenetic tree inference. MICCA provides accurate results reaching a good compromise between modularity and usability. Moreover, we introduce a de-novo clustering algorithm specifically designed for the inference of Operational Taxonomic Units (OTUs). Tests on real and synthetic datasets show that thanks to the optimized reads filtering process and to the new clustering algorithm, MICCA provides estimates of the number of OTUs and of other common ecological indices that are more accurate and robust than currently available pipelines. Analysis of public metagenomic datasets shows that the higher consistency of results improves our understanding of the structure of environmental and human associated microbial communities. MICCA is an open source project.
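Greedy de novo OTU picking of the general kind such pipelines build on can be sketched in a few lines. The identity function below is a positionwise simplification that assumes equal-length reads (real pipelines align sequences first), and the threshold and reads are illustrative; this is not MICCA's own clustering algorithm.

```python
def identity(a, b):
    """Fraction of matching positions; a positionwise simplification
    that assumes equal-length sequences (real tools align first)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def greedy_otu_cluster(reads, threshold=0.97):
    """Greedy de novo OTU picking: process reads in order (typically
    by abundance) and seed a new OTU centroid whenever no existing
    centroid is at least `threshold` identical to the read."""
    centroids, assignments = [], []
    for read in reads:
        for idx, c in enumerate(centroids):
            if identity(read, c) >= threshold:
                assignments.append(idx)
                break
        else:
            centroids.append(read)
            assignments.append(len(centroids) - 1)
    return centroids, assignments

# Toy reads clustered at 90% identity for the sake of a short example.
reads = ["ACGTACGTAC", "ACGTACGTAA", "TTTTTTTTTT"]
centroids, assignments = greedy_otu_cluster(reads, threshold=0.90)
```

The number of centroids is the OTU count estimate whose accuracy and robustness the benchmarks above evaluate.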
Sleep: An Open-Source Python Software for Visualization, Analysis, and Staging of Sleep Data
Combrisson, Etienne; Vallat, Raphael; Eichenlaub, Jean-Baptiste; O'Reilly, Christian; Lajnef, Tarek; Guillot, Aymeric; Ruby, Perrine M.; Jerbi, Karim
2017-01-01
We introduce Sleep, a new Python open-source graphical user interface (GUI) dedicated to visualization, scoring and analysis of sleep data. Among its most prominent features are: (1) dynamic display of polysomnographic data, spectrogram, hypnogram and topographic maps with several customizable parameters; (2) automatic detection of several sleep features such as spindles, K-complexes, slow waves, and rapid eye movements (REM); (3) practical signal processing tools such as re-referencing and filtering; and (4) display of main descriptive statistics, including publication-ready tables and figures. The software package supports loading and reading raw EEG data from standard file formats such as European Data Format, in addition to a range of commercial data formats. Most importantly, Sleep is built on top of the VisPy library, which provides GPU-based fast and high-level visualization. As a result, it is capable of efficiently handling and displaying large sleep datasets. Sleep is freely available (http://visbrain.org/sleep) and comes with sample datasets and extensive documentation. Novel functionalities will continue to be added, and open-science community efforts are expected to enhance the capacities of this module. PMID:28983246
On the Effects of Motivation on Reading Performance Growth in Secondary School
ERIC Educational Resources Information Center
Retelsdorf, Jan; Koller, Olaf; Moller, Jens
2011-01-01
This research aimed at identifying unique effects of reading motivation on reading performance when controlling for cognitive skills, familial, and demographic background. We drew upon a longitudinal sample of N = 1508 secondary school students from 5th to 8th grade. Two types of intrinsic reading motivation (reading enjoyment, reading for…
Prediction and Stability of Reading Problems in Middle Childhood
ERIC Educational Resources Information Center
Ritchey, Kristen D.; Silverman, Rebecca D.; Schatschneider, Christopher; Speece, Deborah L.
2015-01-01
The longitudinal prediction of reading problems from fourth grade to sixth grade was investigated with a sample of 173 students. Reading problems at the end of sixth grade were defined by significantly below average performance (= 15th percentile) on reading factors defining word reading, fluency, and reading comprehension. Sixth grade poor reader…
Reading Activities of American Adults.
ERIC Educational Resources Information Center
Sharon, Amiel T.
A reading activities survey as part of the Targeted Research and Development Reading Program was done by interviewing 3,504 adults, aged 16 years or older, selected by area probability sampling. Among the preliminary findings was that the most frequent type of reading is newspaper reading. Seven out of 10 people read or look at a newspaper during…
Equivalence of Screen versus Print Reading Comprehension Depends on Task Complexity and Proficiency
ERIC Educational Resources Information Center
Lenhard, Wolfgang; Schroeders, Ulrich; Lenhard, Alexandra
2017-01-01
As reading and reading assessment become increasingly implemented on electronic devices, the question arises whether reading on screen is comparable with reading on paper. To examine potential differences, we studied reading processes on different proficiency and complexity levels. Specifically, we used data from the standardization sample of the…
STORMSeq: An Open-Source, User-Friendly Pipeline for Processing Personal Genomics Data in the Cloud
Karczewski, Konrad J.; Fernald, Guy Haskin; Martin, Alicia R.; Snyder, Michael; Tatonetti, Nicholas P.; Dudley, Joel T.
2014-01-01
The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5–10 hours to process a full exome sequence and $30 and 3–8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2. PMID:24454756
Peeters, Marieke; de Moor, Jan; Verhoeven, Ludo
2011-01-01
The goal of the present study was to obtain an overview of the emergent literacy activities, instructional adaptations and school absence of children with cerebral palsy (CP) compared to typically developing peers. The results showed differences between the groups in the amount of emergent literacy instruction. While time dedicated to storybook reading and independent picture-book reading was comparable, the children with CP received fewer opportunities to work with educational software, and more time was dedicated to rhyming games and singing. For the children with CP, the levels of speech, intellectual, and physical impairment were all related to the amount of time spent in emergent literacy instruction. Additionally, both the amount of time spent training reading precursors and the number of specific reading precursors trained were related to emergent literacy skills. Copyright © 2010 Elsevier Ltd. All rights reserved.
Cuffney, Thomas F.; Brightbill, Robin A.
2011-01-01
The Invertebrate Data Analysis System (IDAS) software was developed to provide an accurate, consistent, and efficient mechanism for analyzing invertebrate data collected as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. The IDAS software is a stand-alone program for personal computers that run Microsoft Windows®. It allows users to read data downloaded from the NAWQA Program Biological Transactional Database (Bio-TDB) or to import data from other sources either as Microsoft Excel® or Microsoft Access® files. The program consists of five modules: Edit Data, Data Preparation, Calculate Community Metrics, Calculate Diversities and Similarities, and Data Export. The Edit Data module allows the user to subset data on the basis of taxonomy or sample type, extract a random subsample of data, combine or delete data, summarize distributions, resolve ambiguous taxa (see glossary) and conditional/provisional taxa, import non-NAWQA data, and maintain and create files of invertebrate attributes that are used in the calculation of invertebrate metrics. The Data Preparation module allows the user to select the type(s) of sample(s) to process, calculate densities, delete taxa on the basis of laboratory processing notes, delete pupae or terrestrial adults, combine lifestages or keep them separate, select a lowest taxonomic level for analysis, delete rare taxa on the basis of the number of sites where a taxon occurs and (or) the abundance of a taxon in a sample, and resolve taxonomic ambiguities by one of four methods. The Calculate Community Metrics module allows the user to calculate 184 community metrics, including metrics based on organism tolerances, functional feeding groups, and behavior. The Calculate Diversities and Similarities module allows the user to calculate nine diversity and eight similarity indices.
The Data Export module allows the user to export data to other software packages (CANOCO, Primer, PC-ORD, MVSP) and produce tables of community data that can be imported into spreadsheet, database, graphics, statistics, and word-processing programs. The IDAS program facilitates the documentation of analyses by keeping a log of the data that are processed, the files that are generated, and the program settings used to process the data. Though the IDAS program was developed to process NAWQA Program invertebrate data downloaded from Bio-TDB, the Edit Data module includes tools that can be used to convert non-NAWQA data into Bio-TDB format. Consequently, the data manipulation, analysis, and export procedures provided by the IDAS program can be used to process data generated outside of the NAWQA Program.
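The nine diversity indices are not named in the abstract; as one standard example of the kind of quantity such a module computes, the Shannon diversity index can be calculated from taxon abundances:

```python
import math

def shannon_diversity(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

# Four taxa with equal abundance give the maximum H' = ln(4).
print(round(shannon_diversity([10, 10, 10, 10]), 4))  # 1.3863
```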
Genotyping in the cloud with Crossbow.
Gurtowski, James; Schatz, Michael C; Langmead, Ben
2012-09-01
Crossbow is a scalable, portable, and automatic cloud computing tool for identifying SNPs from high-coverage, short-read resequencing data. It is built on Apache Hadoop, an implementation of the MapReduce software framework. Hadoop allows Crossbow to distribute read alignment and SNP calling subtasks over a cluster of commodity computers. Two robust tools, Bowtie and SOAPsnp, implement the fundamental alignment and variant calling operations respectively, and have demonstrated capabilities within Crossbow of analyzing approximately one billion short reads per hour on a commodity Hadoop cluster with 320 cores. Through protocol examples, this unit will demonstrate the use of Crossbow for identifying variations in three different operating modes: on a Hadoop cluster, on a single computer, and on the Amazon Elastic MapReduce cloud computing service.
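The MapReduce dataflow described above can be sketched conceptually as follows. This is an illustration only: Crossbow delegates the map and reduce steps to Bowtie and SOAPsnp on a Hadoop cluster, whereas this toy uses exact-match alignment and majority voting, and all names are hypothetical.

```python
# Conceptual MapReduce sketch of a Crossbow-style workflow: map tasks align
# reads (here by exact substring search against a toy reference), and the
# reduce step groups aligned bases by reference position to call consensus.
from collections import defaultdict

REFERENCE = "ACGTACGTGG"

def map_align(read):
    """Map step: emit (position, base) pairs for an exact-match alignment."""
    pos = REFERENCE.find(read)
    if pos < 0:
        return []  # unaligned reads emit nothing
    return [(pos + i, base) for i, base in enumerate(read)]

def reduce_call(pairs):
    """Reduce step: majority base per reference position."""
    piles = defaultdict(list)
    for pos, base in pairs:
        piles[pos].append(base)
    return {pos: max(set(bases), key=bases.count) for pos, bases in piles.items()}

reads = ["ACGT", "CGTA", "GTAC"]
pairs = [p for r in reads for p in map_align(r)]
calls = reduce_call(pairs)
print(calls[3])  # 'T': position 3 is covered by all three reads
```

The appeal of the MapReduce framing is that `map_align` calls are independent per read and `reduce_call` is independent per position, so both distribute trivially across a cluster.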
ERIC Educational Resources Information Center
Werfel, Krystal L.; Krimm, Hannah
2017-01-01
Purpose: The purpose of this preliminary study was to (a) compare the pattern of reading subtypes among a clinical sample of children with specific language impairment (SLI) and children with typical language and (b) evaluate phonological and nonphonological language deficits within each reading impairment subtype. Method: Participants were 32…
ERIC Educational Resources Information Center
Bonifacci, Paola; Tobia, Valentina
2017-01-01
The present study evaluated which components within the simple view of reading model better predicted reading comprehension in a sample of bilingual language-minority children exposed to Italian, a highly transparent language, as a second language. The sample included 260 typically developing bilingual children who were attending either the first…
A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software
NASA Astrophysics Data System (ADS)
Oh, S. H.; Kang, Y. W.; Byun, Y. I.
2007-12-01
We present software which we developed for the multi-purpose CCD camera. This software can be used with all 3 types of CCD - KAF-0401E (768×512), KAF-1602E (1536×1024), KAF-3200E (2184×1472) - made by KODAK Co. For efficient CCD camera control, the software is operated with two independent processes: the CCD control program and the temperature/shutter operation program. This software is designed for fully automatic as well as manual operation under the LINUX system, and is controlled by the LINUX user signal procedure. We plan to use this software for an all sky survey system and also for night sky monitoring and sky observation. As our results, the read-out times of the CCDs are about 15 sec, 64 sec and 134 sec for the KAF-0401E, KAF-1602E and KAF-3200E respectively, because these times are limited by the data transmission speed of the parallel port. For larger format CCDs, higher speed data transmission is required. We are considering adapting this control software to use the USB port for high speed data transmission.
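A back-of-envelope check of the reported read-out times, assuming 16-bit pixels (an assumption; the pixel depth is not stated), shows that all three sensors imply roughly the same ~50 KB/s throughput, consistent with a shared parallel-port bottleneck:

```python
# Implied transfer throughput for each sensor, assuming 2 bytes per pixel
# (assumption: the abstract does not state the pixel depth).
sensors = {
    "KAF-0401E": (768, 512, 15),     # width, height, read-out seconds
    "KAF-1602E": (1536, 1024, 64),
    "KAF-3200E": (2184, 1472, 134),
}
for name, (w, h, seconds) in sensors.items():
    kbytes_per_s = w * h * 2 / seconds / 1024
    print(f"{name}: ~{kbytes_per_s:.0f} KB/s implied throughput")
```

The three values cluster near 50 KB/s, which supports the authors' conclusion that the port, not the CCDs, sets the read-out time.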
Yavaş, Gökhan; Koyutürk, Mehmet; Gould, Meetha P; McMahon, Sarah; LaFramboise, Thomas
2014-03-05
With the advent of paired-end high throughput sequencing, it is now possible to identify various types of structural variation on a genome-wide scale. Although many methods have been proposed for structural variation detection, most do not provide precise boundaries for identified variants. In this paper, we propose a new method, Distribution Based detection of Duplication Boundaries (DB2), for accurate detection of tandem duplication breakpoints, an important class of structural variation, with high precision and recall. Our computational experiments on simulated data show that DB2 outperforms state-of-the-art methods in terms of finding breakpoints of tandem duplications, with a higher positive predictive value (precision) in calling the duplications' presence. In particular, DB2's prediction of tandem duplications is correct 99% of the time even for very noisy data, while narrowing down the space of possible breakpoints within a margin of 15 to 20 bps on the average. Most of the existing methods provide boundaries in ranges that extend to hundreds of bases with lower precision values. Our method is also highly robust to varying properties of the sequencing library and to the sizes of the tandem duplications, as shown by its stable precision, recall and mean boundary mismatch performance. We demonstrate our method's efficacy using both simulated paired-end reads, and those generated from a melanoma sample and two ovarian cancer samples. Newly discovered tandem duplications are validated using PCR and Sanger sequencing. Our method, DB2, uses discordantly aligned reads, taking into account the distribution of fragment length to predict tandem duplications along with their breakpoints on a donor genome. The proposed method fine tunes the breakpoint calls by applying a novel probabilistic framework that incorporates the empirical fragment length distribution to score each feasible breakpoint. 
DB2 is implemented in Java programming language and is freely available at http://mendel.gene.cwru.edu/laframboiselab/software.php.
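The probabilistic breakpoint scoring described above can be sketched as follows. This is a simplified stand-in for DB2's framework (the Gaussian fragment-length model and all names are assumptions), scoring candidate duplication lengths by the likelihood of the fragment lengths they imply for discordant pairs:

```python
# Illustrative scoring of candidate tandem-duplication lengths from
# discordantly aligned read pairs, using the library's fragment-length
# distribution. Not the paper's actual model; names are hypothetical.
import math
from statistics import NormalDist

def score_duplication(dup_length, pairs, frag_mean, frag_sd):
    """Log-likelihood of the observed pair spans under a candidate
    duplication length: each pair's implied fragment length is its
    mapped span minus the duplicated stretch."""
    dist = NormalDist(frag_mean, frag_sd)
    total = 0.0
    for left, right in pairs:
        implied = (right - left) - dup_length
        total += math.log(dist.pdf(implied) + 1e-300)  # guard underflow
    return total

# Discordant pairs whose spans exceed the ~300 bp library mean by ~500 bp
# support a ~500 bp tandem duplication.
pairs = [(100, 900), (120, 910), (90, 905)]
best = max(range(0, 1000, 10),
           key=lambda d: score_duplication(d, pairs, frag_mean=300, frag_sd=30))
print(best)  # 500
```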
Common Sense Conversion: Can You Read Me?
ERIC Educational Resources Information Center
Crawford, Walt
1988-01-01
Discusses basic approaches and available software for five categories of file conversion: (1) converting between different computers with different operating systems; (2) converting between different computers with the same operating systems; (3) converting between different applications on the same computer; (4) converting between different…
Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
One of the most important signs of systemic disease that presents on the retina is vascular abnormality, as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, thus limiting the software-aided efficiency. Automation thus holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of retinal vasculature, which aids the readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology such as tortuosity and branching angles, as well as highlights of areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, in order for the reader to make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with the software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make computer-assisted vasculature assessments with high accuracy and consistency, at a reduced reading time.
Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database.
Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G; Parkhill, Julian; Rajandream, Marie-Adèle
2008-12-01
Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/
NASA Astrophysics Data System (ADS)
Croitoru, Bogdan; Tulbure, Adrian; Abrudean, Mihail; Secara, Mihai
2015-02-01
The present paper describes a software method for creating and managing one type of Transducer Electronic Datasheet (TEDS) according to the IEEE 1451.4 standard, in order to develop a prototype smart multi-sensor platform (with up to ten different analog sensors simultaneously connected) with Plug and Play capabilities over ETHERNET and Wi-Fi. The experiments used: one analog temperature sensor, one analog light sensor, one PIC32-based microcontroller development board with analog and digital I/O ports and other computing resources, and one 24LC256 I2C (Inter-Integrated Circuit standard) serial Electrically Erasable Programmable Read Only Memory (EEPROM) with 32 KB of available space and a 3-byte internal buffer for page writes (1 byte for data and 2 bytes for address). A prototype algorithm was developed for writing and reading TEDS information to/from I2C EEPROM memories using the standard C language (with up to ten different TEDS blocks coexisting in the same EEPROM device at once). The algorithm is able to write and read one type of TEDS: transducer information with standard TEDS content. A second software application, written on the VB.NET platform, was developed in order to access the EEPROM sensor information from a computer through a serial interface (USB).
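The paged EEPROM write scheme described above (two address bytes followed by data) can be simulated as follows. The original implementation is in C on a PIC32; this Python sketch uses hypothetical names and a fake in-memory device, and assumes the 24LC256's 64-byte page-write buffer, so writes must be split at page boundaries:

```python
# Simulation of writing a TEDS block to a 24LC256-style EEPROM: each write
# frame carries two address bytes (high, low) followed by data, and writes
# are split at 64-byte page boundaries (the 24LC256 page size).
PAGE_SIZE = 64
MEM_SIZE = 32 * 1024  # 32 KB device

class FakeEEPROM:
    def __init__(self):
        self.mem = bytearray(MEM_SIZE)

    def write_page(self, frame):
        """frame = [addr_hi, addr_lo, data...]; at most one page of data."""
        addr = (frame[0] << 8) | frame[1]
        data = frame[2:]
        self.mem[addr:addr + len(data)] = bytes(data)

def write_teds(eeprom, addr, payload):
    """Split a TEDS payload into page-aligned framed writes."""
    i = 0
    while i < len(payload):
        room = PAGE_SIZE - (addr % PAGE_SIZE)  # bytes left in current page
        chunk = payload[i:i + room]
        eeprom.write_page([addr >> 8, addr & 0xFF, *chunk])
        addr += len(chunk)
        i += len(chunk)

rom = FakeEEPROM()
write_teds(rom, 60, list(b"TEDS-TEMP-SENSOR"))  # crosses a page boundary
print(rom.mem[60:76] == b"TEDS-TEMP-SENSOR")  # True
```

Splitting at page boundaries matters on the real part: a write that runs past a page wraps around within the page and corrupts earlier bytes.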
Simplifier: a web tool to eliminate redundant NGS contigs.
Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur
2012-01-01
Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, this data consists of short reads. This reduction in the size of the reads, compared to those obtained with previous methodologies, presents new challenges, including the need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software tool that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml. It requires Sun JDK 6 or higher.
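The abstract does not state Simplifier's redundancy criterion. As a hedged sketch of the general task, one natural criterion is containment: drop any contig that appears, directly or reverse-complemented, inside a longer contig that is being kept.

```python
# Sketch of redundant-contig elimination by containment. The containment
# criterion is an assumption for illustration; Simplifier's exact
# redundancy test is not described in the abstract.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def simplify(contigs):
    kept = []
    for c in sorted(contigs, key=len, reverse=True):  # longest first
        if any(c in k or revcomp(c) in k for k in kept):
            continue  # redundant: contained in an already-kept contig
        kept.append(c)
    return kept

contigs = ["ACGTACGTGG", "GTACGT", "TTTTT", "ACGTAC"]
print(simplify(contigs))  # ['ACGTACGTGG', 'TTTTT']
```

Processing longest-first guarantees a contained contig is always tested against every contig that could contain it.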
Scheuch, Matthias; Höper, Dirk; Beer, Martin
2015-03-03
Fuelled by the advent and subsequent development of next generation sequencing technologies, metagenomics became a powerful tool for the analysis of microbial communities both scientifically and diagnostically. The biggest challenge is the extraction of relevant information from the huge sequence datasets generated for metagenomics studies. Although a plethora of tools are available, data analysis is still a bottleneck. To overcome the bottleneck of data analysis, we developed an automated computational workflow called RIEMS - Reliable Information Extraction from Metagenomic Sequence datasets. RIEMS assigns every individual read sequence within a dataset taxonomically by cascading different sequence analyses with decreasing stringency of the assignments using various software applications. After completion of the analyses, the results are summarised in a clearly structured result protocol organised taxonomically. The high accuracy and performance of RIEMS analyses were proven in comparison with other tools for metagenomics data analysis using simulated sequencing read datasets. RIEMS has the potential to fill the gap that still exists with regard to data analysis for metagenomics studies. The usefulness and power of RIEMS for the analysis of genuine sequencing datasets was demonstrated with an early version of RIEMS in 2011 when it was used to detect the orthobunyavirus sequences leading to the discovery of Schmallenberg virus.
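The cascading strategy described above, running analyses in order of decreasing stringency and accepting the first assignment, can be sketched generically. RIEMS chains external alignment tools; the mock stages, database, and names below are illustrative assumptions only.

```python
# Generic sketch of cascaded taxonomic read assignment with decreasing
# stringency, in the spirit of the RIEMS workflow description.
def exact_match(read, db):
    """Most stringent stage: exact sequence lookup."""
    return db.get(read)

def prefix_match(read, db):
    """Less stringent stage: shared 8-base prefix with a reference."""
    for ref, taxon in db.items():
        if ref.startswith(read[:8]):
            return taxon
    return None

def assign(read, db, stages=(exact_match, prefix_match)):
    """Run stages in order of decreasing stringency; first hit wins."""
    for stage in stages:
        taxon = stage(read, db)
        if taxon is not None:
            return taxon
    return "unassigned"

db = {"ACGTACGTACGT": "Taxon_A", "TTTTCCCCGGGG": "Taxon_B"}
print(assign("ACGTACGTAAAA", db))  # falls through to the prefix stage
```

The cascade keeps the expensive, permissive stages off most reads: only reads the stringent stages fail to place ever reach them.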
Teaching Reading Comprehension through Collaborative Strategic Reading.
ERIC Educational Resources Information Center
Vaughn, Sharon; Klingner, Janette Kettman
1999-01-01
Provides an overview of collaborative strategic reading (CSR) as an approach to enhancing the reading-comprehension skills of students with learning disabilities. Procedures for implementing CSR with collaborative groups and techniques for teaching reading-comprehension skills are provided. The role of the teacher is described and sample teaching…
Development of Software to Model AXAF-I Image Quality
NASA Technical Reports Server (NTRS)
Ahmad, Anees; Hawkins, Lamar
1996-01-01
This draft final report describes the work performed under the delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command mode version of the GRAZTRACE software, originally developed by MSFC. A structural data interface has been developed for the EAL (old SPAR) finite element analysis FEA program, which is being used by MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from the EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for the deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180 degrees symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA, when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file, which can be used by GT for ray tracing for the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as built mirrors were predicted to cross check the results obtained by Kodak.
Transcending the Curricular Barrier between Fitness and Reading with FitLit
ERIC Educational Resources Information Center
Opitz, Michael F.
2011-01-01
The author discusses how FitLit, children's literature that spotlights the multiple aspects of health and well-being, offers a vehicle for integrating reading and fitness into existing classroom routines such as guided reading, read-alouds, independent reading, and reading and writing workshop. Sample FitLit titles are provided as well as a…
How Fast Can We Read in the Mind? Developmental Trajectories of Silent Reading Fluency
ERIC Educational Resources Information Center
Ciuffo, Massimo; Myers, Jane; Ingrassia, Massimo; Milanese, Antonio; Venuti, Maria; Alquino, Ausilia; Baradello, Alice; Stella, Giacomo; Gagliano, Antonella
2017-01-01
Silent reading fluency is not an observable behaviour, and its evaluation is therefore perceived as more challenging and less reliable than that of oral reading fluency. The present research aims to measure silent reading speed in a sample of proficient students, assessed by an original silent reading fluency task based on behavioural…
The Impact of Guided Reading Instruction on Elementary Students' Reading Fluency and Accuracy
ERIC Educational Resources Information Center
Teets, Agnes Jean
2017-01-01
This study examined the impact of Guided Reading instruction on elementary students' ability to read with fluency and accuracy. A one-way analysis of covariance with pre and posttest design was performed and applied to determine the impact of Guided Reading instruction on elementary students' reading fluency and accuracy. The sample of subjects…
ERIC Educational Resources Information Center
Dittman, Cassandra K.
2016-01-01
Concurrent associations between teacher ratings of inattention, hyperactivity and pre-reading skills were examined in 64 pre-schoolers who had not commenced formal reading instruction and 136 school entrants who were in the first weeks of reading instruction. Both samples of children completed measures of pre-reading skills, namely phonological…
Zhang, Mingxia; Li, Jin; Chen, Chuansheng; Mei, Leilei; Xue, Gui; Lu, Zhonglin; Chen, Chunhui; He, Qinghua; Wei, Miao; Dong, Qi
2012-01-01
Previous functional neuroimaging studies have shown that the left mid-fusiform cortex plays a critical role in reading. However, there is very limited research relating this region’s anatomical structure to reading performance either in native or second language. Using structural MRI and three reading tasks (Chinese characters, English words, and alphabetic pseudowords) and a non-reading task (visual-auditory learning), this study investigated the contributions of the left mid-fusiform cortical thickness to reading in a large sample of 226 Chinese subjects. Results showed that cortical thickness in the left mid-fusiform gyrus was positively correlated with performance on all three reading tasks but not with the performance on the non-reading task. Our findings provide structural evidence for the left mid-fusiform cortex as the “gateway” region for reading Chinese and English. The absence of the association between the left mid-fusiform cortical thickness and non-reading performance implied the specific role of this area in reading skills, not in general language skills. PMID:23022094
MosaicSolver: a tool for determining recombinants of viral genomes from pileup data
Wood, Graham R.; Ryabov, Eugene V.; Fannon, Jessica M.; Moore, Jonathan D.; Evans, David J.; Burroughs, Nigel
2014-01-01
Viral recombination is a key evolutionary mechanism, aiding escape from host immunity, contributing to changes in tropism and possibly assisting transmission across species barriers. The ability to determine whether recombination has occurred and to locate associated specific recombination junctions is thus of major importance in understanding emerging diseases and pathogenesis. This paper describes a method for determining recombinant mosaics (and their proportions) originating from two parent genomes, using high-throughput sequence data. The method involves setting the problem geometrically and the use of appropriately constrained quadratic programming. Recombinants of the honeybee deformed wing virus and the Varroa destructor virus-1 are inferred to illustrate the method from both siRNAs and reads sampling the viral genome population (cDNA library); our results are confirmed experimentally. Matlab software (MosaicSolver) is available. PMID:25120266
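In the two-genome case, the constrained quadratic program reduces to a one-parameter least-squares fit that can be solved in closed form. The sketch below is illustrative (hypothetical data, naive projection onto [0, 1]), not MosaicSolver's implementation:

```python
# Closed-form least-squares estimate of the proportion of profile `a`
# in observed pileup frequencies, minimising ||p*a + (1-p)*b - obs||^2
# with p clipped to [0, 1]. Illustrates the geometry of the mixture
# problem without a general QP solver.

def mixture_proportion(a, b, obs):
    """Proportion of profile `a` mixed with profile `b` that best fits `obs`."""
    d = [x - y for x, y in zip(a, b)]    # direction a - b
    r = [o - y for o, y in zip(obs, b)]  # residual obs - b
    p = sum(x * y for x, y in zip(d, r)) / sum(x * x for x in d)
    return min(1.0, max(0.0, p))         # project onto the feasible interval

a = [1.0, 0.0, 1.0, 0.0]    # variant frequencies of parent/mosaic 1
b = [0.0, 1.0, 0.0, 1.0]    # variant frequencies of parent/mosaic 2
obs = [0.7, 0.3, 0.7, 0.3]  # observed pileup frequencies
print(round(mixture_proportion(a, b, obs), 3))  # 0.7
```

With more than two mosaics the problem no longer has this closed form, which is where a constrained quadratic-programming solver, as used in the paper, becomes necessary.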
Flight code validation simulator
NASA Astrophysics Data System (ADS)
Sims, Brent A.
1996-05-01
An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.
Differential expression analysis for RNAseq using Poisson mixed models.
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang
2017-06-20
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
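The over-dispersion that motivates the random effects terms can be demonstrated with a toy simulation (stdlib only; Knuth's sampler and arbitrary parameters, not MACAU's inference algorithm): counts drawn with a log-normal latent effect show variance well above the mean, violating the plain Poisson assumption.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative algorithm; fine for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
# A latent log-normal effect on the rate inflates variance beyond the mean.
counts = [poisson(math.exp(rng.gauss(0.0, 1.0)), rng) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
dispersion = var / mean  # ~1 for a pure Poisson, clearly >1 here
```

A random effect in the linear predictor absorbs exactly this kind of extra-Poisson variation; the paper's second random effect additionally captures covariance between related samples.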
A hard-to-read font reduces the framing effect in a large sample.
Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik
2018-04-01
How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.
Skiba, Thomas; Landi, Nicole; Wagner, Richard
2011-01-01
Reading ability and specific reading disability (SRD) are complex traits involving several cognitive processes and are shaped by a complex interplay of genetic and environmental forces. Linkage studies of these traits have identified several susceptibility loci. Association studies have gone further in detecting candidate genes that might underlie these signals. These results have been obtained in samples of mainly European ancestry, which vary in their languages, inclusion criteria, and phenotype assessments. Such phenotypic heterogeneity across samples makes understanding the relationship between reading (dis)ability and reading-related processes and the genetic factors difficult; in addition, it may negatively influence attempts at replication. In moving forward, the identification of preferable phenotypes for future sample collection may improve the replicability of findings. This review of all published linkage and association results from the past 15 years was conducted to determine if certain phenotypes produce more replicable and consistent results than others. PMID:21243420
UCam: universal camera controller and data acquisition system
NASA Astrophysics Data System (ADS)
McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.
2010-07-01
This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam) which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC), of San Diego, USA. UCam provides an alternative software solution programmed in C/C++ and python that runs on a real-time Linux operating system to achieve critical speed performance for high time resolution instrumentation. UCam is a server based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user friendly client application interface that has several features including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in python or as text files. UCam has an application centric design where custom applications for different types of detectors and read out modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can be easily reconfigured to readout any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, FOWLER, NDR and threshold limited NDR. UCam has been developed over several years for use on many instruments such as the Wide Field Infra Red Camera (WFCAM) at UKIRT in Hawaii, the mid-IR imager/spectrometer UIST and is also used on instruments at SUBARU, Gemini and Palomar.
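Of the sampling modes listed, Fowler sampling is simple to illustrate: average N non-destructive reads taken just after reset and N reads at the end of the integration, then difference the averages to suppress read noise. A toy sketch (hypothetical ADU values, not UCam code):

```python
def fowler_signal(first_reads, last_reads):
    """Fowler (multi-accumulate) sampling: difference of the mean of N
    reads after reset and N reads at the end of the integration."""
    if len(first_reads) != len(last_reads):
        raise ValueError("Fowler pairs must match in count")
    n = len(first_reads)
    return sum(last_reads) / n - sum(first_reads) / n

# Hypothetical 4-pair Fowler set: pedestal ~100 ADU, accumulated signal ~50 ADU.
pedestal = [101.0, 99.0, 100.5, 99.5]
final = [151.0, 149.0, 150.5, 149.5]
signal = fowler_signal(pedestal, final)
```

Averaging N pairs reduces the read-noise contribution by roughly the square root of N, which is why Fowler and NDR modes matter for low-signal infrared detectors.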
Effectiveness of a Metacognitive Reading Strategies Program for Improving Low Achieving EFL Readers
ERIC Educational Resources Information Center
Ismail, Nasrah Mahmoud; Tawalbeh, Tha'er Issa
2015-01-01
As the training of language learners was a main concern of EFL teachers, this study aimed to assess the effectiveness of metacognitive reading strategies instruction (MRSI) on Taif University EFL students who achieved low results in reading. The final sample of this study comprised 21 female university students. The sample was divided into two groups;…
ERIC Educational Resources Information Center
Janzen, Troy M.; Saklofske, Donald H.; Das, J. P.
2013-01-01
Two Canadian First Nations samples of Grades 3 and 4 children were assessed for cognitive processing, word reading, and phonological awareness skills. Both groups were from Plains Cree rural reservations in different provinces. The two groups showed significant differences on several key cognitive variables although there were more similarities…
ERIC Educational Resources Information Center
Van Nuys, Ute Elisabeth
1986-01-01
Presents reviews of the following mathematics software designed to teach young children counting, number recognition, visual discrimination, matching, addition, and subtraction skills; Stickybear Numbers, Learning with Leeper, Getting Ready to Read and Add, Counting Parade, Early Games for Young Children, Charlie Brown's 1,2,3's, Let's Go Fishing,…
Teach Your Computer to Read: Scanners and Optical Character Recognition.
ERIC Educational Resources Information Center
Marsden, Jim
1993-01-01
Desktop scanners can be used with a software technology called optical character recognition (OCR) to convert the text on virtually any paper document into an electronic form. OCR offers educators new flexibility in incorporating text into tests, lesson plans, and other materials. (MLF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, John; Castillo, Andrew
2016-09-21
This software contains a set of Python modules (input, search, cluster, analysis) that read input files containing spatial coordinates and associated attributes, which can be used to perform nearest neighbor search (spatial indexing via kdtree), cluster analysis/identification, and calculation of spatial statistics for analysis.
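The package accelerates these operations with a k-d tree (via spatial indexing); a stdlib-only sketch of the two core ideas, brute-force nearest-neighbor lookup and distance-threshold cluster identification via union-find, on hypothetical 2-D points:

```python
import math

def nearest(points, query):
    """Brute-force nearest neighbour (a k-d tree accelerates this step)."""
    return min(points, key=lambda p: math.dist(p, query))

def clusters(points, eps):
    """Single-linkage clusters: points within eps are joined (union-find)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Hypothetical coordinates: two tight pairs far apart.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
nn = nearest(pts, (4.8, 5.0))
cl = clusters(pts, eps=0.5)
```

The quadratic pair loop is the part a k-d tree replaces, turning neighbor queries from O(n) into roughly O(log n) each.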
CER_SRBAVG_Aqua-FM3-MODIS_Edition2A
Atmospheric Science Data Center
2014-07-24
Readme Files: Readme R4-671. Software Files: Read Package (C); UNIX C shell scripts for extracting regional CERES geo and non-geo fluxes.
How Community Colleges Can Capitalize on Changes in Information Services.
ERIC Educational Resources Information Center
Nourse, Jimmie Anne; Widman, Rudy
1991-01-01
Urges community college librarians to become leaders in library instruction by developing aggressive teaching programs using high-technology information resources, such as compact disc read-only-memory (CD-ROM), telecommunications, and on-line databases. Discusses training, hardware, software, and funding issues. (DMM)
Krishnan, Neeraja M.; Gaur, Prakhar; Chaudhary, Rakshit; Rao, Arjun A.; Panda, Binay
2012-01-01
Copy Number Alterations (CNAs), such as deletions and duplications, compose a larger percentage of genetic variations than single nucleotide polymorphisms or other structural variations in cancer genomes that undergo major chromosomal re-arrangements. It is, therefore, imperative to identify cancer-specific somatic copy number alterations (SCNAs), with respect to matched normal tissue, in order to understand their association with the disease. We have devised an accurate, sensitive, and easy-to-use tool, COPS, COpy number using Paired Samples, for detecting SCNAs. We rigorously tested the performance of COPS using short sequence simulated reads at various sizes and coverage of SCNAs, read depths, read lengths and also with real tumor:normal paired samples. We found COPS to perform better in comparison to other known SCNA detection tools for all evaluated parameters, namely, sensitivity (detection of true positives), specificity (detection of false positives) and size accuracy. COPS performed well for sequencing reads of all lengths when used with most upstream read alignment tools. Additionally, by incorporating a downstream boundary segmentation detection tool, the accuracy of SCNA boundaries was further improved. Here, we report an accurate, sensitive and easy to use tool in detecting cancer-specific SCNAs using short-read sequence data. In addition to cancer, COPS can be used for any disease as long as sequence reads from both disease and normal samples from the same individual are available. An added boundary segmentation detection module makes COPS detected SCNA boundaries more specific for the samples studied. COPS is available at ftp://115.119.160.213 with username “cops” and password “cops”. PMID:23110103
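The core signal in paired-sample SCNA detection is the tumor-to-normal read-depth ratio per genomic window. A minimal sketch (hypothetical window depths and threshold, not the COPS implementation) flags candidate duplications and deletions from log2 depth ratios:

```python
import math

def scna_candidates(tumor_depth, normal_depth, threshold=0.8):
    """Flag windows whose tumor:normal log2 depth ratio exceeds the
    threshold: positive -> candidate duplication, negative -> deletion."""
    calls = []
    for i, (t, n) in enumerate(zip(tumor_depth, normal_depth)):
        ratio = math.log2((t + 1) / (n + 1))  # +1 guards empty windows
        if ratio >= threshold:
            calls.append((i, "duplication", ratio))
        elif ratio <= -threshold:
            calls.append((i, "deletion", ratio))
    return calls

# Hypothetical mean read depths in consecutive windows of one chromosome.
tumor = [30, 31, 60, 62, 30, 2]
normal = [30, 30, 30, 30, 29, 31]
calls = scna_candidates(tumor, normal)
```

Windows 2 and 3 (depth doubled) come out as duplications and window 5 (depth collapsed) as a deletion; a segmentation step, as in COPS, would then merge adjacent calls and refine boundaries.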
The etiology of mathematical and reading (dis)ability covariation in a sample of Dutch twins.
Markowitz, Ezra M; Willemsen, Gonneke; Trumbetta, Susan L; van Beijsterveldt, Toos C E M; Boomsma, Dorret I
2005-12-01
The genetic etiology of mathematical and reading (dis)ability has been studied in a number of distinct samples, but the true nature of the relationship between the two remains unclear. Data from the Netherlands Twin Register was used to determine the etiology of the relationship between mathematical and reading (dis)ability in adolescent twins. Ratings of mathematical and reading problems were obtained from parents of over 1500 twin pairs. Results of bivariate structural equation modeling showed a genetic correlation around .60, which explained over 90% of the phenotypic correlation between mathematical and reading ability. The genetic model was the same for males and females.
ERIC Educational Resources Information Center
Solheim, Oddny Judith
2011-01-01
It has been hypothesized that students with low self-efficacy will struggle with complex reading tasks in assessment situations. In this study we examined whether perceived reading self-efficacy and reading task value uniquely predicted reading comprehension scores in two different item formats in a sample of fifth-grade students. Results showed…
ERIC Educational Resources Information Center
Parkin, Jason R.
2018-01-01
Oral language and word reading skills have important effects on reading comprehension. The Wechsler Individual Achievement Test-Third Edition (WIAT-III) measures both skill sets, but little is known about their specific effects on reading comprehension within this battery. Path analysis was used to evaluate the collective effects of reading and…
ERIC Educational Resources Information Center
Wickramaarachchi, Thilina Indrajie
2014-01-01
The study examines the interaction between reading and writing processes in general and more specifically the impact of pre-reading tasks incorporating writing tasks (referred to as "prw tasks") in helping the development of inferential reading comprehension. A sample of 70 first year ESL students of the University of Kelaniya were…
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding (B) psychometric curve fitting (C) smoothing and interpolation and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4% reduced to 2.9 % by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- .04 (sem) at 0.1mm and 1.82 +/- .06 at 0.25mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.
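In the spirit of the interpolation-based methods evaluated, the threshold contrast for a given detail diameter can be read off the proportion-correct vs. contrast curve. A simplified sketch (hypothetical readings and a 62.5% target level as a stand-in; not the CDCOM program's definition):

```python
def threshold_contrast(contrasts, prop_correct, target=0.625):
    """Linear interpolation of proportion correct vs. contrast to find
    the contrast at the target detection level (a simplified stand-in
    for psychometric curve fitting)."""
    pairs = sorted(zip(contrasts, prop_correct))
    for (c0, p0), (c1, p1) in zip(pairs, pairs[1:]):
        if p0 <= target <= p1:
            return c0 + (c1 - c0) * (target - p0) / (p1 - p0)
    raise ValueError("target not bracketed by the data")

# Hypothetical readings for one detail diameter of the CDMAM phantom.
c = [0.05, 0.10, 0.20, 0.40]
p = [0.30, 0.50, 0.75, 0.95]
t = threshold_contrast(c, p)
```

Smoothing the raw proportions before interpolation, as in methods (C) and (D), reduces the run-to-run scatter that this naive version inherits from the data.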
2012-01-24
used during the data collection. The computer recorded the V-I data using Signal Express software. The electric circuit model for the MHCD device was implemented and simulated using MATLAB's Simscape software.
Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2
NASA Technical Reports Server (NTRS)
Davis, Charles
1991-01-01
The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over the established form of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) much greater data access rate. Software for Optical Archive and Retrieval (SOAR) user's guide is presented.
SSPACE-LongRead: scaffolding bacterial draft genomes using long read sequence information
2014-01-01
Background The recent introduction of the Pacific Biosciences RS single molecule sequencing technology has opened new doors to scaffolding genome assemblies in a cost-effective manner. The long read sequence information is promised to enhance the quality of incomplete and inaccurate draft assemblies constructed from Next Generation Sequencing (NGS) data. Results Here we propose a novel hybrid assembly methodology that aims to scaffold pre-assembled contigs in an iterative manner using PacBio RS long read information as a backbone. On a test set comprising six bacterial draft genomes, assembled using either a single Illumina MiSeq or Roche 454 library, we show that even a 50× coverage of uncorrected PacBio RS long reads is sufficient to drastically reduce the number of contigs. Comparisons to the AHA scaffolder indicate our strategy is better capable of producing (nearly) complete bacterial genomes. Conclusions The current work describes our SSPACE-LongRead software which is designed to upgrade incomplete draft genomes using single molecule sequences. We conclude that the recent advances of the PacBio sequencing technology and chemistry, in combination with the limited computational resources required to run our program, allow genomes to be scaffolded in a fast and reliable manner. PMID:24950923
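The backbone idea is that a single long read spanning several contigs fixes their order, orientation, and approximate gap sizes. A toy sketch (hypothetical alignment coordinates, not the SSPACE-LongRead code) orders contigs by their alignment start on one PacBio read:

```python
def scaffold_from_long_read(alignments):
    """Order contigs along one long read and estimate inter-contig gaps.
    alignments: (contig_name, read_start, read_end) tuples."""
    ordered = sorted(alignments, key=lambda a: a[1])
    layout = [ordered[0][0]]
    gaps = []
    for prev, cur in zip(ordered, ordered[1:]):
        gaps.append(max(0, cur[1] - prev[2]))  # overlap -> gap of 0
        layout.append(cur[0])
    return layout, gaps

# Hypothetical alignments of three draft contigs to one PacBio read.
aln = [("ctg2", 5200, 9100), ("ctg1", 0, 5000), ("ctg3", 9050, 12000)]
layout, gaps = scaffold_from_long_read(aln)
```

The real tool iterates this over many reads and reconciles conflicting links; this sketch shows only the single-read ordering step.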
Balanced Reading Basals and the Impact on Third-Grade Reading Achievement
ERIC Educational Resources Information Center
Dorsey, Windy
2015-01-01
This convergent parallel mixed-methods study sought to determine whether the reading program increased third-grade student achievement. The research questions of the study examined the reading achievement scores of third-grade students and the effectiveness of McGraw-Hill Reading Wonders™. Significant differences were observed when a paired sample t test…
d'Assuncao, Jefferson; Irwig, Les; Macaskill, Petra; Chan, Siew F; Richards, Adele; Farnsworth, Annabelle
2007-01-01
Objective To compare the accuracy of liquid based cytology using the computerised ThinPrep Imager with that of manually read conventional cytology. Design Prospective study. Setting Pathology laboratory in Sydney, Australia. Participants 55 164 split sample pairs (liquid based sample collected after conventional sample from one collection) from consecutive samples of women choosing both types of cytology and whose specimens were examined between August 2004 and June 2005. Main outcome measures Primary outcome was accuracy of slides for detecting squamous lesions. Secondary outcomes were rate of unsatisfactory slides, distribution of squamous cytological classifications, and accuracy of detecting glandular lesions. Results Fewer unsatisfactory slides were found for imager read cytology than for conventional cytology (1.8% v 3.1%; P<0.001). More slides were classified as abnormal by imager read cytology (7.4% v 6.0% overall and 2.8% v 2.2% for cervical intraepithelial neoplasia of grade 1 or higher). Among 550 patients in whom imager read cytology was cervical intraepithelial neoplasia grade 1 or higher and conventional cytology was less severe than grade 1, 133 of 380 biopsy samples taken were high grade histology. Among 294 patients in whom imager read cytology was less severe than cervical intraepithelial neoplasia grade 1 and conventional cytology was grade 1 or higher, 62 of 210 biopsy samples taken were high grade histology. Imager read cytology therefore detected 71 more cases of high grade histology than did conventional cytology, resulting from 170 more biopsies. Similar results were found when one pathologist reread the slides, masked to cytology results. Conclusion The ThinPrep Imager detects 1.29 more cases of histological high grade squamous disease per 1000 women screened than conventional cytology, with cervical intraepithelial neoplasia grade 1 as the threshold for referral to colposcopy. 
More imager read slides than conventional slides were satisfactory for examination and more contained low grade cytological abnormalities. PMID:17604301
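The headline figure follows directly from the counts reported: 133 − 62 = 71 extra high grade cases among 55 164 split-sample pairs, at a cost of 380 − 210 = 170 extra biopsies. A quick arithmetic check:

```python
# Counts reported in the abstract.
extra_cases = 133 - 62      # high grade histology found only by imager read
extra_biopsies = 380 - 210  # additional biopsies generated by imager read
pairs = 55164               # split-sample pairs screened

per_1000 = 1000 * extra_cases / pairs  # extra detections per 1000 women
```

This reproduces the stated 1.29 additional cases of high grade squamous disease per 1000 women screened.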
Software for pre-processing Illumina next-generation sequencing short read sequences
2014-01-01
Background When compared to Sanger sequencing technology, next-generation sequencing (NGS) technologies are hindered by shorter sequence read length, higher base-call error rate, non-uniform coverage, and platform-specific sequencing artifacts. These characteristics lower the quality of their downstream analyses, e.g. de novo and reference-based assembly, by introducing sequencing artifacts and errors that may contribute to incorrect interpretation of data. Although many tools have been developed for quality control and pre-processing of NGS data, none of them provide flexible and comprehensive trimming options in conjunction with parallel processing to expedite pre-processing of large NGS datasets. Methods We developed ngsShoRT (next-generation sequencing Short Reads Trimmer), a flexible and comprehensive open-source software package written in Perl that provides a set of algorithms commonly used for pre-processing NGS short read sequences. We compared the features and performance of ngsShoRT with existing tools: CutAdapt, NGS QC Toolkit and Trimmomatic. We also compared the effects of using pre-processed short read sequences generated by different algorithms on de novo and reference-based assembly for three different genomes: Caenorhabditis elegans, Saccharomyces cerevisiae S288c, and Escherichia coli O157 H7. Results Several combinations of ngsShoRT algorithms were tested on publicly available Illumina GA II, HiSeq 2000, and MiSeq eukaryotic and bacteria genomic short read sequences with the focus on removing sequencing artifacts and low-quality reads and/or bases. Our results show that across three organisms and three sequencing platforms, trimming improved the mean quality scores of trimmed sequences. Using trimmed sequences for de novo and reference-based assembly improved assembly quality as well as assembler performance. 
In general, ngsShoRT outperformed comparable trimming tools in terms of trimming speed and improvement of de novo and reference-based assembly as measured by assembly contiguity and correctness. Conclusions Trimming of short read sequences can improve the quality of de novo and reference-based assembly and assembler performance. The parallel processing capability of ngsShoRT reduces trimming time and improves the memory efficiency when dealing with large datasets. We recommend combining sequencing artifacts removal, and quality score based read filtering and base trimming as the most consistent method for improving sequence quality and downstream assemblies. ngsShoRT source code, user guide and tutorial are available at http://research.bioinformatics.udel.edu/genomics/ngsShoRT/. ngsShoRT can be incorporated as a pre-processing step in genome and transcriptome assembly projects. PMID:24955109
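Quality-score based base trimming, one of the recommended pre-processing steps, is easy to sketch. A minimal Python version (ngsShoRT itself is written in Perl; this assumes Sanger/Phred+33 quality encoding and a hypothetical cutoff) trims low-quality bases from the 3' end of a FASTQ record:

```python
def trim_3prime(seq, qual, cutoff=20, offset=33):
    """Trim low-quality bases from the 3' end: drop trailing bases whose
    Phred score (ASCII code minus offset) is below the cutoff."""
    scores = [ord(ch) - offset for ch in qual]
    end = len(seq)
    while end > 0 and scores[end - 1] < cutoff:
        end -= 1
    return seq[:end], qual[:end]

# Hypothetical read: 'I' encodes Q40, '#' encodes Q2 in Phred+33.
seq, qual = trim_3prime("ACGTACGT", "IIIIII##")
```

Combined with adapter/artifact removal and whole-read quality filtering, this is the kind of step the benchmark shows improving downstream assembly quality.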
Automated quality checks on repeat prescribing.
Rogers, Jeremy E; Wroe, Christopher J; Roberts, Angus; Swallow, Angela; Stables, David; Cantrill, Judith A; Rector, Alan L
2003-01-01
BACKGROUND: Good clinical practice in primary care includes periodic review of repeat prescriptions. Markers of prescriptions that may need review have been described, but manually checking all repeat prescriptions against the markers would be impractical. AIM: To investigate the feasibility of computerising the application of repeat prescribing quality checks to electronic patient records in United Kingdom (UK) primary care. DESIGN OF STUDY: Software performance test against benchmark manual analysis of cross-sectional convenience sample of prescribing documentation. SETTING: Three general practices in Greater Manchester, in the north west of England, during a 4-month period in 2001. METHOD: A machine-readable drug information resource, based on the British National Formulary (BNF) as the 'gold standard' for valid drug indications, was installed in three practices. Software raised alerts for each repeat prescribed item where the electronic patient record contained no valid indication for the medication. Alerts raised by the software in two practices were analysed manually. Clinical reaction to the software was assessed by semi-structured interviews in three practices. RESULTS: There was no valid indication in the electronic medical records for 14.8% of repeat prescribed items. Sixty-two per cent of all alerts generated were incorrect. Forty-three per cent of all incorrect alerts were due to errors in the drug information resource, 44% to locally idiosyncratic clinical coding, 8% to the use of the BNF without adaptation as a gold standard, and 5% to the inability of the system to infer diagnoses that, although unrecorded, would be 'obvious' to a clinician reading the record. The interviewed clinicians supported the goals of the software.
CONCLUSION: Using electronic records for secondary decision support purposes will benefit from (and may require) both more consistent electronic clinical data collection across multiple sites, and reconciling clinicians' willingness to infer unstated but 'obvious' diagnoses with the machine's inability to do the same. PMID:14702902
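The alerting logic described is essentially set membership: for each repeat item, check whether any coded diagnosis in the record appears among the drug's valid indications. A toy sketch with hypothetical drug names and codes (the real system used a BNF-derived resource against UK clinical codes):

```python
def indication_alerts(repeat_items, record_codes, indications):
    """Return repeat-prescribed drugs with no valid coded indication in
    the patient's record (candidates for clinical review)."""
    codes = set(record_codes)
    return [drug for drug in repeat_items
            if not codes & indications.get(drug, set())]

# Hypothetical drug -> indication-code map and a patient's coded problems.
knowledge = {"metformin": {"C10E"}, "salbutamol": {"H33", "H32"}}
alerts = indication_alerts(
    ["metformin", "salbutamol"],  # repeat prescribed items
    ["H33", "G20"],               # coded problems on the record
    knowledge)
```

The study's false-alert breakdown shows where this simple logic fails in practice: errors in the knowledge base, idiosyncratic local coding, and diagnoses that are obvious to a clinician but never coded.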
Thompson, G Brian; Fletcher-Flinn, Claire M; Wilson, Kathryn J; McKay, Michael F; Margrain, Valerie G
2015-03-01
Predictions from theories of the processes of word reading acquisition have rarely been tested against evidence from exceptionally early readers. The theories of Ehri, Share, and Byrne, and an alternative, Knowledge Sources theory, were so tested. The former three theories postulate that full development of context-free letter sounds and awareness of phonemes are required for normal acquisition, while the claim of the alternative is that with or without such, children can use sublexical information from their emerging reading vocabularies to acquire word reading. Results from two independent samples of children aged 3-5, and 5 years, with mean word reading levels of 7 and 9 years respectively, showed underdevelopment of their context-free letter sounds and phoneme awareness, relative to their word reading levels and normal comparison samples. Despite such underdevelopment, these exceptional readers engaged in a form of phonological recoding that enabled pseudoword reading, at the level of older-age normal controls matched on word reading level. Moreover, in the 5-year-old sample further experiments showed that, relative to normal controls, they had a bias toward use of sublexical information from their reading vocabularies for phonological recoding of heterophonic pseudowords with irregular consistent spelling, and were superior in accessing word meanings independently of phonology, although only if the readers were without exposure to explicit phonics. The three theories were less satisfactory than the alternative theory in accounting for the learning of the exceptionally early readers. Copyright © 2014 Elsevier B.V. All rights reserved.
Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development
1986-10-01
parameter, sample size and fatigue test duration. The required inputs are (1) the residual strength Weibull shape parameter (ALPR) and (2) the fatigue life Weibull shape parameter (ALPL), prompted for by an interactive FORTRAN fragment:

      WRITE(*,1)
    1 FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
      READ(*,*) ALPR
      ALPRI = 1.0/ALPR
      WRITE(*,2)
    2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
      READ(*,*) ALPL
      ALPLI = 1.0/ALPL
      WRITE(*,3)
    3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
      READ(*,*) N
      AN = N
      WRITE(*,4)
    4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
      READ(*,*) T
      RALP = ALPL/ALPR
      ARGR = 1
RAPSearch: a fast protein similarity search tool for short reads
2011-01-01
Background Next Generation Sequencing (NGS) is producing enormous corpuses of short DNA reads, affecting emerging fields like metagenomics. Protein similarity search--a key step to achieve annotation of protein-coding genes in these short reads, and identification of their biological functions--faces daunting challenges because of the very sizes of the short read datasets. Results We developed a fast protein similarity search tool RAPSearch that utilizes a reduced amino acid alphabet and suffix array to detect seeds of flexible length. For short reads (translated in 6 frames) we tested, RAPSearch achieved ~20-90 times speedup as compared to BLASTX. RAPSearch missed only a small fraction (~1.3-3.2%) of BLASTX similarity hits, but it also discovered additional homologous proteins (~0.3-2.1%) that BLASTX missed. By contrast, BLAT, a tool that is even slightly faster than RAPSearch, had significant loss of sensitivity as compared to RAPSearch and BLAST. Conclusions RAPSearch is implemented as open-source software and is accessible at http://omics.informatics.indiana.edu/mg/RAPSearch. It enables faster protein similarity search. The application of RAPSearch in metagenomics has also been demonstrated. PMID:21575167
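A reduced amino acid alphabet collapses biochemically similar residues into one symbol, so seeds still match across conservative substitutions. A sketch with a hypothetical grouping (not RAPSearch's actual alphabet, which also uses a suffix array rather than a hash index) builds a k-mer seed index over reduced strings:

```python
# Hypothetical reduced alphabet: similar residues share one group letter.
GROUPS = {"A": "A", "G": "A", "S": "A", "T": "A",
          "I": "I", "L": "I", "V": "I", "M": "I",
          "F": "F", "Y": "F", "W": "F",
          "K": "K", "R": "K", "H": "K",
          "D": "D", "E": "D", "N": "D", "Q": "D",
          "C": "C", "P": "P"}

def reduce_seq(seq):
    return "".join(GROUPS.get(ch, "X") for ch in seq)

def seed_index(protein, k=3):
    """Index every reduced k-mer of the reference protein."""
    red = reduce_seq(protein)
    idx = {}
    for i in range(len(red) - k + 1):
        idx.setdefault(red[i:i + k], []).append(i)
    return idx

def seed_hits(read, index, k=3):
    """(read_pos, ref_pos) pairs where reduced k-mers match."""
    red = reduce_seq(read)
    return [(i, j)
            for i in range(len(red) - k + 1)
            for j in index.get(red[i:i + k], [])]

ref = "MKLVDE"
idx = seed_index(ref)
hits = seed_hits("MRIVEE", idx)  # conservative substitutions still seed
```

Both sequences reduce to the same string here, so every k-mer seeds despite four residue-level differences; a full search would extend each seed into a gapped alignment.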
msgbsR: An R package for analysing methylation-sensitive restriction enzyme sequencing data.
Mayne, Benjamin T; Leemaqz, Shalem Y; Buckberry, Sam; Rodriguez Lopez, Carlos M; Roberts, Claire T; Bianco-Miotto, Tina; Breen, James
2018-02-01
Genotyping-by-sequencing (GBS) or restriction-site associated DNA marker sequencing (RAD-seq) is a practical and cost-effective method for analysing large genomes from high diversity species. This method of sequencing, coupled with methylation-sensitive enzymes (often referred to as methylation-sensitive restriction enzyme sequencing or MRE-seq), is an effective tool to study DNA methylation in parts of the genome that are inaccessible in other sequencing techniques or are not annotated in microarray technologies. Current software tools do not fulfil all methylation-sensitive restriction sequencing assays for determining differences in DNA methylation between samples. To fill this computational need, we present msgbsR, an R package that contains tools for the analysis of methylation-sensitive restriction enzyme sequencing experiments. msgbsR can be used to identify and quantify read counts at methylated sites directly from alignment files (BAM files) and enables verification of restriction enzyme cut sites with the correct recognition sequence of the individual enzyme. In addition, msgbsR assesses DNA methylation based on read coverage, similar to RNA sequencing experiments, rather than methylation proportion and is a useful tool in analysing differential methylation on large populations. The package is fully documented and available freely online as a Bioconductor package ( https://bioconductor.org/packages/release/bioc/html/msgbsR.html ).
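Cut-site verification amounts to checking that each read's 5' start coincides with the enzyme's recognition sequence in the reference before counting it. A pure-Python sketch with hypothetical data (msgbsR itself operates on BAM alignments; CCGG is the MspI/HpaII recognition sequence used as an example):

```python
def count_cut_site_reads(genome, read_starts, recognition="CCGG"):
    """Count reads whose 5' start falls at a verified restriction
    recognition sequence in the reference."""
    counts = {}
    for pos in read_starts:
        if genome[pos:pos + len(recognition)] == recognition:
            counts[pos] = counts.get(pos, 0) + 1  # verified site
        # reads not starting at a recognition site are discarded
    return counts

# Hypothetical reference with CCGG sites at positions 3 and 12.
genome = "ATACCGGTTAGACCGGA"
starts = [3, 3, 12, 7]  # read 5' positions reported by the aligner
counts = count_cut_site_reads(genome, starts)
```

The resulting per-site read counts are what msgbsR models for differential methylation, analogously to gene-level counts in RNA-seq.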
BOREAS ECMWF 6-Hour Analysis and Forecast Data
NASA Technical Reports Server (NTRS)
Viterbo, Pedro; Hall, Forrest G. (Editor); Newcommer, Jeffrey A. (Editor); Betts, Alan; Strub, Richard
2000-01-01
In cooperation with BOREAS atmospheric research efforts, the ECMWF agreed to provide BOREAS with a customized subset of its 6-hourly forecast data. This data set contains parameters from three ECMWF data products in GRIB format: Surface and Diagnostic Fields, Supplemental Fields, and Extension Data. Sample software and information are provided to assist in reading the data files. Temporally, the atmospheric parameters are available for the four main synoptic hours of 00, 06, 12, and 18 UTC from 1994 to 1996. Spatially, the data are stored in a 0.5- by 0.5-degree latitude/longitude grid. To cover the entire BOREAS study area, the grid extends from 48 to 62 degrees latitude and -92 to -114 degrees longitude. The data are stored in binary data representation known as FM 92 GRIB. Due to the complexity of the content and format of this data set, users are advised to read Sections 6, 7, 8, and 14 before using data. Based on agreements between BOREAS and ECMWF, users may legally obtain and use these data only by having a set of the BOREAS CD-ROMs that contain the data. Possession or use of these data under any other circumstance is prohibited. See Sections 11.3 and 20.4 for details.
Nakazato, Takeru; Bono, Hidemasa
2017-01-01
It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalysis. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. PMID:28449062
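The per-read quality values that FastQC-style tools summarise come from decoding FASTQ quality strings, which in the common Phred+33 encoding map each character to a score of `ord(char) - 33`. A minimal sketch of that decoding (illustrative only, not FastQC's implementation):

```python
# Sketch of the core quality computation behind FastQC-style summaries:
# decode Phred+33 quality characters and average them per read.

def phred33_scores(quality_string):
    """Convert a FASTQ quality string (Phred+33) to integer scores."""
    return [ord(c) - 33 for c in quality_string]

def mean_quality(quality_string):
    """Average Phred score of one read's quality string."""
    scores = phred33_scores(quality_string)
    return sum(scores) / len(scores)

# In Phred+33, '!' encodes Q0 and 'I' encodes Q40.
print(phred33_scores("!I"))   # → [0, 40]
print(mean_quality("IIII"))   # → 40.0
```

Aggregating such means across all reads of an experiment yields the kind of per-experiment quality value the authors computed at SRA scale.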
Automating Physical Database Design: An Extensible Approach
1993-03-01
Schonberg. Tom Cheatham of Harvard University and Software Options provided much encouragement and support, as did Glenn Holloway, Judy Townley, and Mike...throughout, and also helped by reading drafts of a conference paper that reported earlier stages of this work (as did Glenn Holloway and Judy Townley
Summary of Research 1998, Department of Electrical and Computer Engineering.
1999-08-01
Channel Interference," Master’s Thesis, Naval Postgraduate School, December 1998. Erdogan, V., "Time Domain Simulation of MPSK Communications System...1998, several new user interface screens have been designed. Software for reading data from analog tape has been completed and interfaced to the
Adaptive Technology that Provides Access to Computers. DO-IT Program.
ERIC Educational Resources Information Center
Washington Univ., Seattle.
This brochure describes the different types of barriers individuals with mobility impairments, blindness, low vision, hearing impairments, and specific learning disabilities face in providing computer input, interpreting output, and reading documentation. The adaptive hardware and software that has been developed to provide functional alternatives…
Software Manuals: Where Instructional Design and Technical Writing Join Forces.
ERIC Educational Resources Information Center
Thurston, Walter, Ed.
1986-01-01
Presents highlights from a panel discussion by well known San Francisco Bay area documentation writers, instructional designers, and human performance technologists. Three issues on user performance and documentation are addressed: whether people avoid reading user manuals and why; major human factors influencing documentation use; and…
ULFEM time series analysis package
Karl, Susan M.; McPhee, Darcy K.; Glen, Jonathan M. G.; Klemperer, Simon L.
2013-01-01
This manual describes how to use the Ultra-Low-Frequency ElectroMagnetic (ULFEM) software package. Casual users can read the quick-start guide and will probably not need any more information than this. For users who may wish to modify the code, we provide further description of the routines.
Introduction to SmartBooks. Report 23-93.
ERIC Educational Resources Information Center
Kopec, Danny; Wood, Carol
Humankind has become accustomed to reading and learning from printed books. The computer offers us the possibility to exploit another medium whose key advantage is flexibility through extensive memory, computational speed, and versatile representational means. Specifically, we have the HyperCard application, an integrated piece of software, with…
Understanding the Requirements for Open Source Software
2009-06-17
GNOME and K Development Environment (KDE) for end-user interfaces, the Eclipse and NetBeans interactive development environments for Java-based Web...17 4.1. Informal Post-hoc Assertion of OSS Requirements vs. Requirements Elicitation...18 4.2. Requirements Reading, Sense-making, and Accountability vs. Requirements Analysis
ERIC Educational Resources Information Center
Hale, Andrea D.; Skinner, Christopher H.; Wilhoit, Brian; Ciancio, Dennis; Morrow, Jennifer A.
2012-01-01
Maze and reading comprehension rate measures are calculated by using measures of reading speed and measures of accuracy (i.e., correctly selected words or answers). In sixth- and seventh-grade samples, we found that the measures of reading speed embedded within our Maze measures accounted for 50% and 39% of broad reading score (BRS) variance,…
Cheung, Celeste H.M.; Wood, Alexis C.; Paloyelis, Yannis; Arias-Vasquez, Alejandro; Buitelaar, Jan K.; Franke, Barbara; Miranda, Ana; Mulas, Fernando; Rommelse, Nanda; Sergeant, Joseph A.; Sonuga-Barke, Edmund J.; Faraone, Stephen V.; Asherson, Philip; Kuntsi, Jonna
2012-01-01
Background Twin studies using both clinical and population-based samples suggest that the frequent co-occurrence of attention deficit hyperactivity disorder (ADHD) and reading ability/disability (RD) is largely driven by shared genetic influences. While both disorders are associated with lower IQ, recent twin data suggest that the shared genetic variability between reading difficulties and ADHD inattention symptoms is largely independent from genetic influences contributing to general cognitive ability. The current study aimed to extend the previous findings that were based on rating scale measures in a population sample by examining the generalizability of the findings to a clinical population, and by measuring reading difficulties both with a rating scale and with an objective task. We therefore investigated the familial relationships between ADHD, reading difficulties and IQ in a sample of individuals diagnosed with ADHD combined type, their siblings and control sibling pairs. Methods We ran multivariate familial models on data from 1789 individuals at ages 6 to 19. Reading difficulties were measured with both rating scale and an objective task. IQ was obtained using the Wechsler Intelligence Scales (WISC-III / WAIS-III). Results Significant phenotypic (0.2–0.4) and familial (0.3–0.5) correlations were observed among ADHD, reading difficulties and IQ. Yet 53% to 72% of the overlapping familial influences between ADHD and reading difficulties were not shared with IQ. Conclusions Our finding that familial influences shared with general cognitive ability, though present, do not account for the majority of the overlapping familial influences on ADHD and reading difficulties extends previous findings from a population-based study to a clinically-ascertained sample with combined type ADHD. PMID:22324316
Design ATE systems for complex assemblies
NASA Astrophysics Data System (ADS)
Napier, R. S.; Flammer, G. H.; Moser, S. A.
1983-06-01
The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is immune to any need to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.
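The compensation for repeatable system errors such as cabling losses that the abstract mentions is commonly implemented as a calibration table plus interpolation. A hedged Python sketch of that pattern; the frequencies, loss values, and function names are illustrative, not from any described ATE system:

```python
# Sketch of compensating a raw reading for a repeatable system error
# (e.g. cabling loss) using linear interpolation over a calibration
# table measured once during test-station setup. All numbers are
# illustrative.

from bisect import bisect_left

# (frequency in MHz, cable loss in dB)
CAL_TABLE = [(100, 0.5), (200, 0.9), (400, 1.6), (800, 2.8)]

def cable_loss_db(freq_mhz):
    """Linearly interpolate cable loss at freq_mhz from CAL_TABLE."""
    freqs = [f for f, _ in CAL_TABLE]
    i = bisect_left(freqs, freq_mhz)
    if i == 0:
        return CAL_TABLE[0][1]
    if i == len(CAL_TABLE):
        return CAL_TABLE[-1][1]
    (f0, l0), (f1, l1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    return l0 + (l1 - l0) * (freq_mhz - f0) / (f1 - f0)

def corrected_power_dbm(raw_dbm, freq_mhz):
    """Add back the loss between the unit under test and the meter."""
    return raw_dbm + cable_loss_db(freq_mhz)

print(cable_loss_db(300))              # interpolated between 200 and 400 MHz
print(corrected_power_dbm(-10.0, 300)) # raw reading corrected for cable loss
```

Because the correction is applied identically to every unit, it contributes to the constant repeatability the abstract describes.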
St Clair, Michelle C; Durkin, Kevin; Conti-Ramsden, Gina; Pickles, Andrew
2010-03-01
Individuals with a history of specific language impairment (SLI) often have subsequent problems with reading skills, but there have been some discrepant findings as to the developmental time course of these skills. This study investigates the developmental trajectories of reading skills over a 9-year time-span (from 7 to 16 years of age) in a large sample of individuals with a history of SLI. Relationships among reading skills, autistic symptomatology, and language-related abilities were also investigated. The results indicate that both reading accuracy and comprehension are deficient but that the development of these skills progresses in a consistently parallel fashion to what would be expected from a normative sample of same age peers. Language-related abilities were strongly associated with reading skills. Unlike individuals with SLI only, those with SLI and additional autistic symptomatology had adequate reading accuracy but did not differ from the individuals with SLI only in reading comprehension. They exhibited a significant gap between what they could read and what they could understand when reading. These findings provide strong evidence that individuals with SLI experience continued, long-term deficits in reading skills from childhood to adolescence.
The association between arithmetic and reading performance in school: A meta-analytic study.
Singer, Vivian; Strasser, Kathernie
2017-12-01
Many studies of school achievement find a significant association between reading and arithmetic achievement. The magnitude of the association varies widely across the studies, but the sources of this variation have not been identified. The purpose of this paper is to examine the magnitude and determinants of the relation between arithmetic and reading performance during elementary and middle school years. We meta-analyzed 210 correlations between math and reading measures, coming from 68 independent samples (the overall sample size was 58923 participants). The meta-analysis yielded an average correlation of 0.55 between math and reading measures. Among the moderators tested, only transparency of orthography and use of timed or untimed tests were significant in explaining the size of the correlation, with the largest correlations observed between timed measures of arithmetic and reading and between math and reading in opaque orthographies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
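An average correlation such as the reported 0.55 is conventionally obtained by Fisher z-transforming each study's r, averaging the z values weighted by sample size, and back-transforming. A hedged sketch of that standard procedure; the study correlations and sample sizes below are invented for illustration, not the meta-analysis data:

```python
# Sketch of the standard meta-analytic averaging of correlations:
# Fisher z-transform each r, average weighted by (n - 3), then
# back-transform. The (r, n) pairs are illustrative only.

import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform: r = tanh(z)."""
    return math.tanh(z)

def meta_mean_correlation(studies):
    """studies: list of (r, n) pairs; returns the weighted mean r."""
    num = sum((n - 3) * fisher_z(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return inverse_fisher_z(num / den)

studies = [(0.45, 120), (0.60, 300), (0.55, 80)]
print(round(meta_mean_correlation(studies), 3))
```

The (n - 3) weights come from the approximate sampling variance of Fisher's z, 1/(n - 3), so larger samples count for more.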
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-10-14
EMTA-NLA is a computer program for analyzing the nonlinear stiffness, strength, and thermo-elastic properties of discontinuous fiber composite materials. Discontinuous fiber composites are chopped-fiber reinforced polymer materials that are formed by injection molding or compression molding techniques. The fibers tend to align during forming as the composite flows and fills the mold. EMTA-NLA can read the fiber orientation data from the molding software, Autodesk Moldflow Plastics Insight, and calculate the local material properties for accurately analyzing the warpage, stiffness, and strength of the as-formed composite part using the commercial NLA software. Therefore, EMTA-NLA is a unique assembly of mathematical algorithms that provide a one-of-a-kind composites constitutive model that links these two powerful commercial software packages.
Omics Metadata Management Software v. 1 (OMMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible, extensible and easily installed and run by operators with general system administration and scripting language literacy.
Tevatron beam position monitor upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolbers, Stephen; Banerjee, B.; Barker, B.
2005-05-01
The Tevatron Beam Position Monitor (BPM) readout electronics and software have been upgraded to improve measurement precision, functionality and reliability. The original system, designed and built in the early 1980's, became inadequate for current and future operations of the Tevatron. The upgraded system consists of 960 channels of new electronics to process analog signals from 240 BPMs, new front-end software, new online and controls software, and modified applications to take advantage of the improved measurements and support the new functionality. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton position measurements. Measurements using the new system are presented that demonstrate its improved resolution and overall performance.
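The position estimate behind stripline BPM readout is conventionally the difference-over-sum of the signals on opposing pickup plates, scaled by a geometry-dependent calibration factor. A hedged sketch of that standard formula; the scale factor value below is illustrative, not a Tevatron calibration constant:

```python
# Sketch of the standard stripline BPM position estimate: beam offset
# is proportional to (A - B) / (A + B) for opposing pickup plates.
# The scale factor k_mm is illustrative; real systems calibrate it
# from the pickup geometry.

def bpm_position_mm(signal_a, signal_b, k_mm=26.0):
    """Estimate transverse beam position from opposing plate signals."""
    total = signal_a + signal_b
    if total == 0:
        raise ValueError("no signal on either plate")
    return k_mm * (signal_a - signal_b) / total

print(bpm_position_mm(1.0, 1.0))  # equal signals: centered beam, 0.0
print(bpm_position_mm(3.0, 1.0))  # stronger signal on plate A: positive offset
```

Reading both ends of each directional pickup, as the upgraded system does, provides an A/B pair per species so proton and antiproton positions can be computed simultaneously.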
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver.
This document contains released reading comprehension passages, test items, and writing prompts from the Colorado Student Assessment Program for 2001. The sample questions and prompts are included without answers or examples of student responses. Test materials are included for: (1) Grade 4 Reading and Writing; (2) Grade 4 Lectura y Escritura…
Proceedings of the Twenty-Fourth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
2000-01-01
On December 1 and 2, the Software Engineering Laboratory (SEL), a consortium composed of NASA/Goddard, the University of Maryland, and CSC, held the 24th Software Engineering Workshop (SEW), the last of the millennium. Approximately 240 people attended the 2-day workshop. Day 1 was composed of four sessions: International Influence of the Software Engineering Laboratory; Object Oriented Testing and Reading; Software Process Improvement; and Space Software. For the first session, three internationally known software process experts discussed the influence of the SEL with respect to software engineering research. In the Space Software session, prominent representatives from three different NASA sites- GSFC's Marti Szczur, the Jet Propulsion Laboratory's Rick Doyle, and the Ames Research Center IV&V Facility's Lou Blazy- discussed the future of space software in their respective centers. At the end of the first day, the SEW sponsored a reception at the GSFC Visitors' Center. Day 2 also provided four sessions: Using the Experience Factory; A panel discussion entitled "Software Past, Present, and Future: Views from Government, Industry, and Academia"; Inspections; and COTS. The day started with an excellent talk by CSC's Frank McGarry on "Attaining Level 5 in CMM Process Maturity." Session 2, the panel discussion on software, featured NASA Chief Information Officer Lee Holcomb (Government), our own Jerry Page (Industry), and Mike Evangelist of the National Science Foundation (Academia). Each presented his perspective on the most important developments in software in the past 10 years, in the present, and in the future.
ERIC Educational Resources Information Center
Turner, Franklin Dickerson
2012-01-01
The author examined the effectiveness of 2 fluency-oriented reading programs on improving reading fluency for an ethnically diverse sample of second-grade students. The first approach is Fluency-Oriented Reading Instruction (S. A. Stahl & K. Heubach, 2005), which incorporates the repeated reading of a grade-level text over the course of an…
ERIC Educational Resources Information Center
Clarke, Mark A.
1980-01-01
Examines a sampling of current ESL reading instruction practices, addressing the concern that the lack of a generally accepted theory of L2 reading constitutes a major obstacle to teaching and testing ESL reading skills. Summarizes the results of two studies and discusses their implications for ESL teachers. (MES)
ERIC Educational Resources Information Center
Solís, Michael; Vaughn, Sharon; Scammacca, Nancy
2015-01-01
This experimental study examined the efficacy of a multicomponent reading intervention compared to a control condition on the reading comprehension of adolescent students with low reading comprehension (more than 1½ standard deviations below normative sample). Ninth-grade students were randomly assigned to treatment (n = 25) and comparison (n =…
The Indicating Factors of Oral Reading Fluency of Monolingual and Bilingual Children in Egypt
ERIC Educational Resources Information Center
Hussien, Abdelaziz M.
2014-01-01
This study examined oral reading fluency (ORF) of bilingual and monolingual students. The author selected a sample of 510 (258 males and 252 females) native Arabic-speaking sixth-graders (62 bilinguals and 448 monolinguals) in Egypt. The purposes were: (a) to examine oral reading rate, oral reading accuracy, prosody, and oral reading comprehension…
ERIC Educational Resources Information Center
McCreary, John J.; Marchant, Gregory J.
2017-01-01
The relationship between reading and empathy was explored. Controlling for GPA and gender, reading variables were hypothesized as related to empathy; the relationship was expected to differ for males and females. For the complete sample, affective components were related to GPA but not reading. Perspective taking was related to reading…
Samur, Dalya; Tops, Mattie; Koole, Sander L
2018-02-01
Prior experiments indicated that reading literary fiction improves mentalising performance relative to reading popular fiction, non-fiction, or not reading. However, the experiments had relatively small sample sizes and hence low statistical power. To address this limitation, the present authors conducted four high-powered replication experiments (combined N = 1006) testing the causal impact of reading literary fiction on mentalising. Relative to the original research, the present experiments used the same literary texts in the reading manipulation; the same mentalising task; and the same kind of participant samples. Moreover, one experiment was pre-registered as a direct replication. In none of the experiments did reading literary fiction have any effect on mentalising relative to control conditions. The results replicate earlier findings that familiarity with fiction is positively correlated with mentalising. Taken together, the present findings call into question whether a single session of reading fiction leads to immediate improvements in mentalising.
Design and implementation of Ada programs to facilitate automated testing
NASA Technical Reports Server (NTRS)
Dean, Jack; Fox, Barry; Oropeza, Michael
1991-01-01
An automated method used to test the software components of COMPASS, an interactive computer-aided scheduling system, is presented. Each package of this system introduces a private type and constructs instances of that type, along with read and write routines for that type. Generic procedures that generate test drivers for these functions are presented, showing how a test driver can read from a test data file the functions to call, the arguments for those functions, the anticipated result, and whether the function should raise an exception for the given arguments.
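The test drivers described, which read function names, arguments, expected results, and expected exceptions from a data file, follow the table-driven testing pattern. A Python stand-in for the Ada generics (the functions and case format here are invented for illustration):

```python
# Sketch of the table-driven test-driver pattern: each case names a
# function, its arguments, the anticipated result, and whether an
# exception is expected. In the described system these cases come
# from a test data file; here they are inline for brevity.

def add(a, b):
    return a + b

def div(a, b):
    return a / b

FUNCTIONS = {"add": add, "div": div}

# (function name, args, expected result, expect_exception)
CASES = [
    ("add", (2, 3), 5, False),
    ("div", (6, 2), 3.0, False),
    ("div", (1, 0), None, True),   # division by zero should raise
]

def run_cases(cases):
    """Return a pass/fail flag per case."""
    results = []
    for name, args, expected, expect_exc in cases:
        try:
            got = FUNCTIONS[name](*args)
            results.append(not expect_exc and got == expected)
        except Exception:
            results.append(expect_exc)
    return results

print(run_cases(CASES))  # → [True, True, True]
```

The appeal of the pattern, in Ada or Python, is that new tests are data rather than code: extending coverage means appending lines to the test data file.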
Raster-scanning serial protein crystallography using micro- and nano-focused synchrotron beams
Coquelle, Nicolas; Brewster, Aaron S.; Kapp, Ulrike; Shilova, Anastasya; Weinhausen, Britta; Burghammer, Manfred; Colletier, Jacques-Philippe
2015-01-01
High-resolution structural information was obtained from lysozyme microcrystals (20 µm in the largest dimension) using raster-scanning serial protein crystallography on micro- and nano-focused beamlines at the ESRF. Data were collected at room temperature (RT) from crystals sandwiched between two silicon nitride wafers, thereby preventing their drying, while limiting background scattering and sample consumption. In order to identify crystal hits, new multi-processing and GUI-driven Python-based pre-analysis software was developed, named NanoPeakCell, that was able to read data from a variety of crystallographic image formats. Further data processing was carried out using CrystFEL, and the resultant structures were refined to 1.7 Å resolution. The data demonstrate the feasibility of RT raster-scanning serial micro- and nano-protein crystallography at synchrotrons and validate it as an alternative approach for the collection of high-resolution structural data from micro-sized crystals. Advantages of the proposed approach are its thriftiness, its handling-free nature, the reduced amount of sample required, the adjustable hit rate, the high indexing rate and the minimization of background scattering. PMID:25945583
AMBER instrument control software
NASA Astrophysics Data System (ADS)
Le Coarer, Etienne P.; Zins, Gerard; Gluck, Laurence; Duvert, Gilles; Driebe, Thomas; Ohnaka, Keiichi; Heininger, Matthias; Connot, Claus; Behrend, Jan; Dugue, Michel; Clausse, Jean Michel; Millour, Florentin
2004-09-01
AMBER (Astronomical Multiple BEam Recombiner) is a 3-aperture interferometric recombiner operating between 1 and 2.5 um, for the Very Large Telescope Interferometer (VLTI). The control software of the instrument, based on the VLT Common Software, has been written to comply with specific features of the AMBER hardware, such as the infrared detector readout modes or piezo stage drivers, as well as with the very specific operation modes of an interferometric instrument. In this respect, the AMBER control software was designed to ensure that all operations, from the preparation of the observations to the control/command of the instrument during the observations, would be kept as simple as possible for the users and operators, opening the use of an interferometric instrument to the largest community of astronomers. Particular attention was paid to internal checks and calibration procedures, both to evaluate data quality in real time and to improve the success of long-term UV-plane coverage observations.
Solving Equations of Multibody Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Lim, Christopher
2007-01-01
Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing complex mechanical systems and in developing software for their control. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.
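The read-then-construct pattern described for Darts++, parsing a model data file into a body tree before building the solver, can be sketched generically. The text format, field names, and `Body` class below are invented for illustration; Darts++'s actual model file format is not given in the source:

```python
# Generic sketch of reading a multibody model description into a tree
# of body records, the step that precedes constructing the dynamics
# algorithm. Format and names are illustrative, not Darts++'s.

from dataclasses import dataclass, field

@dataclass
class Body:
    name: str
    mass: float
    children: list = field(default_factory=list)

def parse_model(text):
    """Each line: body_name mass parent_name ('-' marks the base body)."""
    bodies = {}
    root = None
    for line in text.strip().splitlines():
        name, mass, parent = line.split()
        body = Body(name, float(mass))
        bodies[name] = body
        if parent == "-":
            root = body
        else:
            bodies[parent].children.append(body)
    return root

MODEL = """\
base 10.0 -
link1 2.5 base
link2 1.5 link1
"""

root = parse_model(MODEL)
print(root.name, [c.name for c in root.children])  # → base ['link1']
```

Keeping the topology in an explicit tree of objects like this is what makes run-time reconfiguration (one of the Darts++ features noted above) natural in an object-oriented design.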
How reliable is computerized assessment of readability?
Mailloux, S L; Johnson, M E; Fisher, D G; Pettibone, T J
1995-01-01
To assess the consistency and comparability of readability software programs, four software programs (Corporate Voice, Grammatix IV, Microsoft Word for Windows, and RightWriter) were compared. Standard materials included 28 pieces of printed educational materials on human immunodeficiency virus/acquired immunodeficiency syndrome distributed nationally and the Gettysburg Address. Statistical analyses for the educational materials revealed that each of the three formulas assessed (Flesch-Kincaid, Flesch Reading Ease, and Gunning Fog Index) provided significantly different grade equivalent scores and that the Microsoft Word program provided significantly lower grade levels and was more inconsistent in the scores provided. For the Gettysburg Address, considerable variation was revealed among formulas, with the discrepancy being up to two grade levels. When averaging across formulas, there was a variation of 1.3 grade levels between the four software programs. Given the variation between formulas and programs, implications for decisions based on results of these software programs are provided.
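The readability formulas the four programs implement are published; the Flesch Reading Ease score is 206.835 − 1.015·(words/sentence) − 84.6·(syllables/word), and the Flesch-Kincaid grade level is 0.39·(words/sentence) + 11.8·(syllables/word) − 15.59. Much of the inter-program inconsistency the study found plausibly comes from how each program counts words, sentences, and especially syllables. A sketch with a deliberately crude syllable heuristic (the heuristic is the illustrative assumption; real programs use dictionaries and better rules):

```python
# The Flesch formulas are fixed; what varies between readability
# programs is mostly the counting of words, sentences and syllables.
# The vowel-group syllable counter below is a crude illustration.

import re

def count_syllables(word):
    """Naive heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores("The cat sat on the mat. It was happy.")
print(round(ease, 1), round(grade, 1))
```

Swapping in a different syllable counter shifts both scores, which is exactly the kind of formula-identical, implementation-dependent variation the study measured.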
Translator for Optimizing Fluid-Handling Components
NASA Technical Reports Server (NTRS)
Landon, Mark; Perry, Ernest
2007-01-01
A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.
Scanning fluorescence detector for high-throughput DNA genotyping
NASA Astrophysics Data System (ADS)
Rusch, Terry L.; Petsinger, Jeremy; Christensen, Carl; Vaske, David A.; Brumley, Robert L., Jr.; Luckey, John A.; Weber, James L.
1996-04-01
A new scanning fluorescence detector (SCAFUD) was developed for high-throughput genotyping of short tandem repeat polymorphisms (STRPs). Fluorescent dyes are incorporated into relatively short DNA fragments via polymerase chain reaction (PCR) and are separated by electrophoresis in short, wide polyacrylamide gels (144 lanes with well-to-read distances of 14 cm). Excitation light from an argon laser with primary lines at 488 and 514 nm is introduced into the gel through a fiber optic cable, dichroic mirror, and 40X microscope objective. Emitted fluorescent light is collected confocally through a second fiber. The confocal head is translated across the bottom of the gel at 0.5 Hz. The detection unit utilizes dichroic mirrors and band pass filters to direct light with 10 - 20 nm bandwidths to four photomultiplier tubes (PMTs). PMT signals are independently amplified with variable gain and then sampled at a rate of 2500 points per scan using a computer-based A/D board. LabVIEW software (National Instruments) is used for instrument operation. Currently, three fluorescent dyes (Fam, Hex and Rox) are simultaneously detected with peak detection wavelengths of 543, 567, and 613 nm, respectively. The detection limit for fluorescein-labeled primers is about 100 attomoles. Planned SCAFUD upgrades include rearrangement of laser head geometry, use of additional excitation lasers for simultaneous detection of more dyes, and the use of detector arrays instead of individual PMTs. Extensive software has been written for automatic analysis of SCAFUD images. The software enables background subtraction, band identification, multiple-dye signal resolution, lane finding, band sizing and allele calling. Whole genome screens are currently underway to search for loci influencing such complex diseases as diabetes, asthma, and hypertension. Seven production SCAFUDs are currently in operation.
Genotyping output for the coming year is projected to be about one million total genotypes (DNA samples × polymorphic markers) at a total cost of
Ogawa, Yasushi; Fawaz, Farah; Reyes, Candice; Lai, Julie; Pungor, Erno
2007-01-01
Parameter settings of a parallel line analysis procedure were defined by applying statistical analysis procedures to absorbance data from a cell-based potency bioassay for a recombinant adenovirus, Adenovirus 5 Fibroblast Growth Factor-4 (Ad5FGF-4). The parallel line analysis was performed with commercially available software, PLA 1.2. The software performs a Dixon outlier test on replicates of the absorbance data, performs linear regression analysis to define the linear region of the absorbance data, and tests parallelism between the linear regions of the standard and the sample. The width of the fiducial limit, expressed as a percent of the measured potency, was developed as a criterion for rejecting assay data, significantly improving the reliability of the assay results. With the linear range-finding criteria of the software set to a minimum of 5 consecutive dilutions and the best statistical outcome, and in combination with the fiducial limit width acceptance criterion of <135%, 13% of the assay results were rejected. With these criteria applied, the assay was found to be linear over the range of 0.25 to 4 relative potency units, defined as the potency of the sample normalized to the potency of the Ad5FGF-4 standard containing 6 x 10(6) adenovirus particles/mL. The overall precision of the assay was estimated to be 52%. Without the fiducial limit width criterion, the assay results were not linear over this range, and an overall precision of 76% was calculated from the data. An absolute unit of potency for the assay was defined, using the parallel line analysis procedure, as the amount of Ad5FGF-4 that results in an absorbance value that is 121% of the average absorbance of wells containing cells not infected with the adenovirus.
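The acceptance rule described above can be sketched as follows. This is an illustrative sketch, not PLA 1.2 itself: the function names and inputs are hypothetical, and the fiducial limits themselves would come from the parallel-line analysis rather than being supplied directly.

```python
# Hypothetical helpers illustrating the fiducial-limit-width acceptance
# criterion from the abstract: reject an assay result when the width of
# the fiducial interval, as a percent of the measured potency, is >= 135%.

def fiducial_width_percent(lower: float, upper: float, potency: float) -> float:
    """Width of the fiducial interval expressed as a percent of potency."""
    return 100.0 * (upper - lower) / potency

def accept_assay(lower: float, upper: float, potency: float,
                 limit: float = 135.0) -> bool:
    """Accept the assay result only if the width criterion (<135%) is met."""
    return fiducial_width_percent(lower, upper, potency) < limit

# A measured relative potency of 1.0 with fiducial limits (0.6, 1.8)
# has a width of 120% and passes the criterion.
print(accept_assay(0.6, 1.8, 1.0))  # True
```

In practice the fiducial limits for a parallel-line assay are typically derived via Fieller's theorem; the sketch only encodes the rejection threshold applied afterwards.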
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than many subprograms, a reduced number of restrictions, and increased execution speed. The Ada Namelist Package reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the non-generic opening portion. The opening portion declares a variety of user-accessible constants, variables, and subprograms. The subprograms include procedures for initializing namelists for reading and for reading and writing strings, as well as functions for analyzing the content of the current dataset and diagnosing errors.
Two nested generic packages follow the opening portion. The first generic package contains procedures that read and write objects of scalar type. The second contains subprograms that read and write one- and two-dimensional arrays whose components are of scalar type and whose indices are of either of the two discrete types (integer or enumeration). Subprograms in the second package also read and write vector and matrix slices. The Ada Namelist ASCII text files are available on a 360k 5.25" floppy disk written on an IBM PC/AT running under the PC DOS operating system. The largest subprogram in the package requires 150k of memory. The package was developed using VAX Ada v. 1.5 under DEC VMS v. 4.5. It should be portable to any validated Ada compiler. The software was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
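The tolerant matching behavior described above (assignments in any order, mismatches allowed, no re-searching of the file per variable) can be sketched in Python. This is a minimal illustration under simplified assumptions, not the Ada package: the `read_namelist` helper and the file syntax shown (a stripped-down FORTRAN-style namelist group) are stand-ins for exposition.

```python
# Minimal sketch of tolerant namelist-style reading: the file is parsed
# once (no per-variable search), assignments may appear in any order,
# undeclared variables in the file are skipped, and declared variables
# missing from the file keep their program defaults.
import io

def read_namelist(stream, declared):
    """Parse 'name = value' lines once, updating only declared variables."""
    values = dict(declared)          # start from the program's defaults
    for line in stream:
        line = line.strip()
        if "=" not in line:
            continue                 # skip group markers like '&input' or '/'
        name, _, raw = line.partition("=")
        name = name.strip()
        if name in values:           # mismatches with the file are tolerated
            values[name] = type(values[name])(raw.strip())
    return values

text = io.StringIO("&input\n dt = 0.5\n unknown = 7\n/\n")
result = read_namelist(text, {"dt": 1.0, "steps": 100})
print(result)  # {'dt': 0.5, 'steps': 100}
```

The single-pass dictionary lookup mirrors the package's stated ability to avoid searching the entire input file for each variable.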
UrQt: an efficient software for the Unsupervised Quality trimming of NGS data.
Modolo, Laurent; Lerat, Emmanuelle
2015-04-29
Quality control is a necessary step of any Next Generation Sequencing analysis. Although customary, this step still requires manual intervention to empirically choose tuning parameters according to various quality statistics. Moreover, current quality control procedures that provide a "good quality" data set are not optimal and discard many informative nucleotides. To address these drawbacks, we present a new quality control method, implemented in the UrQt software, for Unsupervised Quality trimming of Next Generation Sequencing reads. Our trimming procedure relies on a well-defined probabilistic framework to detect the best segmentation between two segments of unreliable nucleotides framing a segment of informative nucleotides. Our software only requires one user-friendly parameter to define the minimal quality threshold (phred score) for a nucleotide to be considered informative, which is independent of both the experiment and the quality of the data. This procedure is implemented in C++ in an efficient and parallelized program with a low memory footprint. We tested the performance of UrQt against the best-known trimming programs on seven RNA and DNA sequencing experiments and demonstrated its optimality in the resulting tradeoff between the number of trimmed nucleotides and the quality objective. By finding the best segmentation to delimit a segment of good-quality nucleotides, UrQt greatly increases the number of reads and nucleotides that can be retained for a given quality objective. UrQt source files, binary executables for different operating systems, and documentation are freely available (under the GPLv3) at the following address: https://lbbe.univ-lyon1.fr/-UrQt-.html .
ERIC Educational Resources Information Center
Torppa, Minna; Eklund, Kenneth; Sulkunen, Sari; Niemi, Pekka; Ahonen, Timo
2018-01-01
The present study examined the gender gap in Program for International Student Assessment (PISA) Reading and mediators of the gender gap in a Finnish sample (n = 1,309). We examined whether the gender gap in PISA Reading performance can be understood via the effects of reading fluency, achievement behaviour (mastery orientation and task-avoidant…
Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M
2017-08-01
Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists as the availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five 18F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using standardized quantitative analysis software (MIMneuro) to generate whole-brain and region-of-interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined the visual and quantitative data ("VisQ Read"). Cohen kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from the initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations showed lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). The inclusion of quantitative information increases the consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
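The "Quantitative Read" decision rule stated in the abstract is simple enough to sketch directly. The region names below are hypothetical placeholders; only the rule itself (elevated when at least 2 of the 6 region-of-interest SUV ratios exceed 1.1) comes from the abstract.

```python
# Illustrative sketch of the Quantitative Read rule: a scan is called
# "elevated" when >= 2 of the 6 region-of-interest SUV ratios exceed 1.1.

def quantitative_read(suvr_by_roi, cutoff=1.1, min_regions=2):
    """Return 'elevated' if enough ROI SUV ratios exceed the cutoff."""
    n_above = sum(1 for v in suvr_by_roi.values() if v > cutoff)
    return "elevated" if n_above >= min_regions else "nonelevated"

# Hypothetical scan: two regions exceed the 1.1 cutoff.
scan = {"frontal": 1.15, "temporal": 1.08, "parietal": 1.12,
        "precuneus": 1.05, "cingulate": 1.02, "occipital": 0.98}
print(quantitative_read(scan))  # elevated
```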
Philosophers and Technologists: Vicarious and Virtual Knowledge Constructs
ERIC Educational Resources Information Center
McNeese, Beverly D.
2007-01-01
In an age of continual technological advancement, user-friendly software, and consumer demand for the latest upgraded gadget, the ethical and moral discoveries derived from a careful reading of fictional literature by college students are struggling in the American college classroom. Easy-access information systems, coinciding with the…
Thoughts on Information Literacy and the 21st Century Workplace.
ERIC Educational Resources Information Center
Beam, Walter R.
2001-01-01
Discusses changes in society that have led to literacy skills being a criterion for employment. Topics include reading; communication skills; writing; cognitive processes; math; computers, the Internet, and the information revolution; information needs and access; information cross-linking; information literacy; and hardware and software use. (LRW)
Optical Disc Technology for Information Management.
ERIC Educational Resources Information Center
Brumm, Eugenia K.
1991-01-01
This summary of the literature on document image processing from 1988-90 focuses on WORM (write once read many) technology and on rewritable (i.e., erasable) optical discs, and excludes CD-ROM. Highlights include vendors and products, standards, comparisons of storage media, software, legal issues, records management, indexing, and computer…
Multimedia Madness: Creating with a Purpose
ERIC Educational Resources Information Center
Bodley, Barb; Bremer, Janet
2004-01-01
High school students working in a project-driven environment create "projects with a purpose" that give younger students technology-based activities to help them practice skills in reading, math, spelling and science. An elective semester-long course using the Macromedia suite of programs with the objective of learning the software skills of…
Progression of a Data Visualization Assignment
ERIC Educational Resources Information Center
Adkins, Joni K.
2016-01-01
The growing popularity of data visualization due to increased amounts of data and easier-to-use software tools creates an information literacy skill gap for students. Students in an Information Technology Management graduate course were exposed to data visualization not only through their textbook reading but also through a data visualization…