Chang, Ho-Won; Sung, Youlboong; Kim, Kyoung-Ho; Nam, Young-Do; Roh, Seong Woon; Kim, Min-Soo; Jeon, Che Ok; Bae, Jin-Woo
2008-08-15
A crucial problem in the use of previously developed genome-probing microarrays (GPM) has been the inability to use uncultivated bacterial genomes to take advantage of the high sensitivity and specificity of GPM in microbial detection and monitoring. We show here a method, digital multiple displacement amplification (MDA), to amplify and analyze various genomes obtained from single uncultivated bacterial cells. We used 15 genomes from key microbes involved in dichloromethane (DCM)-dechlorinating enrichment as microarray probes to uncover the bacterial population dynamics of samples without PCR amplification. Genomic DNA was amplified from single cells of uncultured bacteria whose 16S rRNA genes showed 80.3-99.4% similarity to those of cultivated bacteria. The digital MDA-GPM method successfully monitored the dynamics of DCM-dechlorinating communities from different phases of enrichment status. Without a priori knowledge of microbial diversity, the digital MDA-GPM method could be designed to monitor most microbial populations in a given environmental sample.
USDA-ARS's Scientific Manuscript database
Consistent data across animal populations are required to inform genomic science aimed at finding important adaptive genetic variations. The ADAPTMap Digital Phenotype Collection- Prototype Method will yield a new procedure to provide consistent phenotypic data by digital enumeration of categorical ...
Li, Chunmei; Yu, Zhilong; Fu, Yusi; Pang, Yuhong; Huang, Yanyi
2017-04-26
We develop a novel single-cell-based platform through digital counting of amplified genomic DNA fragments, named multifraction amplification (mfA), to detect the copy number variations (CNVs) in a single cell. Amplification is required to acquire genomic information from a single cell, while introducing unavoidable bias. Unlike prevalent methods that directly infer CNV profiles from the pattern of sequencing depth, our mfA platform denatures and separates the DNA molecules from a single cell into multiple fractions of a reaction mix before amplification. By examining the sequencing result of each fraction for a specific fragment and applying a segment-merge maximum likelihood algorithm to the calculation of copy number, we digitize the sequencing-depth-based CNV identification and thus provide a method that is less sensitive to the amplification bias. In this paper, we demonstrate an mfA platform through multiple displacement amplification (MDA) chemistry. When performing the mfA platform, the noise of MDA is reduced; therefore, the resolution of single-cell CNV identification can be improved to 100 kb. We can also determine the genomic region free of allelic drop-out with the mfA platform, which is impossible for conventional single-cell amplification methods.
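The digital-counting idea behind fraction-based copy-number calling can be illustrated with a toy likelihood calculation (this is not the authors' segment-merge algorithm). Assuming complete denaturation, a locus at copy number c contributes 2c single strands that partition independently into m fractions, so the number of positive fractions is binomial; the sketch below picks the copy number that best explains the observed positives. All parameter values are made up for illustration.

```python
import math

def positive_fraction_prob(copies, n_fractions):
    """P(a given fraction receives >= 1 strand) when 2*copies denatured
    strands are distributed uniformly and independently."""
    return 1.0 - (1.0 - 1.0 / n_fractions) ** (2 * copies)

def ml_copy_number(k_positive, n_fractions, max_copies=10):
    """Copy number (1..max_copies) maximizing the binomial likelihood
    of seeing k_positive positive fractions (constant factors dropped)."""
    def log_lik(c):
        p = positive_fraction_prob(c, n_fractions)
        return (k_positive * math.log(p)
                + (n_fractions - k_positive) * math.log(1.0 - p))
    return max(range(1, max_copies + 1), key=log_lik)

# e.g. a locus seen in 4 of 24 fractions is best explained by 2 copies
print(ml_copy_number(k_positive=4, n_fractions=24))
```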
Lock, Martin; Alvira, Mauricio R; Chen, Shu-Jen; Wilson, James M
2014-04-01
Accurate titration of adeno-associated viral (AAV) vector genome copies is critical for ensuring correct and reproducible dosing in both preclinical and clinical settings. Quantitative PCR (qPCR) is the current method of choice for titrating AAV genomes because of the simplicity, accuracy, and robustness of the assay. However, issues with qPCR-based determination of self-complementary AAV vector genome titers, due to primer-probe exclusion through genome self-annealing or through packaging of prematurely terminated defective interfering (DI) genomes, have been reported. Alternative qPCR, gel-based, or Southern blotting titering methods have been designed to overcome these issues but may represent a backward step from standard qPCR methods in terms of simplicity, robustness, and precision. Droplet digital PCR (ddPCR) is a new PCR technique that directly quantifies DNA copies with an unparalleled degree of precision and without the need for a standard curve or for a high degree of amplification efficiency; all properties that lend themselves to the accurate quantification of both single-stranded and self-complementary AAV genomes. Here we compare a ddPCR-based AAV genome titer assay with a standard and an optimized qPCR assay for the titration of both single-stranded and self-complementary AAV genomes. We demonstrate absolute quantification of single-stranded AAV vector genomes by ddPCR with up to 4-fold increases in titer over a standard qPCR titration but with equivalent readout to an optimized qPCR assay. In the case of self-complementary vectors, ddPCR titers were on average 5-, 1.9-, and 2.3-fold higher than those determined by standard qPCR, optimized qPCR, and agarose gel assays, respectively. Droplet digital PCR-based genome titering was superior to qPCR in terms of both intra- and interassay precision and is more resistant to PCR inhibitors, a desirable feature for in-process monitoring of early-stage vector production and for vector genome biodistribution analysis in inhibitory tissues.
Enhanced sequencing coverage with digital droplet multiple displacement amplification
Sidore, Angus M.; Lan, Freeman; Lim, Shaun W.; Abate, Adam R.
2016-01-01
Sequencing small quantities of DNA is important for applications ranging from the assembly of uncultivable microbial genomes to the identification of cancer-associated mutations. To obtain sufficient quantities of DNA for sequencing, the small amount of starting material must be amplified significantly. However, existing methods often yield errors or non-uniform coverage, reducing sequencing data quality. Here, we describe digital droplet multiple displacement amplification, a method that enables massive amplification of low-input material while maintaining sequence accuracy and uniformity. The low-input material is compartmentalized as single molecules in millions of picoliter droplets. Because the molecules are isolated in compartments, they amplify to saturation without competing for resources; this yields uniform representation of all sequences in the final product and, in turn, enhances the quality of the sequence data. We demonstrate the ability to uniformly amplify the genomes of single Escherichia coli cells, comprising just 4.7 fg of starting DNA, and obtain sequencing coverage distributions that rival that of unamplified material. Digital droplet multiple displacement amplification provides a simple and effective method for amplifying minute amounts of DNA for accurate and uniform sequencing. PMID:26704978
Genomic signal analysis of pathogen variability
NASA Astrophysics Data System (ADS)
Cristea, Paul Dan
2006-02-01
The paper presents results in the study of pathogen variability by using genomic signals. The conversion of symbolic nucleotide sequences into digital signals offers the possibility to apply signal processing methods to the analysis of genomic data. The method is particularly well suited to characterize small size genomic sequences, such as those found in viruses and bacteria, being a promising tool in tracking the variability of pathogens, especially in the context of developing drug resistance. The paper is based on data downloaded from GenBank [32], and comprises results on the variability of the eight segments of the influenza type A, subtype H5N1, virus genome, and of the Hemagglutinin (HA) gene, for the H1, H2, H3, H4, H5 and H16 types. Data from human and avian virus isolates are used.
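As a concrete illustration of the sequence-to-signal conversion that genomic signal analysis relies on, the sketch below maps nucleotides onto the complex plane and computes a cumulative-phase descriptor. The particular mapping and descriptor are common choices in the GSP literature rather than necessarily the exact ones used in this paper.

```python
import numpy as np

# one common complex-plane mapping from the GSP literature; the exact
# representation used in the paper may differ
NUCLEOTIDE_MAP = {"A": 1 + 1j, "C": -1 - 1j, "G": -1 + 1j, "T": 1 - 1j}

def dna_to_signal(sequence):
    """Convert a nucleotide string into a complex-valued digital signal."""
    return np.array([NUCLEOTIDE_MAP[base] for base in sequence.upper()])

def cumulative_phase(signal):
    """Cumulative phase along the sequence, a coarse descriptor used to
    visualize large-scale composition trends and variability."""
    return np.cumsum(np.angle(signal))

signal = dna_to_signal("ATGCGTAACGGTTAGC")
print(cumulative_phase(signal)[-1])
```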
Zill, Oliver A.; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A.; Divers, Stephen G.; Hoon, Dave S. B.; Kopetz, E. Scott; Lee, Jeeyun; Nikolinakos, Petros G.; Baca, Arthur M.; Kermani, Bahram G.; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient's cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing. PMID:26474073
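Whether a 0.1% mutant allele fraction is even detectable is ultimately a molecule-sampling question: the input must contain enough genome equivalents for a few mutant molecules to be present at all. The hedged sketch below computes that binomial sampling probability; the 5,000-genome-equivalent input and the threshold of three molecules are illustrative assumptions, not parameters of the assay.

```python
import math

def p_at_least_k(n_genomes, maf, k=1):
    """Probability that at least k mutant molecules are present when
    sampling n_genomes genome equivalents at mutant allele fraction maf
    (pure binomial sampling; library losses and errors ignored)."""
    p_fewer = sum(math.comb(n_genomes, i) * maf**i * (1 - maf) ** (n_genomes - i)
                  for i in range(k))
    return 1.0 - p_fewer

# assumed input: ~5,000 genome equivalents (roughly 16 ng of cfDNA)
# at a 0.1% mutant fraction, requiring >= 3 mutant molecules
print(round(p_at_least_k(5000, 0.001, k=3), 3))
```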
Gutman, David A; Cobb, Jake; Somanna, Dhananjaya; Park, Yuna; Wang, Fusheng; Kurc, Tahsin; Saltz, Joel H; Brat, Daniel J; Cooper, Lee A D
2013-01-01
Background: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. Objective: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. Materials and methods: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. Results: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20 000 whole-slide images from 22 cancer types. Discussion: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. Conclusions: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints. PMID:23893318
Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A.M.
2017-01-01
Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, i.e. droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. PMID:28077562
Wood-Bouwens, Christina; Lau, Billy T; Handy, Christine M; Lee, HoJoon; Ji, Hanlee P
2017-09-01
We describe a single-color digital PCR assay that detects and quantifies cancer mutations directly from circulating DNA collected from the plasma of cancer patients. This approach relies on a double-stranded DNA intercalator dye and paired allele-specific DNA primer sets to determine an absolute count of both the mutation and wild-type-bearing DNA molecules present in the sample. The cell-free DNA assay uses an input of 1 ng of nonamplified DNA, approximately 300 genome equivalents, and has a molecular limit of detection of three mutation DNA genome-equivalent molecules per assay reaction. When using more genome equivalents as input, we demonstrated a sensitivity of 0.10% for detecting the BRAF V600E and KRAS G12D mutations. We developed several mutation assays specific to the cancer driver mutations of patients' tumors and detected these same mutations directly from the nonamplified, circulating cell-free DNA. This rapid and high-performance digital PCR assay can be configured to detect specific cancer mutations unique to an individual cancer, making it a potentially valuable method for patient-specific longitudinal monitoring. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Virology: The Next Generation from Digital PCR to Single Virion Genomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Richard A.; Brazelton De Cardenas, Jessica N.; Hayden, Randall T.
In the past 25 years, virology has had major technology breakthroughs stemming first from the introduction of nucleic acid amplification testing, but more recently from the use of next-generation sequencing, digital PCR, and the possibility of single virion genomics. These technologies have and will improve diagnosis and disease state monitoring in clinical settings, aid in environmental monitoring, and reveal the vast genetic potential of viruses. Using the principle of limiting dilution, digital PCR amplifies single molecules of DNA in highly partitioned endpoint reactions and reads each of those reactions as either positive or negative based on the presence or absence of target fluorophore. In this review, digital PCR will be highlighted along with current studies, advantages/disadvantages, and future perspectives with regard to digital PCR, viral load testing, and the possibility of single virion genomics.
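The limiting-dilution principle described here leads directly to the Poisson correction used to turn positive/negative partition counts into an absolute concentration. A minimal worked example, with made-up droplet counts and a nominal ~0.85 nL partition volume, is sketched below.

```python
import math

def digital_pcr_concentration(n_positive, n_total, partition_volume_ul):
    """Target concentration (copies/uL of the partitioned mix) from
    endpoint counts, using the Poisson correction for partitions that
    received more than one molecule."""
    fraction_negative = (n_total - n_positive) / n_total
    lam = -math.log(fraction_negative)       # mean copies per partition
    return lam / partition_volume_ul

# e.g. 4,500 positive droplets out of 15,000 at ~0.85 nL per droplet
print(round(digital_pcr_concentration(4500, 15000, 0.00085), 1))
```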
Genomic signal processing methods for computation of alignment-free distances from DNA sequences.
Borrayo, Ernesto; Mendizabal-Ruiz, E Gerardo; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Mendizabal, Adriana P; Morales, J Alejandro
2014-01-01
Genomic signal processing (GSP) refers to the use of digital signal processing (DSP) tools for analyzing genomic data such as DNA sequences. A possible application of GSP that has not been fully explored is the computation of the distance between a pair of sequences. In this work we present GAFD, a novel GSP alignment-free distance computation method. We introduce a DNA sequence-to-signal mapping function based on the employment of doublet values, which increases the number of possible amplitude values for the generated signal. Additionally, we explore the use of three DSP distance metrics as descriptors for categorizing DNA signal fragments. Our results indicate the feasibility of employing GAFD for computing sequence distances and the use of descriptors for characterizing DNA fragments.
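To make the GAFD idea concrete, the sketch below maps overlapping dinucleotides ("doublets") to integer amplitudes and compares two sequences by the Euclidean distance between their magnitude spectra. Both the doublet-value assignment and the choice of spectral distance are illustrative assumptions; the paper's actual mapping and its three DSP metrics may differ.

```python
import itertools
import numpy as np

# assign each of the 16 dinucleotides an integer amplitude (0..15);
# the actual doublet-value assignment in GAFD may differ
DOUBLET_VALUES = {"".join(pair): value for value, pair in
                  enumerate(itertools.product("ACGT", repeat=2))}

def doublet_signal(sequence):
    """Map overlapping dinucleotides of a DNA string to an amplitude signal."""
    sequence = sequence.upper()
    return np.array([DOUBLET_VALUES[sequence[i:i + 2]]
                     for i in range(len(sequence) - 1)], dtype=float)

def spectral_distance(seq_a, seq_b, n_points=256):
    """Euclidean distance between magnitude spectra of the two signals,
    one possible alignment-free DSP metric."""
    spectrum_a = np.abs(np.fft.rfft(doublet_signal(seq_a), n=n_points))
    spectrum_b = np.abs(np.fft.rfft(doublet_signal(seq_b), n=n_points))
    return float(np.linalg.norm(spectrum_a - spectrum_b))

print(spectral_distance("ATGCATGCAAGTTGCA", "ATGCATGGAAGTTGCA"))
```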
Genomic Signal Processing Methods for Computation of Alignment-Free Distances from DNA Sequences
Borrayo, Ernesto; Mendizabal-Ruiz, E. Gerardo; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Mendizabal, Adriana P.; Morales, J. Alejandro
2014-01-01
Genomic signal processing (GSP) refers to the use of digital signal processing (DSP) tools for analyzing genomic data such as DNA sequences. A possible application of GSP that has not been fully explored is the computation of the distance between a pair of sequences. In this work we present GAFD, a novel GSP alignment-free distance computation method. We introduce a DNA sequence-to-signal mapping function based on the employment of doublet values, which increases the number of possible amplitude values for the generated signal. Additionally, we explore the use of three DSP distance metrics as descriptors for categorizing DNA signal fragments. Our results indicate the feasibility of employing GAFD for computing sequence distances and the use of descriptors for characterizing DNA fragments. PMID:25393409
A time-and-motion approach to micro-costing of high-throughput genomic assays
Costa, S.; Regier, D.A.; Meissner, B.; Cromwell, I.; Ben-Neriah, S.; Chavez, E.; Hung, S.; Steidl, C.; Scott, D.W.; Marra, M.A.; Peacock, S.J.; Connors, J.M.
2016-01-01
Background: Genomic technologies are increasingly used to guide clinical decision-making in cancer control. Economic evidence about the cost-effectiveness of genomic technologies is limited, in part because of a lack of published comprehensive cost estimates. In the present micro-costing study, we used a time-and-motion approach to derive cost estimates for 3 genomic assays and processes—digital gene expression profiling (gep), fluorescence in situ hybridization (fish), and targeted capture sequencing, including bioinformatics analysis—in the context of lymphoma patient management. Methods: The setting for the study was the Department of Lymphoid Cancer Research laboratory at the BC Cancer Agency in Vancouver, British Columbia. Mean per-case hands-on time and resource measurements were determined from a series of direct observations of each assay. Per-case cost estimates were calculated using a bottom-up costing approach, with labour, capital and equipment, supplies and reagents, and overhead costs included. Results: The most labour-intensive assay was found to be fish at 258.2 minutes per case, followed by targeted capture sequencing (124.1 minutes per case) and digital gep (14.9 minutes per case). Based on a historical case throughput of 180 cases annually, the mean per-case cost (2014 Canadian dollars) was estimated to be $1,029.16 for targeted capture sequencing and bioinformatics analysis, $596.60 for fish, and $898.35 for digital gep with an 807-gene code set. Conclusions: With the growing emphasis on personalized approaches to cancer management, the need for economic evaluations of high-throughput genomic assays is increasing. Through economic modelling and budget-impact analyses, the cost estimates presented here can be used to inform priority-setting decisions about the implementation of such assays in clinical practice. PMID:27803594
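The bottom-up costing logic reduces to simple arithmetic: direct labour and supplies per case plus an equal share of annual equipment and overhead costs spread over throughput. The sketch below shows that arithmetic with hypothetical inputs loosely shaped like the digital GEP assay; the rates and amounts are not taken from the study.

```python
def per_case_cost(hands_on_min, wage_per_hour, supplies_per_case,
                  annual_equipment_cost, annual_overhead, cases_per_year=180):
    """Bottom-up per-case cost: direct labour and supplies plus an
    equal per-case share of annual equipment and overhead costs."""
    labour = hands_on_min / 60.0 * wage_per_hour
    fixed_share = (annual_equipment_cost + annual_overhead) / cases_per_year
    return labour + supplies_per_case + fixed_share

# hypothetical inputs loosely shaped like the digital GEP assay
print(round(per_case_cost(14.9, 40.0, 750.0, 20000.0, 5000.0), 2))
```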
The Basic Science Program will receive genomic DNA at a concentration of 50 ng/ul. Human leukocyte antigen (HLA) typing will be performed using a targeted next-generation sequencing (NGS) method. Briefly, locus-specific primers are used
Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A M; Nilsson, Mats
2017-05-05
Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, i.e. droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
ERIC Educational Resources Information Center
Busstra, Maria C.; Hartog, Rob; Kersten, Sander; Muller, Michael
2007-01-01
Nutritional genomics, or nutrigenomics, can be considered as the combination of molecular nutrition and genomics. Students who attend courses in nutrigenomics differ with respect to their prior knowledge. This study describes digital nutrigenomics learning material suitable for students from various backgrounds and provides design guidelines for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhee, Minsoung; Light, Yooli K.; Meagher, Robert J.
Here, multiple displacement amplification (MDA) is a widely used technique for amplification of DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram amounts of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique where partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled a more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent of ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly, and more than 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to the conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.
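The "~3/1000 of an E. coli genome per droplet" figure follows from straightforward mass arithmetic once a droplet volume is assumed. The sketch below reproduces it with an assumed ~0.15 nL droplet (the abstract specifies only "sub-nanoliter") and the usual ~650 g/mol average mass per base pair.

```python
AVOGADRO = 6.022e23
ECOLI_GENOME_BP = 4.6e6
BP_MASS_G = 650.0 / AVOGADRO                 # ~650 g/mol per base pair

genome_mass_fg = ECOLI_GENOME_BP * BP_MASS_G * 1e15      # ~5 fg per genome

dna_conc_fg_per_ul = 0.1 * 1e3               # 0.1 pg/uL -> 100 fg/uL
droplet_volume_ul = 0.15e-3                  # assumed ~0.15 nL droplet

genomes_per_droplet = dna_conc_fg_per_ul * droplet_volume_ul / genome_mass_fg
print(round(genomes_per_droplet, 4))         # ~0.003, i.e. ~3/1000 of a genome
```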
Rhee, Minsoung; Light, Yooli K.; Meagher, Robert J.; ...
2016-05-04
Here, multiple displacement amplification (MDA) is a widely used technique for amplification of DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram amounts of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique where partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled a more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent of ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly, and more than 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to the conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
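For contrast with the repeat-aware model, the sketch below shows the naive baseline the abstract describes: count k-mers across reads and flag low-frequency ones as likely errors using a fixed cutoff. The read set, k, and threshold are toy values; the paper's contribution is precisely to replace this fixed cutoff with inferred genomic frequencies and position-dependent error rates.

```python
from collections import Counter

def kmer_counts(reads, k=15):
    """Count k-mer occurrences across a set of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_suspect_kmers(counts, threshold=2):
    """Naive fixed-cutoff filter: k-mers seen fewer than `threshold`
    times are flagged as likely errors. The repeat-aware model in the
    paper instead estimates each k-mer's genomic frequency and a
    data-driven threshold."""
    return {kmer for kmer, count in counts.items() if count < threshold}

reads = ["ACGTACGTACGTACGTAC", "ACGTACGTACGTACGTAC", "ACGTACGTACGAACGTAC"]
print(len(flag_suspect_kmers(kmer_counts(reads))))
```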
Genomic data into everyday work of a medical practitioner - digital tools for decision-making.
Jokiranta, Sakari; Hotakainen, Kristina; Salonen, Iiris; Pöllänen, Pasi; Hänninen, Kai-Petri; Forsström, Jari; Kunnamo, Ilkka
Recent technological development has enabled fast and cost-effective simultaneous analyses of several gene variants or even sequencing of the whole genome. For medical practitioners this has created challenges, although genomic information may be clinically useful in new applications such as finding out individual risk for diseases influenced by as many as 50,000 variable DNA regions or in detecting pharmacogenetic risks prior to prescribing a medicine. New digital tools have paved the way for utilization of genomic data via easy access and clear clinical interpretation for both doctor and patient. In this review we describe some of these tools and applications for clinical use.
Quantifying EGFR alterations in the lung cancer genome with nanofluidic digital PCR arrays.
Wang, Jun; Ramakrishnan, Ramesh; Tang, Zhe; Fan, Weiwen; Kluge, Amy; Dowlati, Afshin; Jones, Robert C; Ma, Patrick C
2010-04-01
The EGFR [epidermal growth factor receptor (erythroblastic leukemia viral (v-erb-b) oncogene homolog, avian)] gene is known to harbor genomic alterations in advanced lung cancer involving gene amplification and kinase mutations that predict the clinical response to EGFR-targeted inhibitors. Methods for detecting such molecular changes in lung cancer tumors are desirable. We used a nanofluidic digital PCR array platform and 16 cell lines and 20 samples of genomic DNA from resected tumors (stages I-III) to quantify the relative numbers of copies of the EGFR gene and to detect mutated EGFR alleles in lung cancer. We assessed the relative number of EGFR gene copies by calculating the ratio of the number of EGFR molecules (measured with a 6-carboxyfluorescein-labeled Scorpion assay) to the number of molecules of the single-copy gene RPP30 (ribonuclease P/MRP 30kDa subunit) (measured with a 6-carboxy-X-rhodamine-labeled TaqMan assay) in each panel. To assay for the EGFR L858R (exon 21) mutation and exon 19 in-frame deletions, we used the ARMS and Scorpion technologies in a DxS/Qiagen EGFR29 Mutation Test Kit for the digital PCR array. The digital array detected and quantified rare gefitinib/erlotinib-sensitizing EGFR mutations (0.02%-9.26% abundance) that were present in formalin-fixed, paraffin-embedded samples of early-stage resectable lung tumors without an associated increase in gene copy number. Our results also demonstrated the presence of intratumor molecular heterogeneity for the clinically relevant EGFR mutated alleles in these early-stage lung tumors. The digital PCR array platform allows characterization and quantification of oncogenes, such as EGFR, at the single-molecule level. Use of this nanofluidics platform may provide deeper insight into the specific roles of clinically relevant kinase mutations during different stages of lung tumor progression and may be useful in predicting the clinical response to EGFR-targeted inhibitors.
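The relative EGFR copy number is the ratio of the two digital counts, with a Poisson correction for chambers that receive more than one molecule. The sketch below shows that calculation with hypothetical counts from a 765-chamber panel (the panel size and counts are assumptions for illustration).

```python
import math

def poisson_copies(n_positive, n_total):
    """Poisson-corrected mean target copies per chamber from endpoint
    positive/negative counts."""
    return -math.log((n_total - n_positive) / n_total)

def relative_copy_number(egfr_positive, rpp30_positive, n_chambers):
    """EGFR abundance relative to the single-copy reference RPP30;
    multiplying by 2 would express it as copies per diploid genome."""
    return (poisson_copies(egfr_positive, n_chambers)
            / poisson_copies(rpp30_positive, n_chambers))

# hypothetical counts: EGFR elevated ~3.5-fold relative to RPP30
print(round(relative_copy_number(300, 100, 765), 2))
```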
Shen, Feng; Davydova, Elena K; Du, Wenbin; Kreutz, Jason E; Piepenburg, Olaf; Ismagilov, Rustem F
2011-05-01
In this paper, digital quantitative detection of nucleic acids was achieved at the single-molecule level by chemical initiation of over one thousand sequence-specific, nanoliter isothermal amplification reactions in parallel. Digital polymerase chain reaction (digital PCR), a method used for quantification of nucleic acids, counts the presence or absence of amplification of individual molecules. However, it still requires temperature cycling, which is undesirable under resource-limited conditions. This makes isothermal methods for nucleic acid amplification, such as recombinase polymerase amplification (RPA), more attractive. A microfluidic digital RPA SlipChip is described here for simultaneous initiation of over one thousand nL-scale RPA reactions by adding a chemical initiator to each reaction compartment with a simple slipping step after instrument-free pipet loading. Two designs of the SlipChip, two-step slipping and one-step slipping, were validated using digital RPA. By using the digital RPA SlipChip, false-positive results from preinitiation of the RPA amplification reaction before incubation were eliminated. End point fluorescence readout was used for "yes or no" digital quantification. The performance of digital RPA in a SlipChip was validated by amplifying and counting single molecules of the target nucleic acid, methicillin-resistant Staphylococcus aureus (MRSA) genomic DNA. The digital RPA on SlipChip was also tolerant to fluctuations of the incubation temperature (37-42 °C), and its performance was comparable to digital PCR on the same SlipChip design. The digital RPA SlipChip provides a simple method to quantify nucleic acids without requiring thermal cycling or kinetic measurements, with potential applications in diagnostics and environmental monitoring under resource-limited settings. The ability to initiate thousands of chemical reactions in parallel on the nanoliter scale using solvent-resistant glass devices is likely to be useful for a broader range of applications.
Shen, Feng; Davydova, Elena K.; Du, Wenbin; Kreutz, Jason E.; Piepenburg, Olaf; Ismagilov, Rustem F.
2011-01-01
In this paper, digital quantitative detection of nucleic acids was achieved at the single-molecule level by chemical initiation of over one thousand sequence-specific, nanoliter, isothermal amplification reactions in parallel. Digital polymerase chain reaction (digital PCR), a method used for quantification of nucleic acids, counts the presence or absence of amplification of individual molecules. However it still requires temperature cycling, which is undesirable under resource-limited conditions. This makes isothermal methods for nucleic acid amplification, such as recombinase polymerase amplification (RPA), more attractive. A microfluidic digital RPA SlipChip is described here for simultaneous initiation of over one thousand nL-scale RPA reactions by adding a chemical initiator to each reaction compartment with a simple slipping step after instrument-free pipette loading. Two designs of the SlipChip, two-step slipping and one-step slipping, were validated using digital RPA. By using the digital RPA SlipChip, false positive results from pre-initiation of the RPA amplification reaction before incubation were eliminated. End-point fluorescence readout was used for “yes or no” digital quantification. The performance of digital RPA in a SlipChip was validated by amplifying and counting single molecules of the target nucleic acid, Methicillin-resistant Staphylococcus aureus (MRSA) genomic DNA. The digital RPA on SlipChip was also tolerant to fluctuations of the incubation temperature (37–42 °C), and its performance was comparable to digital PCR on the same SlipChip design. The digital RPA SlipChip provides a simple method to quantify nucleic acids without requiring thermal cycling or kinetic measurements, with potential applications in diagnostics and environmental monitoring under resource-limited settings. The ability to initiate thousands of chemical reactions in parallel on the nanoliter scale using solvent-resistant glass devices is likely to be useful for a broader range of applications. PMID:21476587
Thauvin-Robinet, Christel; Franco, Brunella; Saugier-Veber, Pascale; Aral, Bernard; Gigot, Nadège; Donzel, Anne; Van Maldergem, Lionel; Bieth, Eric; Layet, Valérie; Mathieu, Michèle; Teebi, Ahmad; Lespinasse, James; Callier, Patrick; Mugneret, Francine; Masurel-Paulet, Alice; Gautier, Elodie; Huet, Frédéric; Teyssier, Jean-Raymond; Tosi, Mario; Frébourg, Thierry; Faivre, Laurence
2009-02-01
Oral-facial-digital type I syndrome (OFDI) is characterised by an X-linked dominant mode of inheritance with lethality in males. Clinical features include facial dysmorphism with oral, dental and distal abnormalities, polycystic kidney disease and central nervous system malformations. Considerable allelic heterogeneity has been reported within the OFD1 gene, but DNA bi-directional sequencing of the exons and intron-exon boundaries of the OFD1 gene remains negative in more than 20% of cases. We hypothesized that genomic rearrangements could account for the majority of the remaining undiagnosed cases. Thus, we took advantage of two independent available series of patients with OFDI syndrome and negative DNA bi-directional sequencing of the exons and intron-exon boundaries of the OFD1 gene from two different European labs: 13/36 cases from the French lab; 13/95 from the Italian lab. All patients were screened by a semiquantitative fluorescent multiplex method (QFMPSF) and relative quantification by real-time PCR (qPCR). Six OFD1 genomic deletions (exon 5, exons 1-8, exons 1-14, exons 10-11, exons 13-23 and exon 17) were identified, accounting for 5% of OFDI patients and for 23% of patients with negative mutation screening by DNA sequencing. The association of DNA direct sequencing, QFMPSF and qPCR detects OFD1 alteration in up to 85% of patients with a phenotype suggestive of OFDI syndrome. Given the average percentage of large genomic rearrangements (5%), we suggest that dosage methods should be performed in addition to DNA direct sequencing analysis to exclude the involvement of the OFD1 transcript when there are genetic counselling issues. (c) 2008 Wiley-Liss, Inc.
Kim, Tae Hoon; Dekker, Job
2018-05-01
Owing to its digital nature, ChIP-seq has become the standard method for genome-wide ChIP analysis. Using next-generation sequencing platforms (notably the Illumina Genome Analyzer), millions of short sequence reads can be obtained. The densities of recovered ChIP sequence reads along the genome are used to determine the binding sites of the protein. Although a relatively small amount of ChIP DNA is required for ChIP-seq, the current sequencing platforms still require amplification of the ChIP DNA by ligation-mediated PCR (LM-PCR). This protocol, which involves linker ligation followed by size selection, is the standard ChIP-seq protocol using an Illumina Genome Analyzer. The size-selected ChIP DNA is amplified by LM-PCR and size-selected for the second time. The purified ChIP DNA is then loaded into the Genome Analyzer. The ChIP DNA can also be processed in parallel for ChIP-chip results. © 2018 Cold Spring Harbor Laboratory Press.
Efficient Parameter Searches for Colloidal Materials Design with Digital Alchemy
NASA Astrophysics Data System (ADS)
Dodd, Paul M.; Geng, Yina; van Anders, Greg; Glotzer, Sharon C.
Optimal colloidal materials design is challenging, even for high-throughput or genomic approaches, because the design space provided by modern colloid synthesis techniques can easily have dozens of dimensions. In this talk we present the methodology of an inverse approach we term "digital alchemy" to perform rapid searches of design-parameter spaces with up to 188 dimensions that yield thermodynamically optimal colloid parameters for target crystal structures with up to 20 particles in a unit cell. The method relies only on fundamental principles of statistical mechanics and Metropolis Monte Carlo techniques, and yields particle attribute tolerances via analogues of familiar stress-strain relationships.
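A heavily simplified sketch of the Metropolis ingredient is given below: a single design parameter performs a Metropolis walk against an objective that stands in for the thermodynamic potential favoring the target structure. The real digital-alchemy method samples particle attributes jointly with full particle configurations in an extended ensemble; the objective, step size, and "temperature" here are arbitrary toy choices.

```python
import math
import random

def metropolis_design_search(objective, alpha0, n_steps=5000,
                             step=0.05, beta=200.0, bounds=(0.0, 1.0)):
    """Toy Metropolis walk over one design parameter `alpha`.
    `objective(alpha)` stands in for a thermodynamic potential favoring
    the target structure; the actual digital-alchemy method samples
    particle attributes jointly with particle configurations."""
    alpha = alpha0
    for _ in range(n_steps):
        trial = min(max(alpha + random.uniform(-step, step), bounds[0]), bounds[1])
        if random.random() < math.exp(-beta * (objective(trial) - objective(alpha))):
            alpha = trial
    return alpha

# hypothetical objective whose minimum sits near alpha = 0.62
print(round(metropolis_design_search(lambda a: (a - 0.62) ** 2, alpha0=0.2), 2))
```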
Digital Family Histories for Data Mining
Hoyt, Robert; Linnville, Steven; Chung, Hui-Min; Hutfless, Brent; Rice, Courtney
2013-01-01
As we move closer to ubiquitous electronic health records (EHRs), genetic, familial, and clinical information will need to be incorporated into EHRs as structured data that can be used for data mining and clinical decision support. While the Human Genome Project has produced new and exciting genomic data, the cost to sequence the human personal genome is high, and significant controversies regarding how to interpret genomic data exist. Many experts feel that the family history is a surrogate marker for genetic information and should be part of any paper-based or electronic health record. A digital family history is now part of the Meaningful Use Stage 2 menu objectives for EHR reimbursement, projected for 2014. In this study, a secure online family history questionnaire was designed to collect data on a unique cohort of Vietnam-era repatriated male veterans and a comparison group in order to compare participant and family disease rates on common medical disorders with a genetic component. This article describes our approach to create the digital questionnaire and the results of analyzing family history data on 319 male participants. PMID:24159269
Digital family histories for data mining.
Hoyt, Robert; Linnville, Steven; Chung, Hui-Min; Hutfless, Brent; Rice, Courtney
2013-01-01
As we move closer to ubiquitous electronic health records (EHRs), genetic, familial, and clinical information will need to be incorporated into EHRs as structured data that can be used for data mining and clinical decision support. While the Human Genome Project has produced new and exciting genomic data, the cost to sequence the human personal genome is high, and significant controversies regarding how to interpret genomic data exist. Many experts feel that the family history is a surrogate marker for genetic information and should be part of any paper-based or electronic health record. A digital family history is now part of the Meaningful Use Stage 2 menu objectives for EHR reimbursement, projected for 2014. In this study, a secure online family history questionnaire was designed to collect data on a unique cohort of Vietnam-era repatriated male veterans and a comparison group in order to compare participant and family disease rates on common medical disorders with a genetic component. This article describes our approach to create the digital questionnaire and the results of analyzing family history data on 319 male participants.
Nacheva, Elizabeth; Mokretar, Katya; Soenmez, Aynur; Pittman, Alan M; Grace, Colin; Valli, Roberto; Ejaz, Ayesha; Vattathil, Selina; Maserati, Emanuela; Houlden, Henry; Taanman, Jan-Willem; Schapira, Anthony H; Proukakis, Christos
2017-01-01
Potential bias introduced during DNA isolation is inadequately explored, although it could have significant impact on downstream analysis. To investigate this in human brain, we isolated DNA from cerebellum and frontal cortex using spin columns under different conditions, and salting-out. We first analysed DNA using array CGH, which revealed a striking wave pattern suggesting primarily GC-rich cerebellar losses, even against matched frontal cortex DNA, with a similar pattern on a SNP array. The aCGH changes varied with the isolation protocol. Droplet digital PCR of two genes also showed protocol-dependent losses. Whole genome sequencing showed GC-dependent variation in coverage with spin column isolation from cerebellum. We also extracted and sequenced DNA from substantia nigra using salting-out and phenol / chloroform. The mtDNA copy number, assessed by reads mapping to the mitochondrial genome, was higher in substantia nigra when using phenol / chloroform. We thus provide evidence for significant method-dependent bias in DNA isolation from human brain, as reported in rat tissues. This may contribute to array "waves", and could affect copy number determination, particularly if mosaicism is being sought, and sequencing coverage. Variations in isolation protocol may also affect apparent mtDNA abundance.
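The read-based mtDNA copy-number estimate mentioned here amounts to a coverage ratio scaled by nuclear ploidy. The sketch below shows that calculation with invented read counts; it assumes uniform coverage and ignores the GC and mapping biases that the study shows depend on the extraction protocol.

```python
def mtdna_copies_per_cell(mito_reads, nuclear_reads,
                          mito_len=16_569, nuclear_len=3.1e9, ploidy=2):
    """Rough mtDNA copy-number estimate from WGS read counts: the
    mitochondrial-to-nuclear coverage ratio scaled by nuclear ploidy.
    Assumes uniform coverage and ignores protocol-dependent biases."""
    mito_coverage = mito_reads / mito_len
    nuclear_coverage = nuclear_reads / nuclear_len
    return ploidy * mito_coverage / nuclear_coverage

# invented read counts for a substantia nigra library
print(round(mtdna_copies_per_cell(mito_reads=2.0e6, nuclear_reads=6.0e8)))
```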
Nacheva, Elizabeth; Mokretar, Katya; Soenmez, Aynur; Pittman, Alan M.; Grace, Colin; Valli, Roberto; Ejaz, Ayesha; Vattathil, Selina; Maserati, Emanuela; Houlden, Henry; Taanman, Jan-Willem; Schapira, Anthony H.
2017-01-01
Potential bias introduced during DNA isolation is inadequately explored, although it could have significant impact on downstream analysis. To investigate this in human brain, we isolated DNA from cerebellum and frontal cortex using spin columns under different conditions, and salting-out. We first analysed DNA using array CGH, which revealed a striking wave pattern suggesting primarily GC-rich cerebellar losses, even against matched frontal cortex DNA, with a similar pattern on a SNP array. The aCGH changes varied with the isolation protocol. Droplet digital PCR of two genes also showed protocol-dependent losses. Whole genome sequencing showed GC-dependent variation in coverage with spin column isolation from cerebellum. We also extracted and sequenced DNA from substantia nigra using salting-out and phenol / chloroform. The mtDNA copy number, assessed by reads mapping to the mitochondrial genome, was higher in substantia nigra when using phenol / chloroform. We thus provide evidence for significant method-dependent bias in DNA isolation from human brain, as reported in rat tissues. This may contribute to array “waves”, and could affect copy number determination, particularly if mosaicism is being sought, and sequencing coverage. Variations in isolation protocol may also affect apparent mtDNA abundance. PMID:28683077
Survey of protein–DNA interactions in Aspergillus oryzae on a genomic scale
Wang, Chao; Lv, Yangyong; Wang, Bin; Yin, Chao; Lin, Ying; Pan, Li
2015-01-01
The genome-scale delineation of in vivo protein–DNA interactions is key to understanding genome function. Only ∼5% of transcription factors (TFs) in the Aspergillus genus have been identified using traditional methods. Although the Aspergillus oryzae genome contains >600 TFs, knowledge of the in vivo genome-wide TF-binding sites (TFBSs) in aspergilli remains limited because of the lack of high-quality antibodies. We investigated the landscape of in vivo protein–DNA interactions across the A. oryzae genome through coupling the DNase I digestion of intact nuclei with massively parallel sequencing and the analysis of cleavage patterns in protein–DNA interactions at single-nucleotide resolution. The resulting map identified overrepresented de novo TF-binding motifs from genomic footprints, and provided the detailed chromatin remodeling patterns and the distribution of digital footprints near transcription start sites. The TFBSs of 19 known Aspergillus TFs were also identified based on DNase I digestion data surrounding potential binding sites in conjunction with TF binding specificity information. We observed that the cleavage patterns of TFBSs were dependent on the orientation of TF motifs and independent of strand orientation, consistent with the DNA shape features of binding motifs with flanking sequences. PMID:25883143
Accurate measurement of transgene copy number in crop plants using droplet digital PCR.
Collier, Ray; Dasgupta, Kasturi; Xing, Yan-Ping; Hernandez, Bryan Tarape; Shao, Min; Rohozinski, Dominica; Kovak, Emma; Lin, Jeanie; de Oliveira, Maria Luiza P; Stover, Ed; McCue, Kent F; Harmon, Frank G; Blechl, Ann; Thomson, James G; Thilmony, Roger
2017-06-01
Genetic transformation is a powerful means for the improvement of crop plants, but requires labor- and resource-intensive methods. An efficient method for identifying single-copy transgene insertion events from a population of independent transgenic lines is desirable. Currently, transgene copy number is estimated by either Southern blot hybridization analyses or quantitative polymerase chain reaction (qPCR) experiments. Southern hybridization is a convincing and reliable method, but it also is expensive, time-consuming and often requires a large amount of genomic DNA and radioactively labeled probes. Alternatively, qPCR requires less DNA and is potentially simpler to perform, but its results can lack the accuracy and precision needed to confidently distinguish between one- and two-copy events in transgenic plants with large genomes. To address this need, we developed a droplet digital PCR-based method for transgene copy number measurement in an array of crops: rice, citrus, potato, maize, tomato and wheat. The method utilizes specific primers to amplify target transgenes, and endogenous reference genes in a single duplexed reaction containing thousands of droplets. Endpoint amplicon production in the droplets is detected and quantified using sequence-specific fluorescently labeled probes. The results demonstrate that this approach can generate confident copy number measurements in independent transgenic lines in these crop species. This method and the compendium of probes and primers will be a useful resource for the plant research community, enabling the simple and accurate determination of transgene copy number in these six important crop species. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
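In the duplexed ddPCR readout, transgene copy number per genome is the target-to-reference concentration ratio scaled by how many copies of the reference each genome carries. A minimal sketch with invented droplet-reader concentrations:

```python
def transgene_copy_number(target_copies_per_ul, reference_copies_per_ul,
                          reference_copies_per_genome=2):
    """Transgene copies per genome from a duplexed ddPCR reaction,
    assuming the endogenous reference occurs reference_copies_per_genome
    times per genome (2 for a single-copy gene in a diploid)."""
    return (reference_copies_per_genome
            * target_copies_per_ul / reference_copies_per_ul)

# invented droplet-reader concentrations: ~1 copy -> single hemizygous insert
print(round(transgene_copy_number(410.0, 820.0), 2))
```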
Positive dental identification using tooth anatomy and digital superimposition.
Johansen, Raymond J; Michael Bowers, C
2013-03-01
Dental identification of unknown human remains continues to be a relevant and reliable adjunct to forensic investigations. The advent of genomic and mitochondrial DNA procedures has not displaced the practical use of dental and related osseous structures remaining after destructive incidents that can render human remains unrecognizable, severely burned, and fragmented. The ability to conclusively identify victims of accident and homicide is based on the availability of antemortem records containing substantial and unambiguous proof of dental and related osseous characteristics. This case report documents the use of digital comparative analysis of antemortem dental models and postmortem dentition, to determine a dental identification. Images of dental models were digitally analyzed using Adobe Photoshop(TM) software. Individual tooth anatomy was compared between the antemortem and postmortem images. Digital superimposition techniques were also used for the comparison. With the absence of antemortem radiographs, this method proved useful to reach a positive identification in this case. © 2012 American Academy of Forensic Sciences.
Digital gene expression for non-model organisms
Hong, Lewis Z.; Li, Jun; Schmidt-Küntzel, Anne; Warren, Wesley C.; Barsh, Gregory S.
2011-01-01
Next-generation sequencing technologies offer new approaches for global measurements of gene expression but are mostly limited to organisms for which a high-quality assembled reference genome sequence is available. We present a method for gene expression profiling called EDGE, or EcoP15I-tagged Digital Gene Expression, based on ultra-high-throughput sequencing of 27-bp cDNA fragments that uniquely tag the corresponding gene, thereby allowing direct quantification of transcript abundance. We show that EDGE is capable of assaying for expression in >99% of genes in the genome and achieves saturation after 6–8 million reads. EDGE exhibits very little technical noise, reveals a large (10⁶) dynamic range of gene expression, and is particularly suited for quantification of transcript abundance in non-model organisms where a high-quality annotated genome is not available. In a direct comparison with RNA-seq, both methods provide similar assessments of relative transcript abundance, but EDGE does better at detecting gene expression differences for poorly expressed genes and does not exhibit transcript length bias. Applying EDGE to laboratory mice, we show that a loss-of-function mutation in the melanocortin 1 receptor (Mc1r), recognized as a Mendelian determinant of yellow hair color in many different mammals, also causes reduced expression of genes involved in the interferon response. To illustrate the application of EDGE to a non-model organism, we examine skin biopsy samples from a cheetah (Acinonyx jubatus) and identify genes likely to control differences in the color of spotted versus non-spotted regions. PMID:21844123
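Because each EcoP15I tag maps to one gene, expression estimation reduces to counting tags per gene and normalizing by library size. The sketch below shows that counting step with toy tags and a toy tag-to-gene lookup; a real EDGE analysis would build the lookup from an in-silico digest of reference transcripts.

```python
from collections import Counter

def edge_expression(tags, tag_to_gene):
    """Count 27-bp tags per gene and convert to tags-per-million values.
    `tag_to_gene` would come from an in-silico EcoP15I digest of the
    reference transcripts; tags with no match are dropped."""
    per_gene = Counter(tag_to_gene[t] for t in tags if t in tag_to_gene)
    total = sum(per_gene.values())
    return {gene: 1e6 * count / total for gene, count in per_gene.items()}

# toy tags and lookup (real tags are 27-bp sequences adjacent to EcoP15I sites)
tag_to_gene = {"A" * 27: "Mc1r", "C" * 27: "Ifit1"}
tags = ["A" * 27] * 3 + ["C" * 27] * 7 + ["G" * 27] * 2
print(edge_expression(tags, tag_to_gene))
```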
Selective Amplification of the Genome Surrounding Key Placental Genes in Trophoblast Giant Cells.
Hannibal, Roberta L; Baker, Julie C
2016-01-25
While most cells maintain a diploid state, polyploid cells exist in many organisms and are particularly prevalent within the mammalian placenta [1], where they can generate more than 900 copies of the genome [2]. Polyploidy is thought to be an efficient method of increasing the content of the genome by avoiding the costly and slow process of cytokinesis [1, 3, 4]. Polyploidy can also affect gene regulation by amplifying a subset of genomic regions required for specific cellular function [1, 3, 4]. This mechanism is found in the fruit fly Drosophila melanogaster, where polyploid ovarian follicle cells amplify genomic regions containing chorion genes, which facilitate secretion of eggshell proteins [5]. Here, we report that genomic amplification also occurs in mammals at selective regions of the genome in parietal trophoblast giant cells (p-TGCs) of the mouse placenta. Using whole-genome sequencing (WGS) and digital droplet PCR (ddPCR) of mouse p-TGCs, we identified five amplified regions, each containing a gene family known to be involved in mammalian placentation: the prolactins (two clusters), serpins, cathepsins, and the natural killer (NK)/C-type lectin (CLEC) complex [6-12]. We report here the first description of amplification at selective genomic regions in mammals and present evidence that this is an important mode of genome regulation in placental TGCs. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Third Approach to Gene Prediction Suggests Thousands of Additional Human Transcribed Regions
Glusman, Gustavo; Qin, Shizhen; El-Gewely, M. Raafat; Siegel, Andrew F; Roach, Jared C; Hood, Leroy; Smit, Arian F. A
2006-01-01
The identification and characterization of the complete ensemble of genes is a main goal of deciphering the digital information stored in the human genome. Many algorithms for computational gene prediction have been described, ultimately derived from two basic concepts: (1) modeling gene structure and (2) recognizing sequence similarity. Successful hybrid methods combining these two concepts have also been developed. We present a third orthogonal approach to gene prediction, based on detecting the genomic signatures of transcription, accumulated over evolutionary time. We discuss four algorithms based on this third concept: Greens and CHOWDER, which quantify mutational strand biases caused by transcription-coupled DNA repair, and ROAST and PASTA, which are based on strand-specific selection against polyadenylation signals. We combined these algorithms into an integrated method called FEAST, which we used to predict the location and orientation of thousands of putative transcription units not overlapping known genes. Many of the newly predicted transcriptional units do not appear to code for proteins. The new algorithms are particularly apt at detecting genes with long introns and lacking sequence conservation. They therefore complement existing gene prediction methods and will help identify functional transcripts within many apparent “genomic deserts.” PMID:16543943
Cancer Slide Digital Archive (CDSA) | Informatics Technology for Cancer Research (ITCR)
The CDSA is a web-based platform to support the sharing, management and analysis of digital pathology data. The Emory instance currently hosts over 23,000 images from The Cancer Genome Atlas, and the software is being developed within the ITCR grant to be deployable as a digital pathology platform for other labs and Cancer Institutes.
Genomic characterization reconfirms the taxonomic status of Lactobacillus parakefiri
TANIZAWA, Yasuhiro; KOBAYASHI, Hisami; KAMINUMA, Eli; SAKAMOTO, Mitsuo; OHKUMA, Moriya; NAKAMURA, Yasukazu; ARITA, Masanori; TOHNO, Masanori
2017-01-01
Whole-genome sequencing was performed for Lactobacillus parakefiri JCM 8573T to confirm its hitherto controversial taxonomic position. Here, we report its first reliable reference genome. Genome-wide metrics, such as average nucleotide identity and digital DNA-DNA hybridization, and phylogenomic analysis based on multiple genes supported its taxonomic status as a distinct species in the genus Lactobacillus. The availability of a reliable genome sequence will aid future investigations on the industrial applications of L. parakefiri in functional foods such as kefir grains. PMID:28748134
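Average nucleotide identity, one of the genome-wide metrics used here, is essentially a length-weighted mean identity over fragment alignments between the two genomes. The sketch below computes it from precomputed (identity, length) pairs, which are invented for illustration; commonly cited thresholds are roughly 95-96% ANI and 70% digital DDH for species delineation.

```python
def average_nucleotide_identity(hits):
    """Length-weighted ANI from fragment alignments supplied as
    (percent_identity, alignment_length) pairs; real ANI pipelines
    produce these hits by fragmenting one genome and aligning the
    pieces against the other."""
    total_length = sum(length for _, length in hits)
    return sum(identity * length for identity, length in hits) / total_length

# invented fragment hits between two Lactobacillus genomes
hits = [(98.7, 1020), (97.9, 980), (99.1, 1005)]
ani = average_nucleotide_identity(hits)
print(f"ANI = {ani:.2f}% (values below ~95-96% suggest distinct species)")
```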
Belaghzal, Houda; Dekker, Job; Gibcus, Johan H
2017-07-01
Chromosome conformation capture-based methods such as Hi-C have become mainstream techniques for the study of the 3D organization of genomes. These methods convert chromatin interactions reflecting topological chromatin structures into digital information (counts of pair-wise interactions). Here, we describe an updated protocol for Hi-C (Hi-C 2.0) that integrates recent improvements into a single protocol for efficient and high-resolution capture of chromatin interactions. This protocol combines chromatin digestion and frequently cutting enzymes to obtain kilobase (kb) resolution. It also includes steps to reduce random ligation and the generation of uninformative molecules, such as unligated ends, to improve the amount of valid intra-chromosomal read pairs. This protocol allows for obtaining information on conformational structures such as compartment and topologically associating domains, as well as high-resolution conformational features such as DNA loops. Copyright © 2017 Elsevier Inc. All rights reserved.
Digital genotyping of avian influenza viruses of H7 subtype detected in central Europe in 2007-2011.
Nagy, Alexander; Cerníková, Lenka; Křivda, Vlastimil; Horníčková, Jitka
2012-05-01
The objective of our study was to provide a genotype analysis of H7N7 and H7N9 influenza A viruses (IAV) and infer their relationships to co-circulating non-H7 IAV genomes. The H7N7 strains were collected in central Europe (Hungary-1, Czech Republic-1, Slovenia-1 and Poland-4) and the H7N9 in the Czech Republic and Spain between 2007 and 2011. Hand in hand with this effort, a novel IAV genotype visualization approach called digital genotyping was developed. This approach relies on phylogenetic data summarization and transformation into a pixel array called a segment identity matrix. The digital genotyping revealed a complicated genetic interplay between the H7 and co-circulating non-H7 IAV genotypes. At the H7 IAV level the most obvious relationships were observed between one Polish H7N7/446/09 and Czech H7N7/11 viruses which, despite the spatial and temporal distance of 800 km and 15 months, retained at least 6/8 genome segments. Close relationships were also observed between the Czech H7N9, Polish and Slovenian H7N7 on one hand and Hungarian and Slovenian H7N7 isolates on the other. In addition the former genomes exhibited close interplays with the Czech H6N2/09 and H11N9/10-like viruses. The Czech and Spanish H7N9 genomes were completely different and 6/8 of the Czech H7N9-like segments were traced to either the Czech H3N8/07, H11N9/09 and Polish H7N7/09-like viruses. The results of digital genotyping correlated with the previous observations obtained on the Polish H7N7 isolates. As was demonstrated, the digital genotyping provides a well-arranged and easily interpretable output and may serve as an alternative genotyping tool useful for handling and analysing even a large panel of IAV genomes. Copyright © 2012 Elsevier B.V. All rights reserved.
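The segment identity matrix is easy to picture as code: each strain is a row, each of the eight genome segments a column, and each cell holds the lineage assigned to that segment by the phylogenetic analysis. The sketch below builds such a matrix from hypothetical lineage labels and strain names; rendering the integers as colours gives the pixel-array view described in the abstract.

```python
import numpy as np

SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

def segment_identity_matrix(strains, lineage_calls):
    """Encode per-segment lineage assignments as an integer matrix
    (strains x 8 segments). Rendering each integer as a colour gives
    the pixel-array view; the lineage labels below are hypothetical."""
    assert all(len(calls) == len(SEGMENTS) for calls in lineage_calls.values())
    lineages = sorted({lin for calls in lineage_calls.values() for lin in calls})
    code = {lin: i for i, lin in enumerate(lineages)}
    return np.array([[code[lin] for lin in lineage_calls[s]] for s in strains])

lineage_calls = {
    "H7N7/446/09": ["L1", "L1", "L2", "L1", "L1", "L3", "L1", "L2"],
    "H7N7/CZ/11":  ["L1", "L1", "L2", "L1", "L2", "L3", "L1", "L2"],
}
print(segment_identity_matrix(list(lineage_calls), lineage_calls))
```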
One Bacterial Cell, One Complete Genome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woyke, Tanja; Tighe, Damon; Mavrommatis, Konstantinos
2010-04-26
While the bulk of the finished microbial genomes sequenced to date are derived from cultured bacterial and archaeal representatives, the vast majority of microorganisms elude current culturing attempts, severely limiting the ability to recover complete or even partial genomes from these environmental species. Single cell genomics is a novel culture-independent approach, which enables access to the genetic material of an individual cell. To our knowledge, no single cell genome has been closed and finished to date. Here we report the completed genome from an uncultured single cell of Candidatus Sulcia muelleri DMIN. Digital PCR on single symbiont cells isolated from the bacteriome of the green sharpshooter Draeculacephala minerva allowed us to establish that this bacterium is polyploid, with genome copies ranging from approximately 200-900 per cell, making it a highly suitable target for single cell finishing efforts. For single cell shotgun sequencing, an individual Sulcia cell was isolated and whole genome amplified by multiple displacement amplification (MDA). Sanger-based finishing methods allowed us to close the genome. To verify the correctness of our single cell genome and exclude MDA-derived artifacts, we independently shotgun sequenced and assembled the Sulcia genome from pooled bacteriomes using a metagenomic approach, yielding a nearly identical genome. The four variations we detected appear to be genuine biological differences between the two samples. Comparison of the single cell genome with bacteriome metagenomic sequence data detected two single nucleotide polymorphisms (SNPs), indicating extremely low genetic diversity within a Sulcia population. This study demonstrates the power of single cell genomics to generate a complete, high-quality, non-composite reference genome within an environmental sample, which can be used for population genetic analyses.
Scalable Device for Automated Microbial Electroporation in a Digital Microfluidic Platform.
Madison, Andrew C; Royal, Matthew W; Vigneault, Frederic; Chen, Liji; Griffin, Peter B; Horowitz, Mark; Church, George M; Fair, Richard B
2017-09-15
Electrowetting-on-dielectric (EWD) digital microfluidic laboratory-on-a-chip platforms demonstrate excellent performance in automating labor-intensive protocols. When coupled with an on-chip electroporation capability, these systems hold promise for streamlining cumbersome processes such as multiplex automated genome engineering (MAGE). We integrated a single Ti:Au electroporation electrode into an otherwise standard parallel-plate EWD geometry to enable high-efficiency transformation of Escherichia coli with reporter plasmid DNA in a 200 nL droplet. Test devices exhibited robust operation with more than 10 transformation experiments performed per device without cross-contamination or failure. Despite intrinsic electric-field nonuniformity present in the EP/EWD device, the peak on-chip transformation efficiency was measured to be 8.6 ± 1.0 × 10⁸ cfu·μg⁻¹ for an average applied electric field strength of 2.25 ± 0.50 kV·mm⁻¹. Cell survival and transformation fractions at this electroporation pulse strength were found to be 1.5 ± 0.3% and 2.3 ± 0.1%, respectively. Our work expands the EWD toolkit to include on-chip microbial electroporation and opens the possibility of scaling advanced genome engineering methods, like MAGE, into the submicroliter regime.
Miyaoka, Yuichiro; Berman, Jennifer R; Cooper, Samantha B; Mayerl, Steven J; Chan, Amanda H; Zhang, Bin; Karlin-Neumann, George A; Conklin, Bruce R
2016-03-31
Precise genome-editing relies on the repair of sequence-specific nuclease-induced DNA nicking or double-strand breaks (DSBs) by homology-directed repair (HDR). However, nonhomologous end-joining (NHEJ), an error-prone repair, acts concurrently, reducing the rate of high-fidelity edits. The identification of genome-editing conditions that favor HDR over NHEJ has been hindered by the lack of a simple method to measure HDR and NHEJ directly and simultaneously at endogenous loci. To overcome this challenge, we developed a novel, rapid, digital PCR-based assay that can simultaneously detect one HDR or NHEJ event out of 1,000 copies of the genome. Using this assay, we systematically monitored genome-editing outcomes of CRISPR-associated protein 9 (Cas9), Cas9 nickases, catalytically dead Cas9 fused to FokI, and transcription activator-like effector nuclease at three disease-associated endogenous gene loci in HEK293T cells, HeLa cells, and human induced pluripotent stem cells. Although it is widely thought that NHEJ generally occurs more often than HDR, we found that more HDR than NHEJ was induced under multiple conditions. Surprisingly, the HDR/NHEJ ratios were highly dependent on gene locus, nuclease platform, and cell type. The new assay system, and our findings based on it, will enable mechanistic studies of genome-editing and help improve genome-editing technology.
Detection of BRCA1 gross rearrangements by droplet digital PCR.
Preobrazhenskaya, Elena V; Bizin, Ilya V; Kuligina, Ekatherina Sh; Shleykina, Alla Yu; Suspitsin, Evgeny N; Zaytseva, Olga A; Anisimova, Elena I; Laptiev, Sergey A; Gorodnova, Tatiana V; Belyaev, Alexey M; Imyanitov, Evgeny N; Sokolenko, Anna P
2017-10-01
Large genomic rearrangements (LGRs) constitute a significant share of pathogenic BRCA1 mutations. Multiplex ligation-dependent probe amplification (MLPA) is a leading method for LGR detection; however, it is entirely based on the use of commercial kits, includes a relatively time-consuming hybridization step, and is not convenient for large-scale screening of recurrent LGRs. We developed and validated a droplet digital PCR (ddPCR) assay that covers the entire coding region of the BRCA1 gene and is capable of precisely quantitating the copy number of each exon. 141 breast cancer (BC) patients, who demonstrated evident clinical features of hereditary BC but turned out to be negative for founder BRCA1/2 mutations, were subjected to LGR analysis. Four patients with an LGR were identified: three cases of exon 8 deletion and one woman carrying a deletion of exons 5-7. Excellent concordance with the MLPA test was observed. Exon 8 copy number was tested in an additional 720 BC and 184 ovarian cancer (OC) high-risk patients, and another four cases with the deletion were revealed; MLPA re-analysis demonstrated that the exon 8 loss was part of a larger genetic alteration in two cases, while the remaining two patients had an isolated defect of exon 8. Long-range PCR and next generation sequencing of DNA samples carrying the exon 8 deletion revealed two types of recurrent LGRs. Droplet digital PCR is a reliable tool for the detection of large genomic rearrangements.
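In ddPCR assays of this kind, per-exon copy number follows from Poisson-corrected droplet counts for the target exon relative to a reference locus known to be present at two copies. A minimal sketch of that calculation is shown below; the droplet counts and the 0.85 nL droplet volume are illustrative assumptions, not values from this study.

    import math

    def ddpcr_copies_per_nl(positive, total, droplet_volume_nl=0.85):
        """Copies per nanoliter from droplet counts via the Poisson correction."""
        lam = -math.log(1.0 - positive / total)   # mean copies per droplet
        return lam / droplet_volume_nl

    # Hypothetical droplet counts for a BRCA1 exon and a two-copy reference locus.
    exon_conc = ddpcr_copies_per_nl(positive=1200, total=15000)
    ref_conc = ddpcr_copies_per_nl(positive=2400, total=15000)

    copy_number = 2.0 * exon_conc / ref_conc
    print(round(copy_number, 2))   # ~1.0 here, consistent with a heterozygous deletion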
Micro computed tomography (CT) scanned anatomical gateway to insect pest bioinformatics
USDA-ARS's Scientific Manuscript database
An international collaboration to establish an interactive Digital Video Library for a Systems Biology Approach to study the Asian citrus Psyllid and psyllid genomics/proteomics interactions is demonstrated. Advances in micro-CT, digital computed tomography (CT) scan uses X-rays to make detailed pic...
Detection of SEA-type α-thalassemia in embryo biopsies by digital PCR.
Lee, Ta-Hsien; Hsu, Ya-Chiung; Chang, Chia Lin
2017-08-01
Accurate and efficient pre-implantation genetic diagnosis (PGD) based on the analysis of single or oligo-cells is needed for timely identification of embryos that are affected by deleterious genetic traits in in vitro fertilization (IVF) clinics. Polymerase chain reaction (PCR) is the backbone of modern genetic diagnosis, and a spectrum of PCR-based techniques have been used to detect various thalassemia mutations in prenatal diagnosis (PND) and PGD. Among the thalassemias, SEA-type α-thalassemia is the most common variety found in Asia and can lead to Bart's hydrops fetalis and serious maternal complications. To formulate an efficient digital PCR assay for clinical diagnosis of SEA-type α-thalassemia in cultured embryos, we conducted a pilot study to detect the α-globin and SEA-type deletion alleles in blastomere biopsies with a highly sensitive microfluidics-based digital PCR method. Genomic DNA from embryo biopsy samples was extracted, and crude DNA extracts were first amplified by a conventional PCR procedure followed by a nested PCR reaction with primers and probes designed for digital PCR amplification. Analysis of the microfluidics-based PCR reactions showed that robust signals for the normal α-globin and SEA-type deletion alleles, together with an internal control gene, can be routinely generated from crude embryo biopsies after a 10⁶-fold dilution of the primary PCR products. The SEA-type deletion in cultured embryos can thus be sensitively diagnosed with the digital PCR procedure in clinics. The adoption of this robust PGD method could prevent, in a timely manner, the implantation of IVF embryos that are destined to develop Bart's hydrops fetalis. The results also help inform future development of a standard digital PCR procedure for cost-effective PGD of α-thalassemia in a standard IVF clinic. Copyright © 2017. Published by Elsevier B.V.
Detection of MET Gene Copy Number in Cancer Samples Using the Droplet Digital PCR Method.
Zhang, Yanni; Tang, En-Tzu; Du, Zhiqiang
2016-01-01
The analysis of MET gene copy number (CN) has been considered a potential biomarker to predict the response to MET-targeted therapies in various cancers. However, the current standard methods to determine MET CN, SNP 6.0 arrays for the genomic DNA of cancer cell lines and fluorescence in situ hybridization (FISH) for tumor models, are costly, require advanced technical skills, and result in relatively subjective judgments. Therefore, we employed a novel method, droplet digital PCR (ddPCR), to determine the MET gene copy number with high accuracy and precision. Genomic DNA from cancer cell lines or tumor models was tested and compared with the MET gene CN and MET/CEN-7 ratio determined by SNP 6.0 and FISH, respectively. In cell lines, the linear association between the MET CN detected by ddPCR and by SNP 6.0 was strong (Pearson correlation = 0.867). In tumor models, the MET CN detected by ddPCR was significantly different between the MET gene amplification and non-amplification groups defined by FISH (mean: 15.4 vs 2.1; P = 0.044). When MET gene amplification was defined as a ddPCR MET CN >5.5, the concordance rate between ddPCR and FISH was 98.0%, and Cohen's kappa coefficient was 0.760 (95% CI, 0.498-1.000; P <0.001). These results demonstrate that the ddPCR method can quantify the MET gene copy number with high precision and accuracy compared with the SNP 6.0 and FISH results in cancer cell lines and tumor samples, respectively.
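The agreement statistics quoted above (concordance rate and Cohen's kappa) can be computed directly from the 2x2 table of ddPCR versus FISH amplification calls. The short sketch below shows the calculation; the counts are hypothetical and chosen only to illustrate the formula.

    def cohens_kappa(tp, fp, fn, tn):
        """Cohen's kappa for agreement between two binary calls (e.g. ddPCR vs FISH)."""
        n = tp + fp + fn + tn
        po = (tp + tn) / n                                            # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical table: rows = ddPCR amplified / not, columns = FISH amplified / not.
    tp, fp, fn, tn = 8, 1, 0, 41
    concordance = (tp + tn) / (tp + fp + fn + tn)
    print(concordance, cohens_kappa(tp, fp, fn, tn))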
Image correlation method for DNA sequence alignment.
Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván
2012-01-01
The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques; among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel with a fixed gray intensity. Query and known database sequences are coded into their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: the query becomes the object and the database the scene. An image correlation process is carried out to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper presents an initial research stage in which results were obtained "digitally" by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes, each represented by a 100 x 100 image (in total, a one-million-base-pair database), were considered for the image correlation analysis. The correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST as the number of mutations increased. However, the digital correlation process was about a hundred times slower than BLAST. We are currently starting an initiative to evaluate the speed of a real experimental optical correlator; by doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, the digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
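The core of the approach, representing sequences as gray-level images and locating the query in the database image by cross-correlation, can be sketched in a few lines of Python. The gray levels and sequences below are illustrative assumptions; the paper's exact intensity mapping and optical implementation are not reproduced here.

    import numpy as np
    from numpy.fft import fft2, ifft2

    GRAY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}   # illustrative gray levels

    def seq_to_image(seq, width):
        """Pack a nucleotide string row by row into a 2-D gray-level image."""
        pixels = [GRAY[b] for b in seq]
        pixels += [0.0] * (-len(pixels) % width)          # pad the last row
        return np.array(pixels).reshape(-1, width)

    def best_match(scene, obj):
        """FFT-based cross-correlation; the peak marks the best match position."""
        padded = np.zeros_like(scene)
        padded[:obj.shape[0], :obj.shape[1]] = obj
        corr = np.real(ifft2(fft2(scene) * np.conj(fft2(padded))))
        return np.unravel_index(np.argmax(corr), corr.shape)

    scene = seq_to_image("ACGT" * 2500, width=100)        # a 100 x 100 "database" image
    query = seq_to_image("ACGTACGTACGTACGTACGT", width=10)
    print(best_match(scene, query))

A production implementation would normalize the correlation (or use phase correlation) so that peak height is comparable across queries of different length and composition.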
solveME: fast and reliable solution of nonlinear ME models.
Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O
2016-09-22
Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than the corresponding metabolic reconstructions (M models), they are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45% faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60x speedup) via solver warm-starts. Finally, we employ these tools to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the widespread adoption of ME models by researchers in these fields.
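The binary-search baseline mentioned above treats the ME model as a feasibility problem at a fixed growth rate and bisects on that rate until the target precision is reached. The sketch below shows only this baseline; the feasibility oracle is a hypothetical stand-in for solving the model's subproblem at a fixed growth rate.

    def max_growth_rate(is_feasible, lo=0.0, hi=2.0, rel_tol=1e-6):
        """Binary search for the largest mu at which the fixed-mu problem is feasible."""
        while (hi - lo) > rel_tol * max(hi, 1e-12):
            mid = 0.5 * (lo + hi)
            if is_feasible(mid):
                lo = mid   # feasible: the growth rate can be pushed higher
            else:
                hi = mid   # infeasible: back off
        return lo

    # Hypothetical oracle standing in for "solve the ME model at growth rate mu".
    feasible = lambda mu: mu <= 0.7321
    print(max_growth_rate(feasible))   # converges to ~0.7321

Each oracle call is itself an expensive optimization, which is why avoiding the bisection loop, as solveME does, pays off.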
USDA-ARS's Scientific Manuscript database
A comprehensive transcriptome survey, or “Gene Atlas,” provides information essential for a complete understanding of the genomic biology of an organism. Using a digital gene expression approach, we developed a Gene Atlas of RNA abundance in 92 adult, juvenile and fetal cattle tissues. The samples...
Otaño-Rivera, Víctor; Boakye, Amma; Grobe, Nadja; Almutairi, Mohammed M; Kursan, Shams; Mattis, Lesan K; Castrop, Hayo; Gurley, Susan B; Elased, Khalid M; Boivin, Gregory P; Di Fulvio, Mauricio
2017-04-01
Genotyping of genetically engineered mice is necessary for the effective design of breeding strategies and identification of mutant mice. This process relies on the identification of DNA markers introduced into the genomic sequences of mice, a task usually performed using the polymerase chain reaction (PCR). The limiting step in genotyping is isolating pure genomic DNA. Isolation of mouse DNA for genotyping typically involves painful procedures such as tail snip, digit removal, or ear punch. Although harvesting hair has previously been proposed as a source of genomic DNA, perceived complications, low DNA yields, and fear of contamination have led to reluctance to use this non-painful technique. In this study we developed a simple, economical, and efficient strategy using Chelex® resins to purify genomic DNA from the hair roots of mice that is suitable for genotyping. Upon comparison with standard DNA purification using a commercially available kit, we demonstrate that Chelex® efficiently and consistently purifies high-quality DNA from hair roots, minimizing pain, shortening time, and reducing the costs associated with determining accurate genotypes. Therefore, the use of hair roots combined with Chelex® is a reliable and more humane alternative for DNA genotyping.
Evolution of biological complexity
Adami, Christoph; Ofria, Charles; Collier, Travis C.
2000-01-01
To make a case for or against a trend in the evolution of complexity in biological evolution, complexity needs to be both rigorously defined and measurable. A recent information-theoretic (but intuitively evident) definition identifies genomic complexity with the amount of information a sequence stores about its environment. We investigate the evolution of genomic complexity in populations of digital organisms and monitor in detail the evolutionary transitions that increase complexity. We show that, because natural selection forces genomes to behave as a natural “Maxwell Demon,” within a fixed environment, genomic complexity is forced to increase. PMID:10781045
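The complexity measure used in this line of work is, in essence, the genome length minus the summed per-site entropy measured across the population, i.e. the information the genome stores about its environment. A toy Python sketch of that calculation on hypothetical digital-organism genomes follows.

    import math
    from collections import Counter

    def genomic_complexity(population):
        """Sequence length minus summed per-site entropy (log base = alphabet size)."""
        length = len(population[0])
        alphabet = sorted(set("".join(population)))
        k = len(alphabet)
        entropy = 0.0
        for i in range(length):
            counts = Counter(seq[i] for seq in population)
            probs = [c / len(population) for c in counts.values()]
            entropy += -sum(p * math.log(p, k) for p in probs)
        return length - entropy

    # Hypothetical digital-organism genomes; the alphabet is inferred from the population.
    population = ["abcdabcdabcdabcdabcd",
                  "abcdabcdabcdabcdabcd",
                  "abceabcdabcdabcdabcd",
                  "abcdabcdabcdabcdabcd"]
    print(round(genomic_complexity(population), 3))

Sites that are conserved across the population contribute close to one full unit of complexity; highly variable sites contribute little.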
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie N. C.; Tomaszewski, John; González, Fabio A.; Madabhushi, Anant
2017-04-01
With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of presence and extent of breast cancer by a pathologist is critical for patient management for tumor staging and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols and different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple different sites, and scanners, and then independently validating on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62% and a negative predictive value of 96.77% in terms of pixel-by-pixel evaluation compared to manually annotated regions of invasive ductal carcinoma.
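The Dice coefficient, positive predictive value, and negative predictive value reported above are pixel-wise summaries of how the predicted tumor mask overlaps the manual annotation. The sketch below computes all three from small hypothetical binary masks standing in for whole-slide segmentations.

    import numpy as np

    def pixel_metrics(pred, truth):
        """Dice, PPV and NPV from binary prediction and ground-truth masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth)
        dice = 2 * tp / (2 * tp + fp + fn)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        return dice, ppv, npv

    # Tiny hypothetical masks; real masks cover millions of whole-slide pixels.
    pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    truth = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
    print(pixel_metrics(pred, truth))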
Tatematsu, Tsutomu; Suzuki, Ayumi; Oda, Risa; Sakane, Tadashi; Kawano, Osamu; Haneda, Hiroshi; Moriyama, Satoru; Sasaki, Hidefumi; Nakanishi, Ryoichi
2017-01-01
Background A gatekeeper T790M mutation is thought to cause resistance to epidermal growth factor receptor tyrosine kinase inhibitor (EGFR-TKI) treatment. The detection of a second mutation is important for planning the next therapy when patients acquire resistance to first-line EGFR-TKI treatment. Methods We used a competitive allele-specific polymerase chain reaction (CAST-PCR) to analyze the incidence and clinical significance of T790M mutations in 153 lung adenocarcinomas with EGFR-activating mutations. To increase the sensitivity and specificity of T790M detection, we subjected 20 of the 153 cases to digital PCR. The genomic DNAs were extracted from frozen, surgically resected tumor tissue specimens. Results CAST-PCR detected T790M mutations in 45 (29.4%) of the 153 cases. The analytical sensitivity in the detection of T790M mutations was 0.13-2.65% (average 0.27%, median 0.20%). In contrast, digital PCR detected T790M mutations in 8 (40%) of the 20 cases. Conclusions Our study shows that the pretreatment incidence of the T790M mutation was lower than that reported in previous studies. Before pretreatment EGFR T790M mutation testing can be used clinically, the appropriate detection methods and tissue preservation conditions need to be clarified. PMID:28932544
Bhat, Somanath; Polanowski, Andrea M; Double, Mike C; Jarman, Simon N; Emslie, Kerry R
2012-01-01
Recent advances in nanofluidic technologies have enabled the use of Integrated Fluidic Circuits (IFCs) for high-throughput Single Nucleotide Polymorphism (SNP) genotyping (GT). In this study, we implemented and validated a relatively low cost nanofluidic system for SNP-GT with and without Specific Target Amplification (STA). As proof of principle, we first validated the effect of input DNA copy number on genotype call rate using well-characterised, digital PCR (dPCR)-quantified human genomic DNA samples, and then applied the validated method to genotype 45 SNPs in the nuclear genome of the humpback whale, Megaptera novaeangliae. When STA was not incorporated, for a homozygous human DNA sample, reaction chambers containing on average 9 to 97 copies showed a 100% call rate and accuracy. Below 9 copies, the call rate decreased, and at one copy it was 40%. For a heterozygous human DNA sample, the call rate decreased from 100% to 21% when the predicted copies per reaction chamber decreased from 38 copies to one copy, and the tightness of genotype clusters on a scatter plot also decreased. In contrast, when the same samples were subjected to STA prior to genotyping, a call rate and call accuracy of 100% were achieved. Our results demonstrate that low input DNA copy number affects the quality of the data generated, in particular for a heterozygous sample. As with human genomic DNA, a call rate and call accuracy of 100% were achieved with whale genomic DNA samples following multiplex STA using either 15 or 45 SNP-GT assays. These calls were 100% concordant with their true genotypes determined by an independent method, suggesting that the nanofluidic system is a reliable platform for genotyping genomic sequences derived from biological tissue with high call rates, accuracy, and concordance.
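The stronger sensitivity of heterozygous calls to input copy number has a straightforward sampling explanation: a reaction chamber can only be called heterozygous if it receives at least one copy of each allele. A simple binomial model (an illustrative assumption, not a model from the study) makes this concrete.

    def prob_both_alleles(copies):
        """Probability that `copies` randomly drawn genome copies of a heterozygous
        locus include both alleles, assuming each copy is allele A or B with p = 0.5."""
        if copies < 1:
            return 0.0
        return 1.0 - 2.0 * 0.5 ** copies

    for n in (1, 2, 5, 9, 38):
        print(n, round(prob_both_alleles(n), 3))
    # 1 copy -> 0.0, 9 copies -> ~0.996, 38 copies -> ~1.0

This mirrors the observed trend that heterozygous call rates degrade sharply as the number of copies per chamber drops toward one.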
Zhu, Pengyu; Wang, Chenguang; Huang, Kunlun; Luo, Yunbo; Xu, Wentao
2016-03-18
Digital polymerase chain reaction (PCR) has developed rapidly since it was first reported in the 1990s. However, pretreatments are often required during preparation for digital PCR, which can increase operational error, and single-plex amplification of the target and reference genes may introduce uncertainty due to differing reaction volumes and matrix effects. In the current study, a quantitative detection system based on pretreatment-free duplex chamber digital PCR was developed. The dynamic range, limit of quantitation (LOQ), sensitivity and specificity were evaluated using the GA21 event as the experimental object. Moreover, to determine the factors that may influence the stability of the duplex system, we evaluated whether pretreatments, the primary and secondary structures of the probes, and SNP effects influence the detection. The results showed that the LOQ was 0.5% and the sensitivity was 0.1%. We also found that genome digestion and single nucleotide polymorphism (SNP) sites affect the detection results, whereas nonspecific hybridization between different probes had little effect. This indicates that the detection system is suited to both chamber-based and droplet-based digital PCR. In conclusion, we have provided a simple and flexible way of achieving absolute quantitation of genetically modified organism (GMO) genome samples using commercial digital PCR detection systems.
New in-depth rainbow trout transcriptome reference and digital atlas of gene expression
USDA-ARS's Scientific Manuscript database
Sequencing the rainbow trout genome is underway and a transcriptome reference sequence is required to help in genome assembly and gene discovery. Previously, we reported a transcriptome reference sequence using a 19X coverage of 454-pyrosequencing data. Although this work added a great wealth of ann...
Teng, Jade L L; Tang, Ying; Huang, Yi; Guo, Feng-Biao; Wei, Wen; Chen, Jonathan H K; Wong, Samson S Y; Lau, Susanna K P; Woo, Patrick C Y
2016-01-01
Owing to the highly similar phenotypic profiles, protein spectra and 16S rRNA gene sequences observed between three pairs of Tsukamurella species (Tsukamurella pulmonis/Tsukamurella spongiae, Tsukamurella tyrosinosolvens/Tsukamurella carboxydivorans, and Tsukamurella pseudospumae/Tsukamurella sunchonensis), we hypothesized that the six Tsukamurella species may have been misclassified and that there may be only three Tsukamurella species. In this study, we characterized the type strains of these six Tsukamurella species by traditional DNA-DNA hybridization (DDH) and, after genome sequencing, by "digital DDH" to determine their exact taxonomic positions. Traditional DDH showed 81.2 ± 0.6% to 99.7 ± 1.0% DNA-DNA relatedness between the two Tsukamurella species in each of the three pairs, which is above the threshold for same-species designation. "Digital DDH" based on the Genome-To-Genome Distance Calculator and Average Nucleotide Identity for the three pairs showed similarity values in the ranges of 82.3-92.9% and 98.1-99.1%, respectively, in line with the results of traditional DDH. Based on this evidence and according to Rules 23a and 42 of the Bacteriological Code, we propose that T. spongiae Olson et al. 2007 be reclassified as a later heterotypic synonym of T. pulmonis Yassin et al. 1996, T. carboxydivorans Park et al. 2009 as a later heterotypic synonym of T. tyrosinosolvens Yassin et al. 1997, and T. sunchonensis Seong et al. 2008 as a later heterotypic synonym of T. pseudospumae Nam et al. 2004. With the advancement of genome sequencing technologies, classification of bacterial species can be achieved more readily by "digital DDH" than by traditional DDH.
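The species decisions in analyses like this one rest on the commonly cited cutoffs of roughly 70% for (digital) DDH and roughly 95-96% for ANI. The helper below simply applies those conventional thresholds; they are general rules of thumb rather than values defined in this study.

    def same_species(ddh_percent=None, ani_percent=None,
                     ddh_cutoff=70.0, ani_cutoff=95.0):
        """Apply conventional species cutoffs (~70% dDDH, ~95-96% ANI).
        Returns True/False if at least one metric is given, otherwise None."""
        calls = []
        if ddh_percent is not None:
            calls.append(ddh_percent >= ddh_cutoff)
        if ani_percent is not None:
            calls.append(ani_percent >= ani_cutoff)
        return all(calls) if calls else None

    # Lower bounds of the digital DDH and ANI ranges reported above for the three pairs.
    print(same_species(ddh_percent=82.3, ani_percent=98.1))   # True -> same species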
Large scale digital atlases in neuroscience
NASA Astrophysics Data System (ADS)
Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.
2014-03-01
Imaging in neuroscience has revolutionized our current understanding of brain structure and architecture and, increasingly, of brain function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging, and gene expression data, modern digital atlases combine probabilistic and multimodal techniques with sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
Genomic diversity within the haloalkaliphilic genus Thioalkalivibrio
Ahn, Anne-Catherine; Meier-Kolthoff, Jan P.; Overmars, Lex; ...
2017-03-10
Thioalkalivibrio is a genus of obligately chemolithoautotrophic, haloalkaliphilic, sulfur-oxidizing bacteria. Its habitat is soda lakes, dual-extreme environments with a pH range from 9.5 to 11 and salt concentrations up to saturation. More than 100 strains of this genus have been isolated from various soda lakes all over the world, but only ten species have been effectively described so far. Therefore, the assignment of the remaining strains to either existing or novel species is important and will further elucidate their genomic diversity as well as give a better general understanding of this genus. Recently, the genomes of 76 Thioalkalivibrio strains were sequenced. On these, we applied different methods including (i) 16S rRNA gene sequence analysis, (ii) Multilocus Sequence Analysis (MLSA) based on eight housekeeping genes, (iii) Average Nucleotide Identity based on BLAST (ANIb) and MUMmer (ANIm), (iv) tetranucleotide frequency correlation coefficients (TETRA), (v) digital DNA:DNA hybridization (dDDH), and (vi) nucleotide- and amino acid-based Genome BLAST Distance Phylogeny (GBDP) analyses. We detected a high genomic diversity, revealing 15 new "genomic" species and 16 new "genomic" subspecies in addition to the ten already described species. Phylogenetic and phylogenomic analyses showed that the genus is not monophyletic, because four strains were clearly separated from the other Thioalkalivibrio by type strains from other genera; it is therefore recommended to classify the latter group as a novel genus. The biogeographic distribution of Thioalkalivibrio suggested that the different "genomic" species can be classified as candidate disjunct or candidate endemic species. This study is a detailed genome-based classification and identification of members of the genus Thioalkalivibrio. However, future phenotypical and chemotaxonomical studies will be needed for a full species description of this genus.
Qi, Xiaoxiao; Wu, Jun; Wang, Lifen; Li, Leiting; Cao, Yufen; Tian, Luming; Dong, Xingguang; Zhang, Shaoling
2013-10-23
'Kuerlexiangli' (Pyrus sinkiangensis Yu), a native pear of Xinjiang, China, is an important agricultural fruit and a primary export to the international market. However, fruit with persistent calyxes suffer in shape and quality. Although several studies have examined the physiological aspects of the calyx abscission process, the underlying molecular mechanisms remain unknown. To better understand the molecular basis of calyx abscission, materials at three critical stages of regulation, treated with 6000 × Flusilazole plus 300 × PBO (calyx-abscising treatment) or with 50 mg·L⁻¹ GA3 (calyx-persisting treatment), were collected and cDNA fragments were sequenced using digital transcript abundance measurements to identify candidate genes. Digital transcript abundance measurements were performed using high-throughput Illumina GAII sequencing on seven samples collected at three important stages of the calyx abscission process under chemical treatments promoting calyx abscission or persistence. Altogether, more than 251,123,845 high-quality reads were obtained, with approximately 8.0 M raw data for each library. Between 69.85% and 71.90% of the clean reads in the digital transcript abundance measurements could be mapped to the pear genome database. There were 12,054 differentially expressed genes with Gene Ontology (GO) terms and associated with 251 Kyoto Encyclopedia of Genes and Genomes (KEGG)-defined pathways. The differentially expressed genes correlated with calyx abscission were mainly involved in photosynthesis, plant hormone signal transduction, cell wall modification, transcriptional regulation, and carbohydrate metabolism. Furthermore, candidate calyx abscission-specific genes, e.g. the Inflorescence deficient in abscission gene, were identified. Quantitative real-time PCR was used to confirm the digital transcript abundance measurement results. We identified candidate genes that show highly dynamic changes in expression during the calyx abscission process. These genes are potential targets for future functional characterization and should be valuable for exploring the mechanisms of calyx abscission, and eventually for developing methods based on small molecule application to induce calyx abscission in fruit production.
Genetics Home Reference: oral-facial-digital syndrome
Development of NIST standard reference material 2373: Genomic DNA standards for HER2 measurements.
He, Hua-Jun; Almeida, Jamie L; Lund, Steve P; Steffen, Carolyn R; Choquette, Steve; Cole, Kenneth D
2016-06-01
NIST standard reference material (SRM) 2373 was developed to improve the measurements of the HER2 gene amplification in DNA samples. SRM 2373 consists of genomic DNA extracted from five breast cancer cell lines with different amounts of amplification of the HER2 gene. The five components are derived from the human cell lines SK-BR-3, MDA-MB-231, MDA-MB-361, MDA-MB-453, and BT-474. The certified values are the ratios of the HER2 gene copy numbers to the copy numbers of selected reference genes DCK, EIF5B, RPS27A, and PMM1. The ratios were measured using quantitative polymerase chain reaction and digital PCR, methods that gave similar ratios. The five components of SRM 2373 have certified HER2 amplification ratios that range from 1.3 to 17.7. The stability and homogeneity of the reference materials were shown by repeated measurements over a period of several years. SRM 2373 is a well characterized genomic DNA reference material that can be used to improve the confidence of the measurements of HER2 gene copy number.
Medical imaging and computers in the diagnosis of breast cancer
NASA Astrophysics Data System (ADS)
Giger, Maryellen L.
2014-09-01
Computer-aided diagnosis (CAD) and quantitative image analysis (QIA) methods, i.e., computerized methods of analyzing digital breast images (mammograms, ultrasound, and magnetic resonance images), can yield novel image-based tumor and parenchyma characteristics: signatures that may ultimately contribute to the design of patient-specific breast cancer management plans. The role of QIA/CAD has been expanding beyond screening programs towards applications in risk assessment, diagnosis, prognosis, and response to therapy, as well as in data mining to discover relationships of image-based lesion characteristics with genomics and other phenotypes as they apply to disease states. These various computer-based applications are demonstrated through research examples from the Giger Lab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiley, H. S.
There comes a time in every field of science when things suddenly change. While it might not be immediately apparent that things are different, a tipping point has occurred. Biology is now at such a point. The reason is the introduction of high-throughput genomics-based technologies. I am not talking about the consequences of the sequencing of the human genome (and every other genome within reach). The change is due to new technologies that generate an enormous amount of data about the molecular composition of cells. These include proteomics, transcriptional profiling by sequencing, and the ability to globally measure microRNAs and post-translational modifications of proteins. These mountains of digital data can be mapped to a common frame of reference: the organism's genome. With the new high-throughput technologies, we can generate tens of thousands of data points from each sample. Data are now measured in terabytes, and analyzing a data set can take years. Obviously, we can't wait to interpret the data fully before the next experiment. In fact, we might never be able to even look at all of it, much less understand it. This volume of data requires sophisticated computational and statistical methods for its analysis and is forcing biologists to approach data interpretation as a collaborative venture.
Detecting Single-Nucleotide Substitutions Induced by Genome Editing.
Miyaoka, Yuichiro; Chan, Amanda H; Conklin, Bruce R
2016-08-01
The detection of genome editing is critical in evaluating genome-editing tools or conditions, but it is not an easy task to detect genome-editing events, especially single-nucleotide substitutions, without a surrogate marker. Here we introduce a procedure that significantly contributes to the advancement of genome-editing technologies. It uses droplet digital polymerase chain reaction (ddPCR) and allele-specific hydrolysis probes to detect single-nucleotide substitutions generated by genome editing via homology-directed repair (HDR). HDR events that introduce substitutions using donor DNA are generally infrequent, even with genome-editing tools, and the outcome is a difference of only one base pair among the 3 billion base pairs of the human genome. This task is particularly difficult in induced pluripotent stem (iPS) cells, in which editing events can be very rare. Therefore, the technological advances described here have implications for therapeutic genome editing and for experimental approaches to disease modeling with iPS cells. © 2016 Cold Spring Harbor Laboratory Press.
RefCNV: Identification of Gene-Based Copy Number Variants Using Whole Exome Sequencing.
Chang, Lun-Ching; Das, Biswajit; Lih, Chih-Jian; Si, Han; Camalier, Corinne E; McGregor, Paul M; Polley, Eric
2016-01-01
With rapid advances in DNA sequencing technologies, whole exome sequencing (WES) has become a popular approach for detecting somatic mutations in oncology studies. The initial intent of WES was to characterize single nucleotide variants, but it was observed that the number of sequencing reads mapping to a genomic region correlates with DNA copy number variants (CNVs). We propose a method, RefCNV, that uses a reference set to estimate the distribution of coverage for each exon. The construction of the reference set includes an evaluation of the sources of variability in the coverage distribution; we observed that the processing steps had an impact on the coverage distribution. For each exon, we compared the observed coverage with the expected normal coverage. Thresholds for determining CNVs were selected to control the false-positive error rate. RefCNV predictions correlated significantly (r = 0.86-0.96) with CNV measured by digital polymerase chain reaction for MET (7q31), EGFR (7p12), or ERBB2 (17q12) in 13 tumor cell lines. A genome-wide CNV analysis showed good overall correlation (Spearman's coefficient = 0.82) between RefCNV estimates and publicly available CNV data in the Cancer Cell Line Encyclopedia. RefCNV also showed better performance than three other CNV estimation methods in genome-wide CNV analysis.
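RefCNV's per-exon decision amounts to comparing an exon's observed coverage against the coverage distribution built from the reference set and calling a gain or loss when the observation falls outside thresholds chosen to control false positives. The sketch below is a simplified stand-in for that logic, with hypothetical coverages and crude normal-quantile thresholds rather than RefCNV's actual procedure.

    import numpy as np
    from scipy.stats import norm

    def call_exon_cnv(observed, reference_coverages, upper_q=0.999, lower_q=0.001):
        """Call gain/loss/neutral by comparing observed exon coverage with a
        reference-set distribution (simplified; not RefCNV's exact thresholds)."""
        ref = np.asarray(reference_coverages, dtype=float)
        z = (observed - ref.mean()) / ref.std(ddof=1)
        if z > norm.ppf(upper_q):
            return "gain"
        if z < norm.ppf(lower_q):
            return "loss"
        return "neutral"

    # Hypothetical normal-sample coverages for one exon, and a tumor observation.
    reference = [480, 510, 495, 530, 505, 490, 515, 500]
    print(call_exon_cnv(1450, reference))   # -> "gain"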
Meher, J K; Meher, P K; Dash, G N; Raval, M K
2012-01-01
The first step in a gene identification problem based on genomic signal processing is to convert character strings into numerical sequences. These numerical sequences are then analysed spectrally, or with digital filtering techniques, for the period-3 peaks that are present in exons (coding regions) and absent in introns (non-coding regions). In this paper, we show that single-indicator sequences can be generated by encoding schemes based on physico-chemical properties. Two new methods are proposed for generating single-indicator sequences based on hydration energy and dipole moments. The proposed methods produce high peaks at exon locations and effectively suppress false exons (intron regions with greater peaks than exon regions), resulting in a high discrimination factor, sensitivity and specificity.
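The period-3 signal these indicator sequences are designed to expose can be computed with a sliding-window discrete Fourier transform, taking the power at the N/3 frequency bin. The sketch below uses illustrative per-base values, not the hydration-energy or dipole-moment scales proposed in the paper, and a toy sequence in place of real genomic data.

    import numpy as np

    PROP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}   # illustrative encoding

    def period3_profile(seq, window=351, step=3):
        """Sliding-window power at the N/3 frequency of the encoded sequence;
        high values suggest exon-like (coding) regions."""
        x = np.array([PROP[b] for b in seq])
        powers = []
        for start in range(0, len(x) - window + 1, step):
            w = x[start:start + window]
            spectrum = np.abs(np.fft.fft(w - w.mean())) ** 2
            powers.append(spectrum[window // 3])   # bin corresponding to period 3
        return np.array(powers)

    seq = "ATGGCT" * 100 + "ATTTTTAA" * 75   # toy "exon" followed by a toy "intron"
    profile = period3_profile(seq)
    print(profile[:3], profile[-3:])          # high at the start, low at the end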
Xu, Qing; Zhu, Yazhen; Bai, Yali; Wei, Xiumin; Zheng, Xirun; Mao, Mao; Zheng, Guangjuan
2015-01-01
Background Two types of epidermal growth factor receptor (EGFR) mutations, in exon 19 and exon 21 (ex19del and L858R), are prevalent in lung cancer patients and sensitive to targeted EGFR inhibition. A resistance mutation in exon 20 (T790M) has been found to accompany drug treatment when patients relapse. These three mutations are valuable companion diagnostic biomarkers for guiding personalized treatment. Quantitative polymerase chain reaction (qPCR)-based methods have been widely used in the clinic by physicians to guide treatment decisions. The aim of this study was to evaluate the technical and clinical sensitivity and specificity of the droplet digital polymerase chain reaction (ddPCR) method in detecting the three EGFR mutations in patients with lung cancer. Methods Genomic DNA from H1975 and PC-9 cells, as well as 92 normal human blood specimens, was used to determine the technical sensitivity and specificity of the ddPCR assays. Genomic DNA from formalin-fixed, paraffin-embedded specimens from 78 Chinese patients with lung adenocarcinoma was assayed using both qPCR and ddPCR. Results The three ddPCR assays had a limit of detection of 0.02% and a wide dynamic range, from 1 to 20,000 copies per measurement. The L858R and ex19del assays had a 0% background level in both technical and clinical settings, whereas the T790M assay had a technical background of about 0.03%. The ddPCR assays were robust for correct determination of EGFR mutation status in patients, and their dynamic range appeared to be better than that of the qPCR methods; the ddPCR assay for T790M detected patient samples that the qPCR method failed to detect. About 49% of this patient cohort had EGFR mutations (L858R, 15.4%; ex19del, 29.5%; T790M, 6.4%). Two patients with the ex19del mutation also had a treatment-naïve T790M mutation. Conclusion These data suggest that the ddPCR method could be useful in the personalized treatment of patients with lung cancer. PMID:26124670
Hughesman, Curtis B; Lu, X J David; Liu, Kelly Y P; Zhu, Yuqi; Towle, Rebecca M; Haynes, Charles; Poh, Catherine F
2017-09-19
Copy number alterations (CNAs), a common genomic event during carcinogenesis, are known to affect a large fraction of the genome. Common recurrent gains or losses of specific chromosomal regions occur at frequencies such that they may be considered distinctive features of tumoral cells. Here we introduce a novel multiplexed droplet digital PCR (ddPCR) assay capable of detecting recurrent CNAs that drive tumorigenesis of oral squamous cell carcinoma. Applied to DNA extracted from oral cell lines and clinical samples of various disease stages, the assay showed good agreement between the CNAs it detected and those previously reported using comparative genomic hybridization or single nucleotide polymorphism arrays. Furthermore, we demonstrate that the ability to target specific locations of the genome permits detection of clinically relevant oncogenic events such as small, submicroscopic homozygous deletions. Additional capabilities of the multiplexed ddPCR assay include the ability to infer ploidy level, to quantify the change in copy number of target loci with high-level gains, and to simultaneously assess the status and viral load of high-risk human papillomavirus types 16 and 18. This novel multiplexed ddPCR assay may therefore have clinical value in differentiating benign oral lesions from those at risk of progressing to oral cancer.
Consistency of gene starts among Burkholderia genomes
2011-01-01
Background Evolutionary divergence in the position of the translational start site among orthologous genes can have significant functional impacts. Divergence can alter the translation rate, degradation rate, subcellular location, and function of the encoded proteins. Results Existing GenBank gene maps for Burkholderia genomes suggest that extensive divergence has occurred: 53% of ortholog sets based on GenBank gene maps had inconsistent gene start sites. However, most of these inconsistencies appear to be gene-calling errors, and evolutionary divergence was the most plausible explanation for only 17% of the ortholog sets. Correcting probable errors in the GenBank gene maps decreased the percentage of ortholog sets with inconsistent starts by 68%, increased the percentage of ortholog sets with extractable upstream intergenic regions by 32%, increased the sequence similarity of intergenic regions and predicted proteins, and increased the number of proteins with identifiable signal peptides. Conclusions Our findings highlight an emerging problem in comparative genomics: single-digit percentage errors in gene predictions can lead to double-digit percentages of inconsistent ortholog sets. The work demonstrates a simple approach to evaluating and improving the quality of gene maps. PMID:21342528
Borsu, Laetitia; Intrieri, Julie; Thampi, Linta; Yu, Helena; Riely, Gregory; Nafa, Khedoudja; Chandramohan, Raghu; Ladanyi, Marc; Arcila, Maria E
2016-11-01
Although next-generation sequencing (NGS) is a robust technology for comprehensive assessment of EGFR-mutant lung adenocarcinomas with acquired resistance to tyrosine kinase inhibitors, it may not provide sufficiently rapid and sensitive detection of the EGFR T790M mutation, the most clinically relevant resistance biomarker. Here, we describe a digital PCR (dPCR) assay for rapid T790M detection on aliquots of NGS libraries prepared for comprehensive profiling, fully maximizing broad genomic analysis on limited samples. Tumor DNAs from patients with EGFR-mutant lung adenocarcinomas and acquired resistance to epidermal growth factor receptor inhibitors were prepared for Memorial Sloan-Kettering-Integrated Mutation Profiling of Actionable Cancer Targets sequencing, a hybrid capture-based assay interrogating 410 cancer-related genes. Precapture library aliquots were used for rapid EGFR T790M testing by dPCR, and results were compared with NGS and locked nucleic acid-PCR Sanger sequencing (reference high sensitivity method). Seventy resistance samples showed 99% concordance with the reference high sensitivity method in accuracy studies. Input as low as 2.5 ng provided a sensitivity of 1% and improved further with increasing DNA input. dPCR on libraries required less DNA and showed better performance than direct genomic DNA. dPCR on NGS libraries is a robust and rapid approach to EGFR T790M testing, allowing most economical utilization of limited material for comprehensive assessment. The same assay can also be performed directly on any limited DNA source and cell-free DNA. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Tao, Xiang; Lai, Xian-Jun; Zhang, Yi-Zheng; Tan, Xue-Mei; Wang, Haiyan
2014-01-01
Background Transposable elements (TEs) are the most abundant genomic components in eukaryotes and affect the genome through replication and movement, generating genetic plasticity. Sweet potato generally reproduces asexually, and TEs may be an important genetic factor in genome reorganization. Complete identification of TEs is essential for the study of genome evolution; however, the TEs of sweet potato are still poorly understood because of its complex hexaploid genome and the difficulty of genome sequencing. The recent availability of sweet potato transcriptome databases provides an opportunity to discover and characterize the expressed TEs. Methodology/Principal Findings We first established an integrated transcriptome database by de novo assembly of four published sweet potato transcriptome databases from three cultivars in China. Using sequence-similarity searches and analysis, a total of 1,405 TEs, including 883 retrotransposons and 522 DNA transposons, were predicted and categorized. By mapping RNA-Seq raw short reads to the predicted TEs, we compared the quantities, classifications and expression activities of TEs between and within cultivars. Moreover, the differential expression of TEs in seven tissues of the Xushu 18 cultivar was analyzed using Illumina digital gene expression (DGE) tag profiling. We found that 417 TEs were expressed in one or more tissues and 107 in all seven tissues. Furthermore, the copy number of 11 transposase genes was determined to be 1-3 copies in the sweet potato genome by real-time PCR-based absolute quantification. Conclusions/Significance Our results provide a new approach to TE discovery in species with transcriptome sequences but no genome information. The identification and expression analysis of TEs provide useful TE information for sweet potato, which is valuable for further studies of TE-mediated gene mutation and optimization in asexual reproduction and contributes to elucidating the roles of TEs in genome evolution. PMID:24608103
Witte, Anna Kristina; Fister, Susanne; Mester, Patrick; Schoder, Dagmar; Rossmanith, Peter
2016-11-01
Fast and reliable pathogen detection is an important issue for human health. Since conventional microbiological methods are rather slow, there is growing interest in detection and quantification using molecular methods. Droplet digital polymerase chain reaction (ddPCR) is a relatively new PCR method for absolute and accurate quantification without external standards. Using the Listeria monocytogenes-specific prfA assay, we focused on whether the assay was directly transferable to ddPCR and whether ddPCR was suitable for samples derived from heterogeneous matrices, such as foodstuffs, that often contain inhibitors and a non-target bacterial background flora. Although the prfA assay showed suboptimal cluster formation, ddPCR quantification of L. monocytogenes from pure bacterial cultures, artificially contaminated cheese, and naturally contaminated foodstuff was satisfactory over a relatively broad dynamic range, and the results demonstrated an outstanding detection limit of one copy. While poorer DNA quality, such as that resulting from longer storage, can impair ddPCR, a prfA internal amplification control (IAC) integrated into the genome of L. monocytogenes ΔprfA gave even slightly better quantification over a broader dynamic range. Graphical Abstract: Evaluating the absolute quantification potential of ddPCR targeting Listeria monocytogenes prfA.
Digital PCR provides absolute quantitation of viral load for an occult RNA virus.
White, Richard Allen; Quake, Stephen R; Curr, Kenneth
2012-01-01
Using a multiplexed LNA-based TaqMan assay, RT-digital PCR (RT-dPCR) was performed in a prefabricated microfluidic device to monitor absolute viral load in native and immortalized cell lines, the overall precision of detection, and the absolute detection limit of an occult RNA virus, GB virus type C (GBV-C). RT-dPCR had on average a 10% lower overall coefficient of variation (CV, a measure of precision) for viral load testing than RT-qPCR and a better overall detection limit, able to quantify as few as three 5'-UTR molecules of the GBV-C genome. Two commercial high-yield in vitro transcription kits (T7 Ribomax Express by Promega and AmpliScribe T7 Flash by Epicentre) were compared for T7-mediated amplification of the GBV-C RNA genome; AmpliScribe T7 Flash outperformed T7 Ribomax Express in yield of full-length GBV-C RNA genome. THP-1 cells (a model of monocyte-derived cells) were transfected with GBV-C, yielding infectious virions that replicated over a 120 h time course, and the cells could also be infected directly. This study provides the first evidence of GBV-C replication in monocyte-derived clonal cells. Thus far, it is the only study using a microfluidic device that directly measures the viral load of a mammalian RNA virus in a digital format without the need for a standard curve. Copyright © 2011 Elsevier B.V. All rights reserved.
edgeR: a Bioconductor package for differential expression analysis of digital gene expression data.
Robinson, Mark D; McCarthy, Davis J; Smyth, Gordon K
2010-01-01
It is expected that emerging digital gene expression (DGE) technologies will overtake microarray technologies in the near future for many functional genomics applications. One of the fundamental data analysis tasks, especially for gene expression studies, involves determining whether there is evidence that counts for a transcript or exon are significantly different across experimental conditions. edgeR is a Bioconductor software package for examining differential expression of replicated count data. An overdispersed Poisson model is used to account for both biological and technical variability. Empirical Bayes methods are used to moderate the degree of overdispersion across transcripts, improving the reliability of inference. The methodology can be used even with the most minimal levels of replication, provided at least one phenotype or experimental condition is replicated. The software may have other applications beyond sequencing data, such as proteome peptide count data. The package is freely available under the LGPL licence from the Bioconductor web site (http://bioconductor.org).
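To make the modelling idea concrete, here is a hedged Python sketch (edgeR itself is an R/Bioconductor package): a single gene's counts are fit with an overdispersed negative binomial GLM including a library-size offset, and the condition coefficient is tested. The counts, library sizes, and fixed dispersion value are illustrative assumptions; edgeR additionally moderates the dispersion across genes with empirical Bayes, which is not reproduced here.

```python
# Not edgeR itself; a conceptual sketch of a negative binomial GLM test for one gene.
# Counts, library sizes, and the dispersion (alpha) are synthetic/assumed values.
import numpy as np
import statsmodels.api as sm

counts = np.array([18, 25, 21, 60, 75, 68])            # one gene, 3 control + 3 treated
lib_sizes = np.array([1.0e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6, 1.05e6])
condition = np.array([0, 0, 0, 1, 1, 1])

X = sm.add_constant(condition.astype(float))            # intercept + condition effect
model = sm.GLM(counts, X,
               family=sm.families.NegativeBinomial(alpha=0.1),  # fixed dispersion (assumed)
               offset=np.log(lib_sizes))
result = model.fit()
print("log2 fold change:", result.params[1] / np.log(2))
print("p-value for condition effect:", result.pvalues[1])
```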
Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D
2017-11-01
Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 American Association for Cancer Research (AACR).
Digital detection of endonuclease mediated gene disruption in the HIV provirus
Sedlak, Ruth Hall; Liang, Shu; Niyonzima, Nixon; De Silva Feelixge, Harshana S.; Roychoudhury, Pavitra; Greninger, Alexander L.; Weber, Nicholas D.; Boissel, Sandrine; Scharenberg, Andrew M.; Cheng, Anqi; Magaret, Amalia; Bumgarner, Roger; Stone, Daniel; Jerome, Keith R.
2016-01-01
Genome editing by designer nucleases is a rapidly evolving technology utilized in a highly diverse set of research fields. Across these fields, the T7 endonuclease mismatch cleavage assay, or Surveyor assay, is the most commonly used tool to assess genomic editing by designer nucleases. This assay, while relatively easy to perform, provides only a semi-quantitative measure of mutation efficiency that lacks sensitivity and accuracy. We demonstrate a simple droplet digital PCR assay that quickly quantitates a range of indel mutations, with detection as low as 0.02% mutant in a wild-type background and with precision (≤6% CV) and accuracy superior to either the mismatch cleavage assay or clonal sequencing when benchmarked against next-generation sequencing. The precision and simplicity of this assay will facilitate the comparison and optimization of gene editing approaches, accelerating progress in this rapidly moving field. PMID:26829887
Entomological Collections in the Age of Big Data.
Short, Andrew Edward Z; Dikow, Torsten; Moreau, Corrie S
2018-01-07
With a million described species and more than half a billion preserved specimens, the large scale of insect collections is unequaled by those of any other group. Advances in genomics, collection digitization, and imaging have begun to more fully harness the power that such large data stores can provide. These new approaches and technologies have transformed how entomological collections are managed and utilized. While genomic research has fundamentally changed the way many specimens are collected and curated, advances in technology have shown promise for extracting sequence data from the vast holdings already in museums. Efforts to mainstream specimen digitization have taken root and have accelerated traditional taxonomic studies as well as distribution modeling and global change research. Emerging imaging technologies such as microcomputed tomography and confocal laser scanning microscopy are changing how morphology can be investigated. This review provides an overview of how the realization of big data has transformed our field and what may lie in store.
Sun, Yue; Joyce, Priya Aiyar
2017-11-01
Droplet digital PCR, combined with the low-copy ACT allele as an endogenous reference gene, makes accurate and rapid estimation of gene copy number in Q208A and Q240A attainable. Sugarcane is an important cultivated crop with both high polyploidy and aneuploidy in its 10 Gb genome. Without a reference gene of known copy number, it is difficult to accurately estimate the copy number of any gene of interest by PCR-based methods in sugarcane. Recently, a new technology known as droplet digital PCR (ddPCR) has been developed that can measure the absolute amount of target DNA in a given sample. In this study, we deduced the true copy number of three endogenous genes, actin depolymerizing factor (ADF), adenine phosphoribosyltransferase (APRT), and actin (ACT), in three Australian sugarcane varieties using ddPCR, by comparing the absolute amounts of these genes with a transgene of known copy number. A single copy of the ACT allele was detected in Q208A and two copies in Q240A, but it was absent in Q117. Copy number variation was also observed for both APRT and ADF, ranging from 9 to 11 copies in the three tested varieties. Using this newly developed ddPCR method, transgene copy number was successfully determined in 19 transgenic Q208A and Q240A events using ACT as the endogenous reference gene. Our study demonstrates that ddPCR can be used for high-throughput genetic analysis and is a quick, accurate, and reliable alternative method for gene copy number determination in sugarcane. The ACT allele identified here would be a suitable endogenous reference gene for future copy number variation and dosage studies of functional genes in Q208A and Q240A.
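To illustrate the calibration logic described above, the following minimal Python sketch derives a gene's copy number from ddPCR concentrations of the target and a reference amplicon of known copy number measured in the same sample; the concentrations and locus roles are hypothetical, not the study's data.

```python
# Minimal sketch (not the authors' pipeline): estimate target-gene copy number from
# ddPCR concentrations, calibrated against a reference amplicon of known copy number
# measured in the same DNA dilution. All numbers below are hypothetical.

def copies_per_genome(target_conc, reference_conc, reference_copies_per_genome):
    """target_conc, reference_conc: ddPCR concentrations (copies/uL) from the same sample.
    reference_copies_per_genome: known copy number of the reference amplicon."""
    return target_conc / reference_conc * reference_copies_per_genome

# Example with made-up concentrations: a transgene known to be present once per genome
# is used to calibrate the ACT target.
act_copies = copies_per_genome(target_conc=1650.0, reference_conc=820.0,
                               reference_copies_per_genome=1)
print(f"Estimated ACT copies per genome: {act_copies:.1f}")  # ~2.0
```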
Glioma grading using cell nuclei morphologic features in digital pathology images
NASA Astrophysics Data System (ADS)
Reza, Syed M. S.; Iftekharuddin, Khan M.
2016-03-01
This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) an optimized cell nuclei segmentation method based on the pros and cons of existing techniques in the literature, and 2) extraction of representative features by k-means clustering of nuclei morphologic features including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low-grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
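A minimal sketch of the general pipeline (k-means over per-nucleus morphologic features, then an MLP classifier), assuming nuclei have already been segmented; the synthetic features, cluster count, and network size are assumptions, not the authors' settings.

```python
# Sketch only: k-means cluster centroids summarize each slide's nuclei morphology,
# and an MLP classifies slides into two synthetic classes. Data are random stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def slide_representation(nuclei_features, kmeans):
    """Represent one slide by the centroids of k-means clusters over its nuclei
    (area, perimeter, eccentricity, major axis length), flattened to a vector."""
    kmeans.fit(nuclei_features)
    return kmeans.cluster_centers_.ravel()

# Made-up data: 40 slides, each with 300 segmented nuclei and 4 morphologic features.
slides = [rng.gamma(shape=2.0, scale=30.0, size=(300, 4)) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)   # 0 = LGG, 1 = GBM (synthetic labels)

X = np.array([slide_representation(f, KMeans(n_clusters=3, n_init=10, random_state=0))
              for f in slides])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("10-fold CV accuracy (synthetic data):", cross_val_score(clf, X, labels, cv=10).mean())
```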
Shen, Feng; Du, Wenbin; Kreutz, Jason E; Fok, Alice; Ismagilov, Rustem F
2010-10-21
This paper describes a SlipChip to perform digital PCR in a very simple and inexpensive format. The fluidic path for introducing the sample combined with the PCR mixture was formed using elongated wells in the two plates of the SlipChip, designed to overlap during sample loading. This fluidic path was broken up by simply slipping the two plates, which removed the overlap among wells and brought each well into contact with a reservoir preloaded with oil, generating 1280 reaction compartments (2.6 nL each) simultaneously. After thermal cycling, end-point fluorescence intensity was used to detect the presence of nucleic acid. Digital PCR on the SlipChip was tested quantitatively by using Staphylococcus aureus genomic DNA. As the concentration of the template DNA in the reaction mixture was diluted, the fraction of positive wells decreased as expected from the statistical analysis. No cross-contamination was observed during the experiments. At the extremes of the dynamic range of digital PCR, the standard confidence interval determined using a normal approximation of the binomial distribution is not satisfactory. Therefore, statistical analysis based on the score method was used to establish these confidence intervals. The SlipChip provides a simple strategy to count nucleic acids by using PCR. It may find applications in research areas such as single-cell analysis, prenatal diagnostics, and point-of-care diagnostics. The SlipChip could become valuable for diagnostics, including applications in resource-limited areas, after integration with isothermal nucleic acid amplification technologies and visual readout.
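The digital counting statistics mentioned above can be sketched as follows: the template concentration is estimated from the fraction of positive compartments via Poisson statistics, and the confidence interval uses the Wilson "score" method rather than the normal approximation. Compartment counts and volume are illustrative.

```python
# Minimal sketch of digital PCR statistics: Poisson-corrected concentration from the
# positive-compartment fraction, with a Wilson score interval for that fraction.
# Well counts and the 2.6 nL compartment volume are illustrative.
import math

def digital_pcr_estimate(n_positive, n_total, well_volume_nl=2.6, z=1.96):
    p_hat = n_positive / n_total
    # Wilson score interval for the positive-well fraction.
    denom = 1 + z**2 / n_total
    center = (p_hat + z**2 / (2 * n_total)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n_total + z**2 / (4 * n_total**2)) / denom
    p_lo, p_hi = max(center - half, 0.0), min(center + half, 1.0)
    # Poisson correction: mean copies per well lambda = -ln(1 - p).
    to_conc = lambda p: -math.log(1 - p) / well_volume_nl if p < 1 else float("inf")
    return to_conc(p_hat), (to_conc(p_lo), to_conc(p_hi))  # copies per nL

conc, (lo, hi) = digital_pcr_estimate(n_positive=320, n_total=1280)
print(f"~{conc:.3f} copies/nL (95% CI {lo:.3f}-{hi:.3f})")
```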
Genome-wide and digital polymerase chain reaction epigenetic assessments of alcohol consumption.
Philibert, Robert; Dogan, Meesha; Noel, Amanda; Miller, Shelly; Krukow, Brianna; Papworth, Emma; Cowley, Joseph; Knudsen, April; Beach, Steven R H; Black, Donald
2018-04-28
The lack of readily employable biomarkers of alcohol consumption is a problem for clinicians and researchers. In 2014, we published a preliminary DNA methylation signature of heavy alcohol consumption that remits as a function of abstinence. Herein, we present new genome-wide methylation findings from a cohort of additional subjects and a meta-analysis of the data. Using DNA from 47 consecutive heavy drinkers admitted for alcohol detoxification in the context of alcohol treatment and 47 abstinent controls, we replicate the 2014 results and show that 21,221 CpG residues are differentially methylated in active heavy drinkers. Meta-analysis of all data from the 448,058 probes common to the two methylation platforms shows a similarly profound signature with confirmation of findings from other groups. Principal components analyses show that genome-wide methylation changes in response to alcohol consumption load on two major factors, with one component accounting for at least 50% of the total variance in both smokers and nonsmoking alcoholics. Using data from the arrays, we derive a panel of five methylation probes that classifies use status with a receiver operating characteristic area under the curve (AUC) of 0.97. Finally, using droplet digital polymerase chain reaction (PCR), we convert these array-based findings into a two-marker assay with an AUC of 0.95 and a four-marker set with an AUC of 0.98. We conclude that DNA methylation assessments are capable of quantifying alcohol use status and suggest that readily employable digital PCR approaches for substance consumption may find widespread use in alcohol-related research and patient care. © 2018 Wiley Periodicals, Inc.
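As a hedged illustration of the classification step only (not the authors' model or probe panel), the sketch below fits a small logistic-regression panel on synthetic methylation markers and reports a cross-validated ROC AUC.

```python
# Sketch: a small marker panel classified with logistic regression, scored by ROC AUC.
# The beta values, group shift, and sample sizes are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 94                              # e.g. 47 heavy drinkers + 47 controls
y = np.array([1] * 47 + [0] * 47)
# 5 synthetic methylation beta-value markers, shifted slightly in the "drinker" group.
X = rng.beta(2, 5, size=(n, 5)) + y[:, None] * 0.08

scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC (synthetic data):", round(roc_auc_score(y, scores), 3))
```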
Emerging methods to study bacteriophage infection at the single-cell level.
Dang, Vinh T; Sullivan, Matthew B
2014-01-01
Bacteria and their viruses (phages) are abundant across diverse ecosystems and their interactions influence global biogeochemical cycles and incidence of disease. Problematically, both classical and metagenomic methods insufficiently assess the host specificity of phages and phage-host infection dynamics in nature. Here we review emerging methods to study phage-host interaction and infection dynamics with a focus on those that offer resolution at the single-cell level. These methods leverage ever-increasing sequence data to identify virus signals from single-cell amplified genome datasets or to produce primers/probes to target particular phage-bacteria pairs (digital PCR and phageFISH), even in complex communities. All three methods enable study of phage infection of uncultured bacteria from environmental samples, while the latter also discriminates between phage-host interaction outcomes (e.g., lytic, chronic, lysogenic) in model systems. Together these techniques enable quantitative, spatiotemporal studies of phage-bacteria interactions from environmental samples of any ecosystem, which will help elucidate and predict the ecological and evolutionary impacts of specific phage-host pairings in nature.
Genetic Architecture Promotes the Evolution and Maintenance of Cooperation
Frénoy, Antoine; Taddei, François; Misevic, Dusan
2013-01-01
When cooperation has a direct cost and an indirect benefit, a selfish behavior is more likely to be selected for than an altruistic one. Kin and group selection do provide evolutionary explanations for the stability of cooperation in nature, but we still lack a full understanding of the genomic mechanisms that can prevent cheater invasion. In our study we used Aevol, an agent-based, in silico genomic platform, to evolve populations of digital organisms that compete, reproduce, and cooperate by secreting a public good for tens of thousands of generations. We found that cooperating individuals may share a phenotype, defined as the amount of public good produced, but have very different abilities to resist cheater invasion. To understand the underlying genetic differences between cooperator types, we performed bio-inspired genomics analyses of our digital organisms by recording and comparing the locations of metabolic and secretion genes, as well as the relevant promoters and terminators. Association between metabolic and secretion genes (promoter sharing, overlap via frame shift or sense-antisense encoding) was characteristic of populations with robust cooperation and was more likely to evolve when secretion was costly. In mutational analysis experiments, we demonstrated the potential evolutionary consequences of the genetic association by performing a large number of mutations and measuring their phenotypic and fitness effects. Non-cooperating mutants arising from individuals with this genetic association were more likely to carry deleterious metabolic mutations, so selection eventually eliminated such mutants from the population because of the accompanying fitness decrease. Effectively, cooperation evolved to be protected and robust to mutations through entangled genetic architecture. Our results confirm the importance of second-order selection on evolutionary outcomes, uncover an important genetic mechanism for the evolution and maintenance of cooperation, and suggest promising methods for preventing gene loss in synthetically engineered organisms. PMID:24278000
Okuma, Kazu; Yamagishi, Makoto; Yamochi, Tadanori; Firouzi, Sanaz; Momose, Haruka; Mizukami, Takuo; Takizawa, Kazuya; Araki, Kumiko; Sugamura, Kazuo; Yamaguchi, Kazunari; Watanabe, Toshiki
2014-01-01
Quantitative PCR (qPCR) for human T-lymphotropic virus 1 (HTLV-1) is useful for measuring the amount of integrated HTLV-1 proviral DNA in peripheral blood mononuclear cells. Many laboratories in Japan have developed different HTLV-1 qPCR methods. However, when six independent laboratories analyzed the proviral load of the same samples, there was a 5-fold difference in their results. To standardize HTLV-1 qPCR, preparation of a well-defined reference material is needed. We analyzed the integrated HTLV-1 genome and the internal control (IC) genes of TL-Om1, a cell line derived from adult T-cell leukemia, to confirm its suitability as a reference material for HTLV-1 qPCR. Fluorescent in situ hybridization (FISH) showed that HTLV-1 provirus was monoclonally integrated in chromosome 1 at the site of 1p13 in the TL-Om1 genome. HTLV-1 proviral genome was not transferred from TL-Om1 to an uninfected T-cell line, suggesting that the HTLV-1 proviral copy number in TL-Om1 cells is stable. To determine the copy number of HTLV-1 provirus and IC genes in TL-Om1 cells, we used FISH, digital PCR, and qPCR. HTLV-1 copy numbers obtained by these three methods were similar, suggesting that their results were accurate. Also, the ratio of the copy number of HTLV-1 provirus to one of the IC genes, RNase P, was consistent for all three methods. These findings indicate that TL-Om1 cells are an appropriate reference material for HTLV-1 qPCR. PMID:25502533
Recognizing and engineering digital-like logic gates and switches in gene regulatory networks.
Bradley, Robert W; Buck, Martin; Wang, Baojun
2016-10-01
A central aim of synthetic biology is to build organisms that can perform useful activities in response to specified conditions. The digital computing paradigm, which has proved so successful in electrical engineering, is being mapped onto synthetic biological systems to allow them to make such decisions. However, stochastic molecular processes have graded input-output functions; thus, bioengineers must select those with desirable characteristics and refine their transfer functions to build logic gates with digital-like switching behaviour. Recent efforts in genome mining and the development of programmable RNA-based switches, especially CRISPRi, have greatly increased the number of parts available to synthetic biologists. Improvements to the digital characteristics of these parts are required to enable robust, predictable design of deeply layered logic circuits. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Odegaard, Justin I; Vincent, John J; Mortimer, Stefanie; Vowles, James V; Ulrich, Bryan C; Banks, Kimberly C; Fairclough, Stephen R; Zill, Oliver A; Sikora, Marcin; Mokhtari, Reza; Abdueva, Diana; Nagy, Rebecca J; Lee, Christine E; Kiedrowski, Lesli A; Paweletz, Cloud P; Eltoukhy, Helmy; Lanman, Richard B; Chudova, Darya I; Talasaz, AmirAli
2018-04-24
Purpose: To analytically and clinically validate a circulating cell-free tumor DNA sequencing test for comprehensive tumor genotyping and demonstrate its clinical feasibility. Experimental Design: Analytic validation was conducted according to established principles and guidelines. Blood-to-blood clinical validation comprised blinded external comparison with clinical droplet digital PCR across 222 consecutive biomarker-positive clinical samples. Blood-to-tissue clinical validation comprised comparison of digital sequencing calls to those documented in the medical record of 543 consecutive lung cancer patients. Clinical experience was reported from 10,593 consecutive clinical samples. Results: Digital sequencing technology enabled variant detection down to 0.02% to 0.04% allelic fraction/2.12 copies with ≤0.3%/2.24-2.76 copies 95% limits of detection while maintaining high specificity [prevalence-adjusted positive predictive values (PPV) >98%]. Clinical validation using orthogonal plasma- and tissue-based clinical genotyping across >750 patients demonstrated high accuracy and specificity [positive percent agreement (PPA) and negative percent agreement (NPA) >99% and PPVs 92%-100%]. Clinical use in 10,593 advanced adult solid tumor patients demonstrated high feasibility (>99.6% technical success rate) and clinical sensitivity (85.9%), with high potential actionability (16.7% with FDA-approved on-label treatment options; 72.0% with treatment or trial recommendations), particularly in non-small cell lung cancer, where 34.5% of patient samples contained a directly targetable standard-of-care biomarker. Conclusions: High concordance with orthogonal clinical plasma- and tissue-based genotyping methods supports the clinical accuracy of digital sequencing across all four types of targetable genomic alterations. Digital sequencing's clinical applicability is further supported by high rates of technical success and biomarker target discovery. Clin Cancer Res; 1-11. ©2018 American Association for Cancer Research (AACR).
Möhlendick, Birte; Bartenhagen, Christoph; Behrens, Bianca; Honisch, Ellen; Raba, Katharina; Knoefel, Wolfram T; Stoecklein, Nikolas H
2013-01-01
Comprehensive genome-wide analyses of single cells have become increasingly important in cancer research but remain a technically challenging task. Here, we provide a protocol for array comparative genomic hybridization (aCGH) of single cells. The protocol is based on an established adapter-linker PCR (WGAM) and allowed us to detect copy number alterations as small as 56 kb in single cells. In addition, we report on factors influencing the success of single-cell aCGH downstream of the amplification method, including the characteristics of the reference DNA, the labeling technique, the amount of input DNA, reamplification, the aCGH resolution, and data analysis. In comparison with two other commercially available non-linear single-cell amplification methods, WGAM showed very good performance in aCGH experiments. Finally, we demonstrate that cancer cells that were processed and identified by the CellSearch® System and subsequently isolated from the CellSearch® cartridge as single cells by fluorescence-activated cell sorting (FACS) could be successfully analyzed using our WGAM-aCGH protocol. We believe that, even in the era of next-generation sequencing, our single-cell aCGH protocol will be a useful and cost-effective approach to study copy number alterations in single cells at a resolution comparable to that currently reported for single-cell digital karyotyping based on next-generation sequencing data.
Duewer, David L; Kline, Margaret C; Romsos, Erica L; Toman, Blaza
2018-05-01
The highly multiplexed polymerase chain reaction (PCR) assays used for forensic human identification perform best when used with an accurately determined quantity of input DNA. To help ensure the reliable performance of these assays, we are developing a certified reference material (CRM) for calibrating human genomic DNA working standards. To enable sharing information over time and place, CRMs must provide accurate and stable values that are metrologically traceable to a common reference. We have shown that droplet digital PCR (ddPCR) limiting dilution end-point measurements of the concentration of DNA copies per volume of sample can be traceably linked to the International System of Units (SI). Unlike values assigned using conventional relationships between ultraviolet absorbance and DNA mass concentration, entity-based ddPCR measurements are expected to be stable over time. However, the forensic community expects DNA quantity to be stated in terms of mass concentration rather than entity concentration. The transformation can be accomplished given SI-traceable values and uncertainties for the number of nucleotide bases per human haploid genome equivalent (HHGE) and the average molar mass of a nucleotide monomer in the DNA polymer. This report presents the considerations required to establish the metrological traceability of ddPCR-based mass concentration estimates of human nuclear DNA. Graphical abstract The roots of metrological traceability for human nuclear DNA mass concentration results. Values for the factors in blue must be established experimentally. Values for the factors in red have been established from authoritative source materials. HHGE stands for "haploid human genome equivalent"; there are two HHGE per diploid human genome.
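The conversion described above can be written as a short worked example: an entity concentration from ddPCR (copies per microliter) becomes a mass concentration once values are assumed for the number of base pairs per haploid human genome equivalent and the average molar mass of a base pair. The constants below are approximate and for illustration only, not the certified values.

```python
# Worked conversion sketch: ddPCR entity concentration (HHGE copies/uL) to mass
# concentration (ng/uL). BP_PER_HHGE and G_PER_MOL_BP are approximate, assumed values.
AVOGADRO = 6.02214076e23      # 1/mol
BP_PER_HHGE = 3.1e9           # approx. base pairs per haploid human genome equivalent
G_PER_MOL_BP = 618.0          # approx. average molar mass of one base pair (two nucleotides)

def ddpcr_to_mass_conc(copies_per_ul):
    """Return mass concentration in ng/uL for a given HHGE concentration in copies/uL."""
    grams_per_hhge = BP_PER_HHGE * G_PER_MOL_BP / AVOGADRO   # ~3.2e-12 g (~3.2 pg) per copy
    return copies_per_ul * grams_per_hhge * 1e9              # g -> ng

print(ddpcr_to_mass_conc(1000.0))   # 1000 copies/uL corresponds to ~3.2 ng/uL
```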
Hughesman, Curtis B; Lu, X J David; Liu, Kelly Y P; Zhu, Yuqi; Poh, Catherine F; Haynes, Charles
2016-01-01
The ability of droplet digital PCR (ddPCR) to accurately determine the concentrations of amplifiable targets makes it a promising platform for measuring copy number alterations (CNAs) in genomic biomarkers. However, its application to clinical samples, particularly formalin-fixed paraffin-embedded specimens, will require strategies to reliably determine CNAs in DNA of limited quantity and quality. When applied to cancerous tissue, those methods must also account for global genetic instability and the associated probability that the abundance(s) of one or more chosen reference loci do not represent the average ploidy of cells comprising the specimen. Here we present an experimental design strategy and associated data analysis tool that enables accurate determination of CNAs in a panel of biomarkers using multiplexed ddPCR. The method includes strategies to optimize primer and probes design to cleanly segregate droplets in the data output from reaction wells amplifying multiple independent templates, and to correct for bias from artifacts such as DNA fragmentation. We demonstrate how a panel of reference loci can be used to determine a stable CNA-neutral benchmark. These innovations, when taken together, provide a comprehensive strategy that can be used to reliably detect biomarker CNAs in DNA extracted from either frozen or FFPE tissue biopsies.
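A minimal sketch of the reference-panel idea, assuming each target locus is compared against a CNA-neutral benchmark taken as the median concentration of several reference loci from the same multiplexed ddPCR reaction; locus names and concentrations are hypothetical, and the fragmentation-bias corrections described above are not modeled.

```python
# Sketch only: copy number calls relative to the median of a reference-locus panel.
# All locus names and ddPCR concentrations below are hypothetical.
from statistics import median

def copy_number_calls(target_conc, reference_conc, normal_ploidy=2):
    """target_conc, reference_conc: dicts of locus -> ddPCR concentration (copies/uL)."""
    benchmark = median(reference_conc.values())   # CNA-neutral benchmark
    return {locus: normal_ploidy * conc / benchmark for locus, conc in target_conc.items()}

targets = {"BIOMARKER_A": 1450.0, "BIOMARKER_B": 610.0}
references = {"REF1": 980.0, "REF2": 1015.0, "REF3": 1002.0, "REF4": 995.0}
for locus, cn in copy_number_calls(targets, references).items():
    print(locus, round(cn, 2))   # e.g. ~2.9 (gain) and ~1.2 (loss)
```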
Identifying potential maternal genes of Bombyx mori using digital gene expression profiling
Xu, Pingzhen
2018-01-01
Maternal genes present in mature oocytes play a crucial role in the early development of the silkworm. Although maternal genes have been widely studied in many other species, there has been limited research in Bombyx mori. High-throughput next-generation sequencing provides a practical method for gene discovery at a genome-wide level. Herein, a transcriptome study was used to identify maternal-related genes from silkworm eggs. Unfertilized eggs from five different stages of early development were used to track changes in gene expression, and the expressed genes showed different patterns over time. Seventy-six maternal genes were annotated according to homology analysis with Drosophila melanogaster. More than half of the differentially expressed maternal genes fell into four expression patterns, and these expression patterns showed a downward trend over time. The functional annotation of these maternal genes was mainly related to transcription factor activity, growth factor activity, nucleic acid binding, RNA binding, ATP binding, and ion binding. Additionally, twenty-two gene clusters including maternal genes were identified from 18 scaffolds. Altogether, we plotted a profile of the maternal genes of Bombyx mori using a digital gene expression profiling method. This will provide a basis for research on maternal-specific signatures and improve the understanding of the early development of the silkworm. PMID:29462160
Didelot, Audrey; Kotsopoulos, Steve K; Lupo, Audrey; Pekin, Deniz; Li, Xinyu; Atochin, Ivan; Srinivasan, Preethi; Zhong, Qun; Olson, Jeff; Link, Darren R; Laurent-Puig, Pierre; Blons, Hélène; Hutchison, J Brian; Taly, Valerie
2013-05-01
Assessment of DNA integrity and quantity remains a bottleneck for high-throughput molecular genotyping technologies, including next-generation sequencing. In particular, DNA extracted from paraffin-embedded tissues, a major potential source of tumor DNA, varies widely in quality, leading to unpredictable sequencing data. We describe a picoliter droplet-based digital PCR method that enables simultaneous detection of DNA integrity and the quantity of amplifiable DNA. Using a multiplex assay, we detected 4 different target lengths (78, 159, 197, and 550 bp). Assays were validated with human genomic DNA fragmented to sizes of 170 bp to 3000 bp. The technique was validated with DNA quantities as low as 1 ng. We evaluated 12 DNA samples extracted from paraffin-embedded lung adenocarcinoma tissues. One sample contained no amplifiable DNA. The fractions of amplifiable DNA for the 11 other samples were between 0.05% and 10.1% for 78-bp fragments and ≤1% for longer fragments. Four samples were chosen for enrichment and next-generation sequencing. The quality of the sequencing data was in agreement with the results of the DNA-integrity test. Specifically, DNA with low integrity yielded sequencing results with lower levels of coverage and uniformity and had higher levels of false-positive variants. The development of DNA-quality assays will enable researchers to downselect samples or process more DNA to achieve reliable genome sequencing with the highest possible efficiency of cost and effort, as well as minimize the waste of precious samples. © 2013 American Association for Clinical Chemistry.
Takai, Erina; Totoki, Yasushi; Nakamura, Hiromi; Kato, Mamoru; Shibata, Tatsuhiro; Yachida, Shinichi
2016-01-01
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most lethal malignancies. The genomic landscape of the PDAC genome features four frequently mutated genes (KRAS, CDKN2A, TP53, and SMAD4) and dozens of candidate driver genes altered at low frequency, including potential clinical targets. Circulating cell-free DNA (cfDNA) is a promising resource to detect molecular characteristics of tumors, supporting the concept of "liquid biopsy". We retrospectively determined the mutational status of KRAS in plasma cfDNA using multiplex droplet digital PCR in 259 patients with PDAC. Furthermore, we constructed a novel modified SureSelect-KAPA-Illumina platform and an original panel of 60 genes. We then performed targeted deep sequencing of cfDNA in 48 patients who had ≥1% mutant allele frequencies of KRAS in plasma cfDNA. Droplet digital PCR detected KRAS mutations in plasma cfDNA in 63 of 107 (58.9%) patients with inoperable tumors. Importantly, potentially targetable somatic mutations were identified in 14 of 48 patients (29.2%) examined by cfDNA sequencing. Our two-step approach with plasma cfDNA, combining droplet digital PCR and targeted deep sequencing, is a feasible clinical approach. Assessment of mutations in plasma cfDNA may provide a new diagnostic tool, assisting decisions for optimal therapeutic strategies for PDAC patients.
Jelinek, Jaroslav; Liang, Shoudan; Lu, Yue; He, Rong; Ramagli, Louis S.; Shpall, Elizabeth J.; Estecio, Marcos R.H.; Issa, Jean-Pierre J.
2012-01-01
Genome-wide analysis of DNA methylation provides important information in a variety of diseases, including cancer. Here, we describe a simple method, Digital Restriction Enzyme Analysis of Methylation (DREAM), based on next-generation sequencing analysis of methylation-specific signatures created by sequential digestion of genomic DNA with the SmaI and XmaI enzymes. DREAM provides information on 150,000 unique CpG sites, of which 39,000 are in CpG islands and 30,000 are at transcription start sites of 13,000 RefSeq genes. We analyzed DNA methylation in healthy white blood cells and found methylation patterns to be remarkably uniform. Interindividual differences > 30% were observed at only 227 of 28,331 (0.8%) autosomal CpG sites. Similarly, > 30% differences were observed at only 59 sites when comparing cord and adult blood. These conserved methylation patterns contrasted with extensive changes affecting 18–40% of CpG sites in a patient with acute myeloid leukemia and in two leukemia cell lines. The method is cost-effective, quantitative (r2 = 0.93 when compared with bisulfite pyrosequencing), and reproducible (r2 = 0.997). Using 100-fold coverage, DREAM can detect differences in methylation greater than 10% or 30% with a false positive rate below 0.05 or 0.001, respectively. DREAM can be useful for quantifying epigenetic effects of environment and nutrition, correlating developmental epigenetic variation with phenotypes, understanding the epigenetics of cancer and chronic diseases, measuring the effects of drugs on DNA methylation, or deriving new biological insights into mammalian genomes. PMID:23075513
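To make the counting logic concrete, a small hedged sketch: at each site the methylation level is the fraction of reads carrying the methylation-specific signature, and two samples can be compared site by site with an exact test. The mapping of signatures to methylation states and all counts are assumptions for illustration, not the DREAM pipeline itself.

```python
# Sketch only: per-site methylation fraction from signature counts and a simple
# two-sample comparison. Read counts are synthetic.
from scipy.stats import fisher_exact

def methylation_level(methylated_reads, unmethylated_reads):
    total = methylated_reads + unmethylated_reads
    return methylated_reads / total if total else float("nan")

def compare_site(sample_a, sample_b):
    """sample_a/b: (methylated_reads, unmethylated_reads) at the same CpG site."""
    _, p = fisher_exact([list(sample_a), list(sample_b)])
    return methylation_level(*sample_a), methylation_level(*sample_b), p

# Example at ~100x coverage: 15% vs 45% methylation at one site.
print(compare_site((15, 85), (45, 55)))   # two levels and a p-value well below 0.05
```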
Opportunities and challenges for digital morphology
2010-01-01
Advances in digital data acquisition, analysis, and storage have revolutionized the work in many biological disciplines such as genomics, molecular phylogenetics, and structural biology, but have not yet found satisfactory acceptance in morphology. Improvements in non-invasive imaging and three-dimensional visualization techniques, however, permit high-throughput analyses also of whole biological specimens, including museum material. These developments pave the way towards a digital era in morphology. Using sea urchins (Echinodermata: Echinoidea), we provide examples illustrating the power of these techniques. However, remote visualization, the creation of a specialized database, and the implementation of standardized, world-wide accepted data deposition practices prior to publication are essential to cope with the foreseeable exponential increase in digital morphological data. Reviewers This article was reviewed by Marc D. Sutton (nominated by Stephan Beck), Gonzalo Giribet (nominated by Lutz Walter), and Lennart Olsson (nominated by Purificación López-García). PMID:20604956
Luo, Jun; Li, Junhua; Yang, Hang; Yu, Junping; Wei, Hongping
2017-10-01
Accurate and rapid identification of methicillin-resistant Staphylococcus aureus (MRSA) is needed to screen MRSA carriers and improve treatment. The current widely used duplex PCR methods are not able to differentiate MRSA from coexisting methicillin-susceptible S. aureus (MSSA) or other methicillin-resistant staphylococci. In this study, we aimed to develop a direct method for accurate and rapid detection of MRSA in clinical samples from open environments, such as nasal swabs. The new molecular assay is based on detecting the cooccurrence of nuc and mecA markers in a single bacterial cell by utilizing droplet digital PCR (ddPCR) with the chimeric lysin ClyH for cell lysis. The method consists of (i) dispersion of an intact single bacterium into nanoliter droplets, (ii) temperature-controlled release of genomic DNA (gDNA) by ClyH at 37°C, and (iii) amplification and detection of the markers ( nuc and mecA ) using standard TaqMan chemistries with ddPCR. Results were analyzed based on MRSA index ratios used for indicating the presence of the duplex-positive markers in droplets. The method was able to achieve an absolute limit of detection (LOD) of 2,900 CFU/ml for MRSA in nasal swabs spiked with excess amounts of Escherichia coli , MSSA, and other mecA -positive bacteria within 4 h. Initial testing of 104 nasal swabs showed that the method had 100% agreement with the standard culture method, while the normal duplex qPCR method had only about 87.5% agreement. The single-bacterium duplex ddPCR assay is rapid and powerful for more accurate detection of MRSA directly from clinical specimens. Copyright © 2017 American Society for Microbiology.
Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.
Gremba, Allison; Weinberg, Seth M
2018-05-09
We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
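For reference, the two agreement statistics used above can be computed as in the following sketch (technical error of measurement and a one-way random-effects ICC(1,1)); the paired digit measurements are synthetic.

```python
# Sketch: TEM and ICC(1,1) from repeated measurements of the same digits.
# The measurement values below are synthetic.
import numpy as np

def tem(m1, m2):
    """Technical error of measurement for two repeated measurement sessions."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

def icc_1_1(measurements):
    """ICC(1,1) from an (n_subjects x k_sessions) array, via one-way ANOVA."""
    x = np.asarray(measurements, float)
    n, k = x.shape
    grand = x.mean()
    ms_between = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

session1 = [68.1, 71.4, 65.0, 70.2, 66.8]   # second-digit lengths in mm (synthetic)
session2 = [68.4, 71.0, 65.3, 70.5, 66.5]
print("TEM (mm):", round(tem(session1, session2), 3))
print("ICC(1,1):", round(icc_1_1(np.column_stack([session1, session2])), 3))
```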
Studies on Monitoring and Tracking Genetic Resources: An Executive Summary
Garrity, George M.; Thompson, Lorraine M.; Ussery, David W.; Paskin, Norman; Baker, Dwight; Desmeth, Philippe; Schindel, D.E.; Ong, P.S.
2009-01-01
The principles underlying fair and equitable sharing of benefits derived from the utilization of genetic resources are set out in Article 15 of the UN Convention on Biological Diversity, which stipulate that access to genetic resources is subject to the prior informed consent of the country where such resources are located and to mutually agreed terms regarding the sharing of benefits that could be derived from such access. One issue of particular concern for provider countries is how to monitor and track genetic resources once they have left the provider country and enter into use in a variety of forms. This report was commissioned to provide a detailed review of advances in DNA sequencing technologies, as those methods apply to identification of genetic resources, and the use of globally unique persistent identifiers for persistently linking to data and other forms of digital documentation that is linked to individual genetic resources. While the report was written for an audience with a mixture of technical, legal, and policy backgrounds it is relevant to the genomics community as it is an example of downstream application of genomics information. PMID:21304641
Research and development of biochip technologies in Taiwan
NASA Astrophysics Data System (ADS)
Ting, Solomon J.; Chiou, Arthur E. T.
2000-07-01
Recent advancements in several genome-sequencing projects have stimulated enormous interest in microarray DNA chip technology, especially in the biomedical sciences and pharmaceutical industries. DNA chips have miniaturized conventional nucleic acid hybridizations, either by robotically spotting thousands of library cDNAs or by in situ synthesis of high-density oligonucleotides onto solid supports. These innovations have found a wide range of applications in molecular biology, especially in studying gene expression and discovering new genes from the global view of genomic analysis. The research and development of this powerful tool has also received great attention in Taiwan. In this paper, we report the current progress of our DNA chip project, along with the current status of other biochip projects in Taiwan, such as protein chips, PCR chips, electrophoresis chips, and olfactory chips. The new development of biochip technologies integrates biotechnology with semiconductor processing, micro-electro-mechanical, optoelectronic, and digital signal processing technologies. Most of these biochip technologies utilize optical detection methods for data acquisition and analysis. The strengths and advantages of the different approaches are compared and discussed in this report.
Digital Quantification of Human Eye Color Highlights Genetic Association of Three New Loci
Liu, Fan; Wollstein, Andreas; Hysi, Pirro G.; Ankra-Badu, Georgina A.; Spector, Timothy D.; Park, Daniel; Zhu, Gu; Larsson, Mats; Duffy, David L.; Montgomery, Grant W.; Mackey, David A.; Walsh, Susan; Lao, Oscar; Hofman, Albert; Rivadeneira, Fernando; Vingerling, Johannes R.; Uitterlinden, André G.; Martin, Nicholas G.; Hammond, Christopher J.; Kayser, Manfred
2010-01-01
Previous studies have successfully identified genetic variants in several genes associated with human iris (eye) color; however, they all used simplified categorical trait information. Here, we quantified continuous eye color variation into hue and saturation values using high-resolution digital full-eye photographs and conducted a genome-wide association study on 5,951 Dutch Europeans from the Rotterdam Study. Three new regions, 1q42.3, 17q25.3, and 21q22.13, were highlighted meeting the criterion for genome-wide statistically significant association. The latter two loci were replicated in 2,261 individuals from the UK and in 1,282 from Australia. The LYST gene at 1q42.3 and the DSCR9 gene at 21q22.13 serve as promising functional candidates. A model for predicting quantitative eye colors explained over 50% of trait variance in the Rotterdam Study. Overall, our data exemplify that fine phenotyping is a useful strategy for finding genes involved in complex human traits. PMID:20463881
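A minimal sketch of the quantification idea only (not the authors' image-processing pipeline): reduce segmented iris pixels from a digital photograph to average hue and saturation values; the pixel values are synthetic.

```python
# Sketch: average hue and saturation of an iris region. Pixels are synthetic RGB
# triplets; averaging hue naively ignores its circular nature, which is acceptable
# here because the synthetic patch spans a narrow color range.
import colorsys

def mean_hue_saturation(rgb_pixels):
    """rgb_pixels: iterable of (r, g, b) tuples with components in 0-255."""
    hues, sats = [], []
    for r, g, b in rgb_pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hues.append(h)
        sats.append(s)
    return sum(hues) / len(hues), sum(sats) / len(sats)

# A blue-ish synthetic iris patch.
pixels = [(70, 110, 160), (65, 105, 150), (80, 120, 170), (60, 100, 145)]
hue, saturation = mean_hue_saturation(pixels)
print(f"hue={hue:.3f}, saturation={saturation:.3f}")
```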
de Smith, Adam J; Walsh, Kyle M; Hansen, Helen M; Endicott, Alyson A; Wiencke, John K; Metayer, Catherine; Wiemels, Joseph L
2015-01-01
The extent to which heritable genetic variants can affect tumor development has yet to be fully elucidated. Tumor selection of single nucleotide polymorphism (SNP) risk alleles, a phenomenon called preferential allelic imbalance (PAI), has been demonstrated in some cancer types. We developed a novel application of digital PCR termed Somatic Mutation Allelic Ratio Test using Droplet Digital PCR (SMART-ddPCR) for accurate assessment of tumor PAI, and have applied this method to test the hypothesis that heritable SNPs associated with childhood acute lymphoblastic leukemia (ALL) may demonstrate tumor PAI. These SNPs are located at CDKN2A (rs3731217) and IKZF1 (rs4132601), genes frequently lost in ALL, and at CEBPE (rs2239633), ARID5B (rs7089424), PIP4K2A (rs10764338), and GATA3 (rs3824662), genes located on chromosomes gained in high-hyperdiploid ALL. We established thresholds of AI using constitutional DNA from SNP heterozygotes, and subsequently measured allelic copy number in tumor DNA from 19-142 heterozygote samples per SNP locus. We did not find significant tumor PAI at these loci, though CDKN2A and IKZF1 SNPs showed a trend towards preferential selection of the risk allele (p = 0.17 and p = 0.23, respectively). Using a genomic copy number control ddPCR assay, we investigated somatic copy number alterations (SCNA) underlying AI at CDKN2A and IKZF1, revealing a complex range of alterations including homozygous and hemizygous deletions and copy-neutral loss of heterozygosity, with varying degrees of clonality. Copy number estimates from ddPCR showed high agreement with those from multiplex ligation-dependent probe amplification (MLPA) assays. We demonstrate that SMART-ddPCR is a highly accurate method for investigation of tumor PAI and for assessment of the somatic alterations underlying AI. Furthermore, analysis of publicly available data from The Cancer Genome Atlas identified 16 recurrent SCNA loci that contain heritable cancer risk SNPs associated with a matching tumor type, and which represent candidate PAI regions warranting further investigation.
Peña, Arantxa; Busquets, Antonio; Gomila, Margarita; ...
2016-09-01
Pseudomonas has the highest number of species out of any genus of Gram-negative bacteria and is phylogenetically divided into several groups. The Pseudomonas putida phylogenetic branch includes at least 13 species of environmental and industrial interest, plant-associated bacteria, insect pathogens, and even some members that have been found in clinical specimens. In the context of the Genomic Encyclopedia of Bacteria and Archaea project, we present the permanent, high-quality draft genomes of the type strains of 3 taxonomically and ecologically closely related species in the Pseudomonas putida phylogenetic branch: Pseudomonas fulva DSM 17717T, Pseudomonas parafulva DSM 17004T and Pseudomonas cremoricolorata DSM 17059T. All three genomes are comparable in size (4.6-4.9 Mb), with 4,119-4,459 protein-coding genes. Average nucleotide identity based on BLAST comparisons and digital genome-to-genome distance calculations are in good agreement with experimental DNA-DNA hybridization results. The genome sequences presented here will be very helpful in elucidating the taxonomy, phylogeny and evolution of the Pseudomonas putida species complex.
Hogan, Andrew J
2014-07-01
This paper explores evolving conceptions and depictions of the human genome among human and medical geneticists during the postwar period. Historians of science and medicine have shown significant interest in the use of informational approaches in postwar genetics, which treat the genome as an expansive digital data set composed of three billion DNA nucleotides. Since the 1950s, however, geneticists have largely interacted with the human genome at the microscopically visible level of chromosomes. Mindful of this, I examine the observational and representational approaches of postwar human and medical genetics. During the 1970s and 1980s, the genome increasingly came to be understood as, at once, a discrete part of the human anatomy and a standardised scientific object. This paper explores the role of influential medical geneticists in recasting the human genome as being a visible, tangible, and legible entity, which was highly relevant to traditional medical thinking and practice. I demonstrate how the human genome was established as an object amenable to laboratory and clinical research, and argue that the observational and representational approaches of postwar medical genetics reflect, more broadly, the interdisciplinary efforts underlying the development of contemporary biomedicine.
Ontology based heterogeneous materials database integration and semantic query
NASA Astrophysics Data System (ADS)
Zhao, Shuai; Qian, Quan
2017-10-01
Materials digital data, high-throughput experiments, and high-throughput computations are regarded as three key pillars of materials genome initiatives. With the fast growth of materials data, data integration and sharing have become urgent needs and a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be performed using SPARQL. In the experiments, two well-known first-principles computational databases, OQMD and the Materials Project, are used as integration targets, demonstrating the availability and effectiveness of our method.
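A hedged sketch of the semantic-query step, using rdflib to run a SPARQL query over an integrated ontology; the ontology file, namespace, and property names are hypothetical placeholders, not the paper's actual schema.

```python
# Sketch: SPARQL over an integrated materials ontology via rdflib. The file name,
# namespace, and predicates (hasFormula, hasBandGap) are assumed placeholders.
from rdflib import Graph, Namespace

MAT = Namespace("http://example.org/materials#")   # assumed namespace

g = Graph()
g.parse("integrated_materials_ontology.ttl", format="turtle")  # assumed file

query = """
PREFIX mat: <http://example.org/materials#>
SELECT ?material ?formula ?gap WHERE {
    ?material mat:hasFormula ?formula ;
              mat:hasBandGap ?gap .
    FILTER(?gap > 1.0 && ?gap < 3.0)
}
"""

for material, formula, gap in g.query(query):
    print(material, formula, gap)
```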
Consolidation of molecular testing in clinical virology.
Scagnolari, Carolina; Turriziani, Ombretta; Monteleone, Katia; Pierangeli, Alessandra; Antonelli, Guido
2017-04-01
The development of quantitative methods for the detection of viral nucleic acids has significantly improved our ability to manage disease progression and to assess the efficacy of antiviral treatment. Moreover, major advances in molecular technologies during the last decade have allowed the identification of new host genetic markers associated with antiviral drug response and will also strongly revolutionize the way we see and perform virus diagnostics in the coming years. Areas covered: In this review, we describe the history and development of virology diagnostic methods, placing particular emphasis on the gradual evolution and recent advances toward the introduction of multiparametric platforms for syndromic diagnosis. In parallel, we outline the consolidation of viral genome quantification practice in different clinical settings. Expert commentary: More rapid, accurate, and affordable molecular technologies can be expected, with particular emphasis on emerging techniques (next-generation sequencing, digital PCR, point-of-care testing, and syndromic diagnosis) to simplify viral diagnosis in the near future.
Christen, Matthias; Deutsch, Samuel; Christen, Beat
2015-08-21
Recent advances in synthetic biology have resulted in an increasing demand for the de novo synthesis of large-scale DNA constructs. Any process improvement that enables fast and cost-effective streamlining of digitized genetic information into fabricable DNA sequences holds great promise to study, mine, and engineer genomes. Here, we present Genome Calligrapher, a computer-aided design web tool intended for whole genome refactoring of bacterial chromosomes for de novo DNA synthesis. By applying a neutral recoding algorithm, Genome Calligrapher optimizes GC content and removes obstructive DNA features known to interfere with the synthesis of double-stranded DNA and the higher order assembly into large DNA constructs. Subsequent bioinformatics analysis revealed that synthesis constraints are prevalent among bacterial genomes. However, a low level of codon replacement is sufficient for refactoring bacterial genomes into easy-to-synthesize DNA sequences. To test the algorithm, 168 kb of synthetic DNA comprising approximately 20 percent of the synthetic essential genome of the cell-cycle bacterium Caulobacter crescentus was streamlined and then ordered from a commercial supplier of low-cost de novo DNA synthesis. The successful assembly into eight 20 kb segments indicates that the Genome Calligrapher algorithm can be efficiently used to refactor difficult-to-synthesize DNA. Genome Calligrapher is broadly applicable to recode biosynthetic pathways, DNA sequences, and whole bacterial genomes, thus offering new opportunities to use synthetic biology tools to explore the functionality of microbial diversity. The Genome Calligrapher web tool can be accessed at https://christenlab.ethz.ch/GenomeCalligrapher .
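A minimal sketch in the spirit of neutral recoding, assuming the goal is to remove forbidden motifs (e.g., restriction sites that hinder synthesis or assembly) by synonymous codon swaps; the partial codon table and motif list are illustrative, and this is not the Genome Calligrapher algorithm itself.

```python
# Sketch only: scan a coding sequence in frame and swap codons for synonymous
# alternatives whenever that reduces the number of forbidden motifs present.
# SYNONYMS is a deliberately partial standard-code table; FORBIDDEN is illustrative.
SYNONYMS = {
    "GGA": ["GGT", "GGC", "GGG"], "GGT": ["GGA", "GGC", "GGG"],
    "CCG": ["CCA", "CCT", "CCC"], "CCC": ["CCG", "CCA", "CCT"],
    "CGG": ["CGT", "CGC", "CGA", "AGA", "AGG"],
    "TCG": ["TCA", "TCT", "TCC", "AGT", "AGC"],
}
FORBIDDEN = ["GGTCTC", "GAATTC"]   # e.g. BsaI and EcoRI recognition sites

def recode(cds, forbidden=FORBIDDEN):
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    for i in range(len(codons)):
        if not any(m in "".join(codons) for m in forbidden):
            break                                   # nothing left to fix
        for alt in SYNONYMS.get(codons[i], []):
            trial = codons[:i] + [alt] + codons[i + 1:]
            if (sum(m in "".join(trial) for m in forbidden)
                    < sum(m in "".join(codons) for m in forbidden)):
                codons = trial                      # accept the synonymous swap
                break
    seq = "".join(codons)
    return seq, [m for m in forbidden if m in seq]  # recoded sequence + any leftovers

new_seq, leftovers = recode("ATGGGAATTCCGTAA")  # EcoRI site spans codons 2-4
print(new_seq, leftovers)                       # e.g. ATGGGTATTCCGTAA []
```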
Jiang, Jinjin; Wang, Yue; Zhu, Bao; Fang, Tingting; Fang, Yujie; Wang, Youping
2015-01-27
Brassica includes many successfully cultivated crop species of polyploid origin, either by ancestral genome triplication or by hybridization between two diploid progenitors, displaying complex repetitive sequences and transposons. The U's triangle, which consists of three diploids and three amphidiploids, is optimal for the analysis of complicated genomes after polyploidization. Next-generation sequencing enables the transcriptome profiling of polyploids on a global scale. We examined the gene expression patterns of three diploids (Brassica rapa, B. nigra, and B. oleracea) and three amphidiploids (B. napus, B. juncea, and B. carinata) via digital gene expression analysis. In total, the libraries generated between 5.7 and 6.1 million raw reads, and the clean tags of each library were mapped to 18,547–21,995 genes of the B. rapa genome. The unambiguous tag-mapped genes in the libraries were compared. Moreover, the majority of differentially expressed genes (DEGs) were explored among diploids as well as between diploids and amphidiploids. Gene Ontology analysis was performed to functionally categorize these DEGs into different classes. Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis was performed to assign these DEGs to approximately 120 pathways, among which the metabolic pathway, biosynthesis of secondary metabolites, and peroxisomal pathway were enriched. The non-additive genes in Brassica amphidiploids were analyzed, and the results indicated that orthologous genes in polyploids are frequently expressed in a non-additive pattern. Methyltransferase genes showed differential expression patterns in Brassica species. Our results provided an understanding of the transcriptome complexity of natural Brassica species. The gene expression changes in diploids and allopolyploids may help elucidate the morphological and physiological differences among Brassica species.
Gray, B.A.; Zori, Roberto T.; McGuire, P.M.; Bonde, R.K.
2002-01-01
Detailed chromosome studies were conducted for the Florida manatee (Trichechus manatus latirostris) utilizing primary chromosome banding techniques (G- and Q-banding). Digital microscopic imaging methods were employed and a standard G-banded karyotype was constructed for both sexes. Based on chromosome banding patterns and measurements obtained in these studies, a standard karyotype and ideogram are proposed. Characterization of additional cytogenetic features of this species by supplemental chromosome banding techniques, C-banding (constitutive heterochromatin), Ag-NOR staining (nucleolar organizer regions), and DA/DAPI staining, was also performed. These studies provide detailed cytogenetic data for T. manatus latirostris, which could enhance future genetic mapping projects and interspecific and intraspecific genomic comparisons by techniques such as zoo-FISH.
Cancer-Associated Mutations in Endometriosis without Cancer
Anglesio, M.S.; Papadopoulos, N.; Ayhan, A.; Nazeran, T.M.; Noë, M.; Horlings, H.M.; Lum, A.; Jones, S.; Senz, J.; Seckin, T.; Ho, J.; Wu, R.-C.; Lac, V.; Ogawa, H.; Tessier-Cloutier, B.; Alhassan, R.; Wang, A.; Wang, Y.; Cohen, J.D.; Wong, F.; Hasanovic, A.; Orr, N.; Zhang, M.; Popoli, M.; McMahon, W.; Wood, L.D.; Mattox, A.; Allaire, C.; Segars, J.; Williams, C.; Tomasetti, C.; Boyd, N.; Kinzler, K.W.; Gilks, C.B.; Diaz, L.; Wang, T.-L.; Vogelstein, B.; Yong, P.J.; Huntsman, D.G.; Shih, I.-M.
2017-01-01
BACKGROUND Endometriosis, defined as the presence of ectopic endometrial stroma and epithelium, affects approximately 10% of reproductive-age women and can cause pelvic pain and infertility. Endometriotic lesions are considered to be benign inflammatory lesions but have cancer-like features such as local invasion and resistance to apoptosis. METHODS We analyzed deeply infiltrating endometriotic lesions from 27 patients by means of exome-wide sequencing (24 patients) or cancer-driver targeted sequencing (3 patients). Mutations were validated with the use of digital genomic methods in micro-dissected epithelium and stroma. Epithelial and stromal components of lesions from an additional 12 patients were analyzed by means of a droplet digital polymerase-chain-reaction (PCR) assay for recurrent activating KRAS mutations. RESULTS Exome sequencing revealed somatic mutations in 19 of 24 patients (79%). Five patients harbored known cancer driver mutations in ARID1A, PIK3CA, KRAS, or PPP2R1A, which were validated by Safe-Sequencing System or immunohistochemical analysis. The likelihood of driver genes being affected at this rate in the absence of selection was estimated at P = 0.001 (binomial test). Targeted sequencing and a droplet digital PCR assay identified KRAS mutations in 2 of 3 patients and 3 of 12 patients, respectively, with mutations in the epithelium but not the stroma. One patient harbored two different KRAS mutations, c.35G→T and c.35G→C, and another carried identical KRAS c.35G→A mutations in three distinct lesions. CONCLUSIONS We found that lesions in deep infiltrating endometriosis, which are associated with virtually no risk of malignant transformation, harbor somatic cancer driver mutations. Ten of 39 deep infiltrating lesions (26%) carried driver mutations; all the tested somatic mutations appeared to be confined to the epithelial compartment of endometriotic lesions. PMID:28489996
Timing performance comparison of digital methods in positron emission tomography
NASA Astrophysics Data System (ADS)
Aykac, Mehmet; Hong, Inki; Cho, Sanghee
2010-11-01
Accurate timing information is essential in positron emission tomography (PET). Recent improvements in high-speed electronics have made digital methods more attractive as alternative ways to create a time mark for an event. Two new digital methods (the mean PMT pulse model, MPPM, and the median filtered zero crossing method, MFZCM) were introduced in this work and compared to traditional methods such as digital leading edge (LE) and digital constant fraction discrimination (CFD). In addition, the performance of all four digital methods was compared to analog-based LE and CFD. The time resolution for MPPM and MFZCM was below 300 ps at sampling rates of 1.6 GS/s and above, similar to the analog-based coincidence timing results. In addition, the two digital methods were insensitive to changes in the threshold setting, which might give some improvement in system dead time.
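As a point of reference for the traditional digital methods compared above, a digital constant-fraction discriminator forms an attenuated copy of the pulse minus a delayed copy and takes the zero crossing as the time mark, interpolating between samples. The sketch below illustrates that general idea only; the fraction, delay and arming threshold are illustrative values, and the paper's MPPM and MFZCM algorithms are not reproduced here (Python):

    import numpy as np

    def digital_cfd_time(samples, fs, fraction=0.3, delay_samples=4):
        """Illustrative digital CFD: find the zero crossing of
        fraction*s(t) - s(t - delay) by linear interpolation.
        fs is the sampling rate in samples per second."""
        s = np.asarray(samples, dtype=float)
        delayed = np.zeros_like(s)
        delayed[delay_samples:] = s[:-delay_samples]
        cfd = fraction * s - delayed
        arm = 0.1 * np.max(np.abs(s))            # simple arming threshold
        for i in range(delay_samples, len(cfd) - 1):
            if abs(s[i]) > arm and cfd[i] >= 0.0 > cfd[i + 1]:
                frac = cfd[i] / (cfd[i] - cfd[i + 1])   # linear interpolation
                return (i + frac) / fs                  # time mark in seconds
        return None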
Hierarchical nucleus segmentation in digital pathology images
NASA Astrophysics Data System (ADS)
Gao, Yi; Ratner, Vadim; Zhu, Liangjia; Diprima, Tammy; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel
2016-03-01
Extracting nuclei is one of the most actively studied topics in digital pathology research. Most studies search for the nuclei (or seeds for the nuclei) directly at the finest resolution available. While such approaches exploit the richest information available, they sometimes have difficulty addressing the heterogeneity of nuclei in different tissues. In this work, we propose a hierarchical approach which starts at a lower resolution level and adaptively adjusts the parameters while progressing to finer and finer resolutions. The algorithm is tested on brain and lung cancer images from The Cancer Genome Atlas data set.
Fidler, Samantha; D'Orsogna, Lloyd; Irish, Ashley B; Lewis, Joshua R; Wong, Germaine; Lim, Wai H
2018-03-02
Structural human leukocyte antigen (HLA) matching at the eplet level can be identified by HLAMatchmaker, which requires the entry of four-digit alleles. The aim of this study was to evaluate the agreement between eplet mismatches calculated by serological and two-digit typing methods compared to high-resolution four-digit typing. In a cohort of 264 donor/recipient pairs, measurement error was assessed using intra-class correlation to confirm the absolute agreement between the number of eplet mismatches at class I (HLA-A, -B, -C) and class II loci (HLA-DQ and -DR) calculated using serological or two-digit molecular typing compared to four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches between the HLA typing methods was also determined. Intra-class correlation coefficients at class I and class II loci were 0.969 (95% confidence interval [95% CI] 0.960-0.975) and 0.926 (95% CI 0.899-0.944), respectively, for serological versus four-digit molecular typing, and 0.995 (95% CI 0.994-0.996) and 0.993 (95% CI 0.991-0.995), respectively, for two-digit versus four-digit molecular typing. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches at class I and II loci was 4% and 16% for serological versus four-digit molecular typing methods, and 0% and 2% for two-digit versus four-digit molecular typing methods, respectively. In this small, predominantly Caucasian population, there was a high level of agreement in the number of eplet mismatches calculated using two-digit compared to four-digit molecular HLA typing, and a lower level of agreement between serology and four-digit typing, suggesting that two-digit typing may be sufficient for determining eplet mismatch load in kidney transplantation.
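The absolute-agreement intra-class correlation used above corresponds to ICC(2,1) from a two-way ANOVA over pairs and typing methods. A minimal numpy sketch of that calculation is shown below; the input array and the example variable names (serology_eplets, four_digit_eplets) are hypothetical placeholders, not data from the study (Python):

    import numpy as np

    def icc_absolute_agreement(x):
        """ICC(2,1), absolute agreement, single measures.
        x: 2-D array, rows = donor/recipient pairs, columns = typing methods
        (each cell an eplet mismatch count)."""
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)
        col_means = x.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between pairs
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between methods
        resid = x - row_means[:, None] - col_means[None, :] + grand
        mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # e.g. icc_absolute_agreement(np.column_stack([serology_eplets, four_digit_eplets]))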
Methods for applying accurate digital PCR analysis on low copy DNA samples.
Whale, Alexandra S; Cowen, Simon; Foy, Carole A; Huggett, Jim F
2013-01-01
Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed, and consequently additional strategies such as performing multiplex reactions or pre-amplification may need to be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data demonstrate that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification, thereby negating the need for this additional step. Our findings also highlight the technical differences between different template types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA.
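The precision arguments above rest on the standard Poisson conversion from the fraction of positive partitions to a copy number estimate. A minimal sketch of that calculation follows; the partition volume used is only an example and is not taken from the study (Python):

    import math

    def dpcr_concentration(positive, total, partition_volume_nl=0.85):
        """Poisson estimate of target concentration from a digital PCR run.
        positive / total: counts of positive and total partitions.
        partition_volume_nl: assumed partition volume in nanolitres
        (0.85 nl is typical of some droplet systems; used here only as an example)."""
        p = positive / total
        lam = -math.log(1.0 - p)                           # mean copies per partition
        copies_total = lam * total                         # expected copies across all partitions
        conc_per_ul = lam / (partition_volume_nl * 1e-3)   # copies per microlitre
        return lam, copies_total, conc_per_ul

    # Example: 2,000 positives out of 15,000 partitions gives
    # lam ~ 0.143 copies/partition and ~168 copies/ul at 0.85 nl per partition.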
Orhant, Lucie; Anselem, Olivia; Fradin, Mélanie; Becker, Pierre Hadrien; Beugnet, Caroline; Deburgrave, Nathalie; Tafuri, Gilles; Letourneur, Franck; Goffinet, François; Allach El Khattabi, Laïla; Leturcq, France; Bienvenu, Thierry; Tsatsaris, Vassilis; Nectoux, Juliette
2016-05-01
Achondroplasia is generally detected by abnormal prenatal ultrasound findings in the third trimester of pregnancy and then confirmed by molecular genetic testing of fetal genomic DNA obtained by aspiration of amniotic fluid. This invasive procedure presents a small but significant risk for both the fetus and mother. Therefore, non-invasive procedures using cell-free fetal DNA in maternal plasma have been developed for the detection of fetal achondroplasia mutations. To determine whether the fetus carries the de novo missense mutation at nucleotide 1138 of the FGFR3 gene, which is involved in >99% of achondroplasia cases, we developed two independent methods, droplet digital PCR and minisequencing, both very sensitive methods allowing detection of rare alleles. We collected 26 plasma samples from women carrying fetuses at risk of achondroplasia and have to date diagnosed a total of five affected fetuses in maternal blood. The sensitivity and specificity of our test are 100% [95% confidence interval, 56.6-100%] and 100% [95% confidence interval, 84.5-100%], respectively. This novel strategy for non-invasive prenatal diagnosis of achondroplasia is suitable for implementation in routine clinical testing and suggests that these technologies could be extended to the non-invasive prenatal diagnosis of many other monogenic diseases. © 2016 John Wiley & Sons, Ltd.
Lohmann, Katja; Redin, Claire; Tönnies, Holger; Bressman, Susan B; Subero, Jose Ignacio Martin; Wiegers, Karin; Hinrichs, Frauke; Hellenbroich, Yorck; Rakovic, Aleksandar; Raymond, Deborah; Ozelius, Laurie J; Schwinger, Eberhard; Siebert, Reiner; Talkowski, Michael E; Saunders-Pullman, Rachel; Klein, Christine
2017-07-01
Chromosomal rearrangements are increasingly recognized to underlie neurologic disorders and are often accompanied by additional clinical signs beyond the gene-specific phenotypic spectrum. We sought to elucidate the causal genetic variant in a large US family with co-occurrence of dopa-responsive dystonia and skeletal and eye abnormalities (i.e., ptosis, myopia, and retinal detachment). We examined 10 members of a family, including 5 patients with dopa-responsive dystonia and skeletal and/or eye abnormalities, from a US tertiary referral center for neurological diseases, using multiple conventional molecular methods, including fluorescence in situ hybridization and array comparative genomic hybridization, as well as large-insert whole-genome sequencing to survey multiple classes of genomic variation. Of note, there was a seemingly implausible transmission pattern in this family due to a mutation-negative obligate mutation carrier. The main outcomes were genetic diagnosis in affected family members and insight into the formation of large deletions. Four members were diagnosed with definite and 1 with probable dopa-responsive dystonia. All 5 affected individuals carried a large heterozygous deletion encompassing all 6 exons of GCH1. Additionally, all mutation carriers had congenital ptosis requiring surgery, 4 had myopia, 2 had retinal detachment, and 2 showed skeletal abnormalities of the hands, i.e., polydactyly or syndactyly or a missing hand digit. Two individuals were reported to be free of any disease. Analyses revealed complex chromosomal rearrangements on chromosome 14q21-22 in unaffected individuals that triggered the expansion to a larger deletion segregating with affection status. The expansion occurred recurrently, explaining the seemingly non-Mendelian inheritance pattern. These rearrangements included a deletion of GCH1, which likely contributes to the dopa-responsive dystonia, as well as a deletion of BMP4 as a potential cause of the digital and eye abnormalities. Our findings alert neurologists to the importance of clinical red flags, i.e., unexpected co-occurrence of clinical features that may point to the presence of chromosomal rearrangements as the primary disease cause. The clinical management and diagnostics of such patients require an interdisciplinary approach in modern clinical-diagnostic care.
Gene editing by CRISPR/Cas9 in the obligatory outcrossing Medicago sativa.
Gao, Ruimin; Feyissa, Biruk A; Croft, Mana; Hannoufa, Abdelali
2018-04-01
The CRISPR/Cas9 technique was successfully used to edit the genome of the obligatory outcrossing plant species Medicago sativa L. (alfalfa). RNA-guided genome engineering using Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)/Cas9 technology enables a variety of applications in plants. Successful application and validation of the CRISPR technique in a multiplex genome, such as that of M. sativa (alfalfa), will ultimately lead to major advances in the improvement of this crop. We used the CRISPR/Cas9 technique to mutate the squamosa promoter binding protein like 9 (SPL9) gene in alfalfa. Because of the complex features of the alfalfa genome, we first used droplet digital PCR (ddPCR) for high-throughput screening of large populations of CRISPR-modified plants. Based on the genome editing rates obtained from the ddPCR screening, plants with relatively high rates were subjected to further analysis by restriction enzyme digestion/PCR amplification. PCR products encompassing the respective single guide RNA target locus were then sub-cloned and sequenced to verify genome editing. In summary, we successfully applied the CRISPR/Cas9 technique to edit the SPL9 gene in a multiplex genome, providing some insight into opportunities to apply this technology in future alfalfa breeding. The overall efficiency in the polyploid alfalfa genome was lower compared to other less complex plant genomes. Further refinement of the CRISPR technology system will thus be required for more efficient genome editing in this plant.
Ginseng Genome Database: an open-access platform for genomics of Panax ginseng.
Jayakodi, Murukarthick; Choi, Beom-Soon; Lee, Sang-Choon; Kim, Nam-Hoon; Park, Jee Young; Jang, Woojong; Lakshmanan, Meiyappan; Mohan, Shobhana V G; Lee, Dong-Yup; Yang, Tae-Jin
2018-04-12
Ginseng (Panax ginseng C.A. Meyer) is a perennial herbaceous plant that has been used in traditional oriental medicine for thousands of years. Ginsenosides, which have significant pharmacological effects on human health, are the foremost bioactive constituents of this plant. Given the importance of this plant to humans, an integrated omics resource is indispensable for facilitating genomic research, molecular breeding and pharmacological study of this herb. The first draft genome sequences of the P. ginseng cultivar "Chunpoong" were reported recently. Here, using the draft genome, transcriptome, and functional annotation datasets of P. ginseng, we have constructed the Ginseng Genome Database (http://ginsengdb.snu.ac.kr/), the first open-access platform to provide comprehensive genomic resources for P. ginseng. The current version of this database provides the most up-to-date draft genome sequence (approximately 3000 Mbp of scaffold sequences) along with structural and functional annotations for 59,352 genes and digital expression of genes based on transcriptome data from different tissues, growth stages and treatments. In addition, tools for visualization and the genomic data from various analyses are provided. All data in the database were manually curated and integrated within a user-friendly query page. This database provides valuable resources for a range of research fields related to P. ginseng and other species belonging to the order Apiales, as well as for plant research communities in general. The Ginseng Genome Database can be accessed at http://ginsengdb.snu.ac.kr/.
The Global Genome Biodiversity Network (GGBN) Data Standard specification
Droege, G.; Barker, K.; Seberg, O.; Coddington, J.; Benson, E.; Berendsohn, W. G.; Bunk, B.; Butler, C.; Cawsey, E. M.; Deck, J.; Döring, M.; Flemons, P.; Gemeinholzer, B.; Güntsch, A.; Hollowell, T.; Kelbert, P.; Kostadinov, I.; Kottmann, R.; Lawlor, R. T.; Lyal, C.; Mackenzie-Dodds, J.; Meyer, C.; Mulcahy, D.; Nussbeck, S. Y.; O'Tuama, É.; Orrell, T.; Petersen, G.; Robertson, T.; Söhngen, C.; Whitacre, J.; Wieczorek, J.; Yilmaz, P.; Zetzsche, H.; Zhang, Y.; Zhou, X.
2016-01-01
Genomic samples of non-model organisms are becoming increasingly important in a broad range of studies, from developmental biology and biodiversity analyses to conservation. Genomic sample definition, description, quality, voucher information and metadata all need to be digitized and disseminated across scientific communities. In today's ever-expanding bioinformatic era, this information needs to be concise and consistent so that complementary data aggregators can easily map databases to one another. In order to facilitate the exchange of information on genomic samples and their derived data, the Global Genome Biodiversity Network (GGBN) Data Standard is intended to provide a platform based on a documented agreement to promote the efficient sharing and usage of genomic sample material and associated specimen information in a consistent way. The new data standard presented here builds upon existing standards commonly used within the community, extending them with the capability to exchange data on tissue, environmental and DNA samples as well as sequences. The GGBN Data Standard will reveal and democratize the hidden contents of biodiversity biobanks, for the convenience of everyone in the wider biobanking community. Technical tools exist for data providers to easily map their databases to the standard. Database URL: http://terms.tdwg.org/wiki/GGBN_Data_Standard PMID:27694206
Structural forms of the human amylase locus and their relationships to SNPs, haplotypes, and obesity
Usher, Christina L; Handsaker, Robert E; Esko, Tõnu; Tuke, Marcus A; Weedon, Michael N; Hastie, Alex R; Cao, Han; Moon, Jennifer E; Kashin, Seva; Fuchsberger, Christian; Metspalu, Andres; Pato, Carlos N; Pato, Michele T; McCarthy, Mark I; Boehnke, Michael; Altshuler, David M; Frayling, Timothy M; Hirschhorn, Joel N; McCarroll, Steven A
2016-01-01
Hundreds of genes reside in structurally complex, poorly understood regions of the human genome [1-3]. One such region contains the three amylase genes (AMY2B, AMY2A, and AMY1) responsible for digesting starch into sugar. The copy number of AMY1 is reported to be the genome's largest influence on obesity [4], though genome-wide association studies for obesity have found this locus unremarkable. Using whole genome sequence analysis [3,5], droplet digital PCR [6], and genome mapping [7], we identified eight common structural haplotypes of the amylase locus that suggest its mutational history. We found that AMY1 copy number in individuals' genomes is generally even (rather than odd) and partially correlates to nearby SNPs, which do not associate with BMI. We measured amylase gene copy number in 1,000 obese or lean Estonians and in two other cohorts totaling ~3,500 individuals. We had 99% power to detect the lower bound of the reported effects on BMI [4], yet found no association. PMID:26098870
Hogan, Andrew J.
2014-01-01
This paper explores evolving conceptions and depictions of the human genome among human and medical geneticists during the postwar period. Historians of science and medicine have shown significant interest in the use of informational approaches in postwar genetics, which treat the genome as an expansive digital data set composed of three billion DNA nucleotides. Since the 1950s, however, geneticists have largely interacted with the human genome at the microscopically visible level of chromosomes. Mindful of this, I examine the observational and representational approaches of postwar human and medical genetics. During the 1970s and 1980s, the genome increasingly came to be understood as, at once, a discrete part of the human anatomy and a standardised scientific object. This paper explores the role of influential medical geneticists in recasting the human genome as being a visible, tangible, and legible entity, which was highly relevant to traditional medical thinking and practice. I demonstrate how the human genome was established as an object amenable to laboratory and clinical research, and argue that the observational and representational approaches of postwar medical genetics reflect, more broadly, the interdisciplinary efforts underlying the development of contemporary biomedicine. PMID:25045177
PGP repository: a plant phenomics and genomics data publication infrastructure
Arend, Daniel; Junker, Astrid; Scholz, Uwe; Schüler, Danuta; Wylie, Juliane; Lange, Matthias
2016-01-01
Plant genomics and phenomics represent the most promising tools for accelerating yield gains and overcoming emerging crop productivity bottlenecks. However, accessing this wealth of plant diversity requires the characterization of this material using state-of-the-art genomic, phenomic and molecular technologies and the release of the resulting research data via a long-term stable, open-access portal. Although several international consortia and public resource centres offer services for plant research data management, valuable digital assets remain unpublished and thus inaccessible to the scientific community. Recently, the Leibniz Institute of Plant Genetics and Crop Plant Research and the German Plant Phenotyping Network have jointly initiated the Plant Genomics and Phenomics Research Data Repository (PGP) as an infrastructure to comprehensively publish plant research data. This covers in particular cross-domain datasets that are not being published in central repositories because of their volume or unsupported data scope, such as image collections from plant phenotyping and microscopy, unfinished genomes, genotyping data, visualizations of morphological plant models, data from mass spectrometry, as well as software and documents. The repository is hosted at the Leibniz Institute of Plant Genetics and Crop Plant Research using e!DAL as software infrastructure and a Hierarchical Storage Management System as the data archival backend. A newly developed data submission tool was made available for the consortium that features a high level of automation to lower the barriers to data publication. After an internal review process, data are published with citable digital object identifiers, and a core set of technical metadata is registered at DataCite. The e!DAL-embedded web frontend generates a landing page for each dataset and supports interactive exploration. PGP is registered as a research data repository at BioSharing.org, re3data.org and OpenAIRE as a valid EU Horizon 2020 open data archive. These features, together with the programmatic interface and support for standard metadata formats, enable PGP to fulfil the FAIR data principles: findable, accessible, interoperable, reusable. Database URL: http://edal.ipk-gatersleben.de/repos/pgp/ PMID:27087305
Yin, Changchuan
2015-04-01
To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences first must be mapped into numerical sequences. Thus, effective numerical mappings of DNA sequences play a key role in the performance of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, existing mapping methods do not incorporate the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction by a short-time Fourier transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein-coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
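The baseline against which the GCC mapping is compared, the 4D binary (Voss) representation, and the 3-periodicity SNR it feeds can be sketched as follows. The window length and thresholding strategy mentioned in the comment are illustrative assumptions, not the paper's settings (Python):

    import numpy as np

    def period3_snr(seq):
        """3-periodicity SNR of a DNA window from the 4-D binary (Voss)
        indicator sequences (the baseline representation, not the optimized
        GCC mapping described in the abstract)."""
        seq = seq.upper()
        n = len(seq)
        spectrum = np.zeros(n)
        for base in "ACGT":
            indicator = np.array([1.0 if b == base else 0.0 for b in seq])
            spectrum += np.abs(np.fft.fft(indicator)) ** 2
        # power at k = N/3 relative to the average non-DC power
        return spectrum[n // 3] / spectrum[1:n // 2].mean()

    # Sliding this over a genome with a window of ~351 bp (a multiple of 3) and
    # flagging windows whose SNR exceeds a threshold is a common exon-prediction
    # heuristic built on the same 3-periodicity signal.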
Lu, Bingxin; Leong, Hon Wai
2016-02-01
Genomic islands (GIs) are clusters of functionally related genes acquired by lateral genetic transfer (LGT), and they are present in many bacterial genomes. GIs are extremely important for bacterial research, because they not only promote genome evolution but also contain genes that enhance adaptation and enable antibiotic resistance. Many methods have been proposed to predict GIs, but most of them rely on either annotations or comparisons with other closely related genomes, and hence cannot be easily applied to new genomes. As the number of newly sequenced bacterial genomes rapidly increases, there is a need for methods to detect GIs based solely on the sequence of a single genome. In this paper, we propose a novel method, GI-SVM, to predict GIs given only the unannotated genome sequence. GI-SVM is based on a one-class support vector machine (SVM), utilizing composition bias in terms of k-mer content. In our evaluations on three real genomes, GI-SVM achieved higher recall than current methods without much loss of precision. In addition, GI-SVM allows flexible parameter tuning to obtain optimal results for each genome. In short, GI-SVM provides a more sensitive method for researchers interested in a first-pass detection of GIs in newly sequenced genomes.
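The general idea of flagging windows whose k-mer composition deviates from the bulk genome with a one-class SVM can be sketched as below. The k-mer size, window length, step and nu parameter are illustrative choices, not the published GI-SVM defaults (Python):

    import numpy as np
    from itertools import product
    from sklearn.svm import OneClassSVM

    def kmer_profile(seq, k=4):
        """Normalized k-mer frequency vector for one window."""
        kmers = ["".join(p) for p in product("ACGT", repeat=k)]
        index = {km: i for i, km in enumerate(kmers)}
        counts = np.zeros(len(kmers))
        for i in range(len(seq) - k + 1):
            j = index.get(seq[i:i + k])
            if j is not None:                 # skips k-mers containing N's
                counts[j] += 1
        total = counts.sum()
        return counts / total if total else counts

    def flag_island_candidates(genome, window=5000, step=2500, nu=0.05):
        """Flag windows whose composition is an outlier relative to the genome."""
        windows = [genome[i:i + window]
                   for i in range(0, len(genome) - window + 1, step)]
        X = np.array([kmer_profile(w) for w in windows])
        model = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(X)
        outlier = model.predict(X) == -1      # -1 marks compositional outliers
        return [(i * step, i * step + window)
                for i, flag in enumerate(outlier) if flag]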
Digital microarray analysis for digital artifact genomics
NASA Astrophysics Data System (ADS)
Jaenisch, Holger; Handley, James; Williams, Deborah
2013-06-01
We implement a Spatial Voting (SV) based analogy of microarray analysis for digital gene marker identification in malware code sections. We examine a well-known set of malware formally analyzed by Mandiant and code-named Advanced Persistent Threat 1 (APT1). APT1 is a Chinese organization formed with the specific intent of infiltrating and exploiting US resources. Mandiant provided a detailed behavior and string analysis report for the 288 malware samples available. We performed an independent analysis using a new alternative to traditional dynamic analysis and static analysis, which we call Spatial Analysis (SA). We perform unsupervised SA on the APT1-originating malware code sections and report our findings. We also show the results of SA performed on some members of the families identified by Mandiant. We conclude that SV-based SA is a practical, fast alternative to dynamic analysis and static analysis.
Mutations in X-linked PORCN, a putative regulator of Wnt signaling, cause focal dermal hypoplasia
USDA-ARS?s Scientific Manuscript database
Focal dermal hypoplasia is an X-linked dominant disorder characterized by patchy hypoplastic skin and digital, ocular, and dental malformations. We used array comparative genomic hybridization to identify a 219-kb deletion in Xp11.23 in two affected females. We sequenced genes in this region and fou...
Mehrian-Shai, Ruty; Yalon, Michal; Moshe, Itai; Barshack, Iris; Nass, Dvorah; Jacob, Jasmine; Dor, Chen; Reichardt, Juergen K V; Constantini, Shlomi; Toren, Amos
2016-01-14
The genetic mechanisms underlying hemangioblastoma development are still largely unknown. We used high-resolution single nucleotide polymorphism microarrays and droplet digital PCR analysis to detect copy number variations (CNVs) in a total of 45 hemangioblastoma tumors. We identified 94 CNVs with a median of 18 CNVs per sample. The most frequently gained regions were on chromosomes 1 (p36.32) and 7 (p11.2). These regions contain the EGFR and PRDM16 genes. Recurrent losses were located at chromosome 12 (q24.13), which includes the gene PTPN11. Our findings provide the first high-resolution genome-wide view of chromosomal changes in hemangioblastoma and identify 23 candidate genes, including EGFR, PRDM16, PTPN11, HOXD11, HOXD13, FLT3, PTCH, FGFR1, FOXP1, GPC3, HOXC13, HOXC11, MKL1, CHEK2, IRF4, GPHN, IKZF1, RB1 and HOXA9, and microRNAs such as hsa-mir-196a-2, for hemangioblastoma pathogenesis. Furthermore, our data suggest that cell proliferation and angiogenesis-promoting pathways may be involved in the molecular pathogenesis of hemangioblastoma.
Wang, Bin; Diao, Yutao; Liu, Qiji; An, Hongqiang; Ma, Ruiping; Jiang, Guosheng; Lai, Nannan; Li, Ziwei; Zhu, Xiaoxiao; Zhao, Lin; Guo, Qiang; Zhang, Zhen; Sun, Rong; Li, Xia
2016-12-06
Preaxial polydactyly (PPD) is inherited in an autosomal dominant fashion and characterized by the presence of one or more supernumerary digits on the thumb side. Point mutations or genomic duplications of the long-range, limb-specific cis-regulator known as the zone of polarizing activity regulatory sequence (ZRS) have been identified as causes of PPD and of other limb deformities such as syndactyly type IV (SD4) and triphalangeal thumb-polysyndactyly syndrome (TPTPS). Most previously reported cases involved no more than one extra finger; however, the role of ZRS point mutations or genomic duplications in polydactyly with more than one supernumerary finger remains unclear. In this article, we report a familial case of polydactyly with more than one supernumerary finger on the thumb side of both hands, together with a pedigree chart of the family. Results of quantitative PCR (qPCR) and sequence analysis suggested that the relative copy number (RCN) of ZRS, rather than point mutations (including insertions and deletions), was altered in all affected individuals.
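Relative copy number from qPCR data is commonly estimated by the comparative Ct (2^-ddCt) method against a two-copy reference locus and an unaffected control sample. The sketch below shows that common calculation with made-up Ct values; it is not necessarily the exact pipeline used in the study (Python):

    def relative_copy_number(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
        """Relative copy number by the 2^-ddCt method against a two-copy
        reference locus and an unaffected control sample (a common approach,
        not claimed to be the study's exact calculation)."""
        d_ct_sample = ct_target - ct_reference            # sample: ZRS vs reference gene
        d_ct_control = ct_target_ctrl - ct_reference_ctrl
        dd_ct = d_ct_sample - d_ct_control
        fold = 2.0 ** (-dd_ct)                            # 1.0 corresponds to two copies
        return 2.0 * fold                                 # estimated copy number

    # e.g. relative_copy_number(24.4, 25.0, 25.0, 25.0) is about 3.0 copies,
    # consistent with a duplication spanning the ZRS region.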
Applicability of digital PCR to the investigation of pediatric-onset genetic disorders.
Butchbach, Matthew E R
2016-12-01
Early-onset rare diseases have a strong impact on child healthcare even though the incidence of each of these diseases is relatively low. In order to better manage the care of these children, it is imperative to quickly diagnose the molecular bases for these disorders as well as to develop technologies with prognostic potential. Digital PCR (dPCR) is well suited for this role by providing an absolute quantification of the target DNA within a sample. This review illustrates how dPCR can be used to identify genes associated with pediatric-onset disorders, to identify copy number status of important disease-causing genes and variants and to quantify modifier genes. It is also a powerful technology to track changes in genomic biomarkers with disease progression. Based on its capability to accurately and reliably detect genomic alterations with high sensitivity and a large dynamic detection range, dPCR has the potential to become the tool of choice for the verification of pediatric disease-associated mutations identified by next generation sequencing, copy number determination and noninvasive prenatal screening.
Wheat EST resources for functional genomics of abiotic stress
Houde, Mario; Belcaid, Mahdi; Ouellet, François; Danyluk, Jean; Monroy, Antonio F; Dryanova, Ani; Gulick, Patrick; Bergeron, Anne; Laroche, André; Links, Matthew G; MacCarthy, Luke; Crosby, William L; Sarhan, Fathey
2006-01-01
Background: Wheat is an excellent species to study freezing tolerance and other abiotic stresses. However, the sequence of the wheat genome has not been completely characterized due to its complexity and large size. To circumvent this obstacle and identify genes involved in cold acclimation and associated stresses, a large-scale EST sequencing approach was undertaken by the Functional Genomics of Abiotic Stress (FGAS) project. Results: We generated 73,521 quality-filtered ESTs from eleven cDNA libraries constructed from wheat plants exposed to various abiotic stresses and at different developmental stages. In addition, 196,041 ESTs for which trace files were available from the National Science Foundation wheat EST sequencing program and DuPont were also quality-filtered and used in the analysis. Clustering of the combined ESTs with d2_cluster and TGICL yielded a few large clusters containing several thousand ESTs that were refractory to routine clustering techniques. To resolve this problem, sequence proximity and "bridges" were identified with an e-value distance graph in order to manually break clusters into smaller groups. Assembly of the resolved ESTs generated a set of 75,488 unique sequences (31,580 contigs and 43,908 singletons/singlets). Digital expression analyses indicated that the FGAS dataset is enriched in stress-regulated genes compared to the other public datasets. Over 43% of the unique sequence set was annotated and classified into functional categories according to Gene Ontology. Conclusion: We have annotated 29,556 different sequences, an almost 5-fold increase in annotated sequences compared to the available wheat public databases. Digital expression analysis combined with gene annotation helped in the identification of several pathways associated with abiotic stress. The genomic resources and knowledge developed by this project will contribute to a better understanding of the different mechanisms that govern stress tolerance in wheat and other cereals. PMID:16772040
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.
2009-01-01
A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replicas of analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa/s. Events arising from the 7Li(p,n)7Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals than analogue time pick-off methods replicated digitally, especially for fast signals sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
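The core benefit of interpolating between samples can be illustrated with the simplest case, a digital leading-edge discriminator whose crossing time is refined by linear interpolation between the two samples that straddle the threshold. This is only an illustration of the interpolation idea at low sampling rates; the published SIT algorithm itself is not reproduced here (Python):

    import numpy as np

    def interpolated_threshold_time(samples, fs, threshold):
        """Leading-edge time pick-off with linear interpolation between the
        two samples straddling the threshold. fs is the sampling rate in
        samples per second; returns the crossing time in seconds."""
        s = np.asarray(samples, dtype=float)
        above = np.nonzero(s >= threshold)[0]
        if above.size == 0 or above[0] == 0:
            return None                              # no crossing found
        i = above[0]                                 # first sample at/over threshold
        frac = (threshold - s[i - 1]) / (s[i] - s[i - 1])
        return (i - 1 + frac) / fs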
Method and apparatus for data sampling
Odell, Daniel M. C.
1994-01-01
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
Ai, Yuncan; Ai, Hannan; Meng, Fanmei; Zhao, Lei
2013-01-01
No attention has been paid to comparing sets of genome sequences across genetic components and biological categories with wide divergence over a large size range. We define this as systematic comparative genomics and aim to develop its methodology. First, we create a method, GenomeFingerprinter, to unambiguously produce a set of three-dimensional coordinates from a sequence, followed by one three-dimensional plot and six two-dimensional trajectory projections, to illustrate the genome fingerprint of a given genome sequence. Second, we develop a set of concepts and tools and thereby establish a method called universal genome fingerprint analysis (UGFA). In particular, we define the total genetic component configuration (TGCC) (including chromosome, plasmid, and phage) for describing a strain as a systematic unit, the universal genome fingerprint map (UGFM) of the TGCC for differentiating strains within a universal system, and systematic comparative genomics (SCG) for comparing sets of genomes across genetic components and biological categories. Third, we construct a method of quantitative analysis to compare two genomes using the outcome dataset of genome fingerprint analysis. Specifically, we define the geometric center and its geometric mean for a given genome fingerprint map, followed by the Euclidean distance, the differentiate rate, and the weighted differentiate rate to quantitatively describe the difference between two genomes under comparison. Moreover, we demonstrate the applications through case studies on various genome sequences, giving insights into critical issues in microbial genomics and taxonomy. In summary, we have created a method, GenomeFingerprinter, for rapidly computing, geometrically visualizing, and intuitively comparing sets of genomes at the genome fingerprint level, and hence established a method called universal genome fingerprint analysis, as well as a method for quantitative analysis of the outcome dataset. Together, these set up a methodology for systematic comparative genomics based on genome fingerprint analysis.
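Because the abstract does not specify the coordinate mapping itself, the structure of the comparison can only be sketched with a stand-in: here a cumulative 3-D walk over base classes plays the role of the fingerprint, and the distance between geometric centers plays the role of the differentiate rate. Every element of this sketch is an editorial assumption, not the published GenomeFingerprinter definition (Python):

    import numpy as np

    # Stand-in mapping: each base steps the walk along three axes
    # (purine/pyrimidine, strong/weak, amino/keto).
    STEP = {"A": (1, -1, 1), "G": (1, 1, -1), "C": (-1, 1, 1), "T": (-1, -1, -1)}

    def fingerprint(seq):
        """Cumulative 3-D walk of a genome sequence (one point per base)."""
        steps = np.array([STEP[b] for b in seq.upper() if b in STEP], dtype=float)
        return np.cumsum(steps, axis=0)

    def geometric_center(points):
        return points.mean(axis=0)

    def differentiate_rate(seq_a, seq_b):
        """Euclidean distance between the geometric centers of two fingerprints,
        scaled by the mean of their norms (an assumed form of the 'differentiate
        rate' named in the abstract)."""
        ca = geometric_center(fingerprint(seq_a))
        cb = geometric_center(fingerprint(seq_b))
        dist = np.linalg.norm(ca - cb)
        return dist / np.mean([np.linalg.norm(ca), np.linalg.norm(cb)])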
Open Window: When Easily Identifiable Genomes and Traits Are in the Public Domain
Angrist, Misha
2014-01-01
“One can't be of an enquiring and experimental nature, and still be very sensible.” - Charles Fort [1] As the costs of personal genetic testing and “self-quantification” fall, publicly accessible databases housing people's genotypic and phenotypic information are gradually increasing in number and scope. The latest entrant is openSNP, which allows participants to upload their personal genetic/genomic and self-reported phenotypic data. I believe the emergence of such open repositories of human biological data is a natural reflection of inquisitive and digitally literate people's desires to make genomic and phenotypic information more easily available to a community beyond the research establishment. Such unfettered databases hold the promise of contributing mightily to science, science education and medicine. That said, in an age of increasingly widespread governmental and corporate surveillance, we would do well to be mindful that genomic DNA is uniquely identifying. Participants in open biological databases are engaged in a real-time experiment whose outcome is unknown. PMID:24647311
Internet Versus Virtual Reality Settings for Genomics Information Provision.
Persky, Susan; Kistler, William D; Klein, William M P; Ferrer, Rebecca A
2018-06-22
Current models of genomic information provision will be unable to handle large-scale clinical integration of genomic information, as may occur in primary care settings. Therefore, adoption of digital tools for genetic and genomic information provision is anticipated, primarily using Internet-based, distributed approaches. The emerging consumer communication platform of virtual reality (VR) is another potential intermediate approach between face-to-face and distributed Internet platforms to engage in genomics education and information provision. This exploratory study assessed whether provision of genomics information about body weight in a simulated, VR-based consultation (relative to a distributed, Internet platform) would be associated with differences in health behavior-related attitudes and beliefs, and interpersonal reactions to the avatar-physician. We also assessed whether outcomes differed depending upon whether genomic versus lifestyle-oriented information was conveyed. There were significant differences between communication platforms for all health behavior-oriented outcomes. Following communication in the VR setting, participants reported greater self-efficacy, dietary behavioral intentions, and exercise behavioral intentions than in the Internet-based setting. There were no differences in trust of the physician by setting, and no interaction between setting effects and the content of the information. This study was a first attempt to examine the potential capabilities of a VR-based communication setting for conveying genomic content in the context of weight management. There may be benefits to use of VR settings for communication about genomics, as well as more traditional health information, when it comes to influencing the attitudes and beliefs that underlie healthy lifestyle behaviors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidsmeier, T.; Koehl, R.; Lanham, R.
2008-07-15
The current design and fabrication process for RERTR fuel plates utilizes film radiography for nondestructive testing and characterization. Digital radiographic methods offer potential increases in efficiency and accuracy. The traditional and digital radiographic methods are described and demonstrated on a fuel plate constructed, using the dispersion method, with an average fuel loading of 51% by volume. Fuel loading data from each method are analyzed and compared to a third, baseline method to assess accuracy. The new digital method is shown to be more accurate, to save hours of work, and to provide additional information not easily available with the traditional method. Possible further improvements suggested by the new digital method are also raised.
Modulation for terrestrial broadcasting of digital HDTV
NASA Technical Reports Server (NTRS)
Kohn, Elliott S.
1991-01-01
The digital modulation methods used by the DigiCipher, DSC-HDTV, ADTV, and ATVA-P digital high-definition television (HDTV) systems are discussed. Three of the systems use a quadrature amplitude modulation method, and the fourth uses a vestigial sideband modulation method. Channel equalization and spectrum sharing in these digital HDTV systems are also discussed.
Using circulating cell-free DNA to monitor personalized cancer therapy.
Oellerich, Michael; Schütz, Ekkehard; Beck, Julia; Kanzow, Philipp; Plowman, Piers N; Weiss, Glen J; Walson, Philip D
2017-05-01
High-quality genomic analysis is critical for personalized pharmacotherapy in patients with cancer. Tumor-specific genomic alterations can be identified in cell-free DNA (cfDNA) from patient blood samples and can complement biopsies for real-time molecular monitoring of treatment, detection of recurrence, and tracking of resistance. cfDNA can be especially useful when tumor tissue is unavailable or insufficient for testing. For blood-based genomic profiling, next-generation sequencing (NGS) and droplet digital PCR (ddPCR) have been successfully applied. The US Food and Drug Administration (FDA) recently approved the first such "liquid biopsy" test for EGFR mutations in patients with non-small cell lung cancer (NSCLC). Such non-invasive methods allow for the identification of specific resistance mutations selected by treatment, such as EGFR T790M, in patients with NSCLC treated with gefitinib. Chromosomal aberration pattern analysis by low-coverage whole genome sequencing is a more universal approach based on genomic instability. Gains and losses of chromosomal regions have been detected in plasma tumor-specific cfDNA as copy number aberrations and can be used to compute a genomic copy number instability (CNI) score of cfDNA. A specific CNI index obtained by massively parallel sequencing discriminated patients with prostate cancer from both healthy controls and men with benign prostatic disease. Furthermore, androgen receptor gene aberrations in cfDNA were associated with therapeutic resistance in metastatic castration-resistant prostate cancer. Change in CNI score has been shown to serve as an early predictor of response to standard chemotherapy for various other cancer types (e.g. NSCLC, colorectal cancer, pancreatic ductal adenocarcinoma). CNI scores have also been shown to predict therapeutic responses to immunotherapy. Serial genomic profiling can detect resistance mutations up to 16 weeks before radiographic progression. There is a potential for cost savings when ineffective use of expensive new anticancer drugs is avoided or halted. Challenges for routine implementation of liquid biopsy tests include the need for specialized personnel, instrumentation, and software, as well as further development of quality management (e.g. external quality control). Validation of blood-based tumor genomic profiling in additional multicenter outcome studies is necessary; however, cfDNA monitoring can provide clinically important actionable information for precision oncology approaches.
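A copy number instability score of the kind described above is derived from binned read counts in low-coverage sequencing of plasma cfDNA. The sketch below shows one simplified way such a score could be computed from bin counts and a healthy-donor reference panel; the normalisation, GC correction, bin filtering and scaling of the published pipeline are not reproduced, and the cutoff is an assumption (Python):

    import numpy as np

    def cni_score(sample_bin_counts, control_bin_counts, z_cutoff=3.0):
        """Simplified copy number instability (CNI) score from binned read counts.
        sample_bin_counts: 1-D array of read counts per genomic bin (cfDNA sample).
        control_bin_counts: 2-D array (controls x bins) from healthy-donor plasma."""
        sample = sample_bin_counts / sample_bin_counts.sum()              # depth normalisation
        controls = control_bin_counts / control_bin_counts.sum(axis=1, keepdims=True)
        mu = controls.mean(axis=0)
        sd = controls.std(axis=0, ddof=1)
        z = (sample - mu) / np.where(sd > 0, sd, np.nan)                  # per-bin z-scores
        z = z[np.isfinite(z)]
        return np.sum(np.abs(z[np.abs(z) >= z_cutoff]))                   # aggregate instability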
Organogenesis in deep time: A problem in genomics, development, and paleontology.
Pieretti, Joyce; Gehrke, Andrew R; Schneider, Igor; Adachi, Noritaka; Nakamura, Tetsuya; Shubin, Neil H
2015-04-21
The fossil record is a unique repository of information on major morphological transitions. Increasingly, developmental, embryological, and functional genomic approaches have also conspired to reveal the evolutionary trajectories of phenotypic shifts. Here, we use the vertebrate appendage to demonstrate how these disciplines can mutually reinforce each other to facilitate the generation and testing of hypotheses of morphological evolution. We discuss classical theories on the origins of paired fins, recent data on regulatory modulation of fish fins and tetrapod limbs, and case studies exploring the mechanisms of digit loss in tetrapods. We envision an era of research in which the deep history of morphological evolution can be revealed by integrating fossils of transitional forms with direct experimentation in the laboratory via genome manipulation, thereby shedding light on the relationship between genes, developmental processes, and the evolving phenotype.
Cell-free circulating tumour DNA as a liquid biopsy in breast cancer.
De Mattos-Arruda, Leticia; Caldas, Carlos
2016-03-01
Recent developments in massively parallel sequencing and digital genomic techniques support the clinical validity of cell-free circulating tumour DNA (ctDNA) as a 'liquid biopsy' in human cancer. In breast cancer, ctDNA detected in plasma can be used to non-invasively scan tumour genomes and quantify tumour burden. The applications for ctDNA in plasma include identifying actionable genomic alterations, monitoring treatment responses, unravelling therapeutic resistance, and potentially detecting disease progression before clinical and radiological confirmation. ctDNA may be used to characterise tumour heterogeneity and metastasis-specific mutations providing information to adapt the therapeutic management of patients. In this article, we review the current status of ctDNA as a 'liquid biopsy' in breast cancer.
3D measurement by digital photogrammetry
NASA Astrophysics Data System (ADS)
Schneider, Carl T.
1993-12-01
Photogrammetry is well known in geodetic surveying as aerial photogrammetry and in close-range applications such as architectural photogrammetry. Photogrammetric methods and algorithms, combined with digital cameras and digital image processing, are now being introduced for industrial applications such as automation and quality control. This paper describes the photogrammetric and digital image processing algorithms and the calibration methods. These algorithms and methods are demonstrated with application examples: a digital photogrammetric workstation as a mobile, multi-purpose 3D measuring tool, and a tube measuring system as an example of a single-purpose tool.
Muley, Vijaykumar Yogesh; Ranjan, Akash
2012-01-01
Recent progress in computational methods for predicting physical and functional protein-protein interactions has provided new insights into the complexity of biological processes. Most of these methods assume that functionally interacting proteins are likely to have a shared evolutionary history. This history can be traced for the protein pairs of a query genome by correlating different evolutionary aspects of their homologs in multiple genomes, known as the reference genomes. These methods include phylogenetic profiling, gene neighborhood, and co-occurrence of the orthologous protein-coding genes in the same cluster or operon, and are collectively known as genomic context methods. On the other hand, a method called mirrortree is based on the similarity of the phylogenetic trees of two interacting proteins. Comprehensive performance analyses of these methods have been frequently reported in the literature. However, very few studies provide insight into the effect of reference genome selection on the detection of meaningful protein interactions. We analyzed the performance of four methods and their variants to understand the effect of reference genome selection on prediction efficacy. We used six sets of reference genomes, sampled from 565 bacteria in accordance with phylogenetic diversity and the relationships between organisms. We used Escherichia coli as a model organism and the gold-standard datasets of interacting proteins reported in the DIP, EcoCyc and KEGG databases to compare the performance of the prediction methods. Higher performance for predicting protein-protein interactions was achievable even with 100-150 bacterial genomes out of the 565 genomes. Inclusion of archaeal genomes in the reference genome set improves performance. We find that, in order to obtain good performance, it is better to sample a few genomes from related prokaryotic genera out of the large number of available genomes. Moreover, such sampling allows 50-100 genomes to be selected with comparable prediction accuracy when computational resources are limited.
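The dependence on reference genome selection is easiest to see in the simplest genomic context method, phylogenetic profiling: each protein is reduced to a presence/absence vector of homologs across the chosen reference genomes, and profile similarity is taken as evidence of functional interaction. The sketch below uses Pearson correlation and assumed data structures (a homolog-hit mapping, e.g. from BLAST searches) for illustration only (Python):

    import numpy as np

    def phylogenetic_profile(protein, homolog_hits, reference_genomes):
        """Binary presence/absence vector of a protein's homologs across a
        chosen set of reference genomes. homolog_hits maps a protein to the
        set of genomes where a homolog was detected (illustrative structure)."""
        hits = homolog_hits.get(protein, set())
        return np.array([1.0 if g in hits else 0.0 for g in reference_genomes])

    def profile_similarity(prot_a, prot_b, homolog_hits, reference_genomes):
        """Pearson correlation of two profiles; high values suggest a shared
        evolutionary history. Changing reference_genomes changes this score,
        which is precisely the effect the study evaluates."""
        pa = phylogenetic_profile(prot_a, homolog_hits, reference_genomes)
        pb = phylogenetic_profile(prot_b, homolog_hits, reference_genomes)
        if pa.std() == 0 or pb.std() == 0:
            return 0.0
        return float(np.corrcoef(pa, pb)[0, 1])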
Deformation analysis of MEMS structures by modified digital moiré methods
NASA Astrophysics Data System (ADS)
Liu, Zhanwei; Lou, Xinhao; Gao, Jianxin
2010-11-01
Quantitative deformation analysis of micro-fabricated electromechanical systems is important for the design and functional control of microsystems. In this paper, two modified digital moiré processing methods, a Gaussian blurring algorithm combined with digital phase shifting and a geometrical phase analysis (GPA) technique based on the digital moiré method, are developed to quantitatively analyse the deformation behaviour of micro-electro-mechanical system (MEMS) structures. The measuring principles and experimental procedures of the two methods are described in detail. A digital moiré fringe pattern is generated by superimposing a specimen grating etched directly on a microstructure surface with a digital reference grating (DRG). Most of the grating noise is removed from the digital moiré fringes, which enables the phase distribution of the moiré fringes to be obtained directly. Strain measurement results for a MEMS structure demonstrate the feasibility of the two methods.
Recent developments in detection and enumeration of waterborne bacteria: a retrospective minireview.
Deshmukh, Rehan A; Joshi, Kopal; Bhand, Sunil; Roy, Utpal
2016-12-01
Waterborne diseases have emerged as global health problems, and their rapid and sensitive detection in environmental water samples is of great importance. Bacterial identification and enumeration in water samples is significant as it helps to maintain safe drinking water for public consumption. Culture-based methods are laborious, time-consuming, and yield false-positive results, and viable but nonculturable (VBNC) microorganisms cannot be recovered. Hence, numerous methods have been developed for rapid detection and quantification of waterborne pathogenic bacteria in water. These rapid methods can be classified into nucleic acid-based, immunology-based, and biosensor-based detection methods. This review summarizes the principles and current state of rapid methods for the monitoring and detection of waterborne bacterial pathogens. The rapid methods outlined are polymerase chain reaction (PCR), digital droplet PCR, real-time PCR, multiplex PCR, DNA microarray, next-generation sequencing (pyrosequencing, Illumina technology and genomics), and fluorescence in situ hybridization, which are categorized as nucleic acid-based methods. Enzyme-linked immunosorbent assay (ELISA) and immunofluorescence are classified as immunology-based methods. Optical, electrochemical, and mass-based biosensors are grouped as biosensor-based methods. Overall, these methods are sensitive, specific, time-effective, and important in the prevention and diagnosis of waterborne bacterial diseases.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors, computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
Multiple Hotspot Mutations Scanning by Single Droplet Digital PCR.
Decraene, Charles; Silveira, Amanda B; Bidard, François-Clément; Vallée, Audrey; Michel, Marc; Melaabi, Samia; Vincent-Salomon, Anne; Saliou, Adrien; Houy, Alexandre; Milder, Maud; Lantz, Olivier; Ychou, Marc; Denis, Marc G; Pierga, Jean-Yves; Stern, Marc-Henri; Proudhon, Charlotte
2018-02-01
Progress in the liquid biopsy field, combined with the development of droplet digital PCR (ddPCR), has enabled noninvasive monitoring of mutations with high detection accuracy. However, current assays detect a restricted number of mutations per reaction. ddPCR is a recognized method for detecting alterations previously characterized in tumor tissues, but its use as a discovery tool when the mutation is unknown a priori remains limited. We established 2 ddPCR assays detecting all genomic alterations within the KRAS exon 2 and EGFR exon 19 mutation hotspots, which are of clinical importance in colorectal and lung cancer, using a unique pair of TaqMan® oligoprobes. The KRAS assay scanned not only for the 7 most common mutations in codons 12/13 but also for all other mutations found in that region. The EGFR assay screened for all in-frame deletions of exon 19, which are frequent EGFR-activating events. The KRAS and EGFR assays were highly specific and both reached a limit of detection of <0.1% in mutant allele frequency. We further validated their performance on multiple plasma and formalin-fixed and paraffin-embedded tumor samples harboring a panel of different KRAS or EGFR mutations. This method presents the advantage of detecting a higher number of mutations with single-reaction ddPCRs while consuming a minimum of patient sample. This is particularly useful in the context of liquid biopsy because the amount of circulating tumor DNA is often low. This method should be useful as a discovery tool when the tumor tissue is unavailable or to monitor disease during therapy. © 2017 American Association for Clinical Chemistry.
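For orientation, the sketch below shows the standard Poisson correction used in droplet digital PCR to convert positive/negative droplet counts into per-droplet target concentrations and an allele fraction. The droplet counts are made up, and this is generic ddPCR arithmetic rather than the drop-off analysis of the assays described above.

```python
# Hedged sketch of standard droplet digital PCR quantification (Poisson
# correction), not the exact drop-off assay analysis used by the authors.
import math

def copies_per_droplet(negative, total):
    """Mean target copies per droplet from the fraction of negative droplets."""
    return -math.log(negative / total)

total = 15000
mutant_positive, wildtype_positive = 12, 9000         # hypothetical counts
lam_mut = copies_per_droplet(total - mutant_positive, total)
lam_wt = copies_per_droplet(total - wildtype_positive, total)
maf = lam_mut / (lam_mut + lam_wt)                    # mutant allele fraction
print(f"mutant allele fraction ~ {maf:.4%}")
```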
Method and apparatus for data sampling
Odell, D.M.C.
1994-04-19
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples are described. The method uses high-speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high-speed sampling and digital conversion are performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provide for temporary or permanent storage, either serially or in parallel, to a digital storage medium. 6 figures.
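A toy sketch of the scheme described above: a detector waveform is sampled at high rate, converted to digital counts, and a simple discriminator keeps only the samples that appear to represent a detected event. The sample rate, quantization step, and threshold are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch: sample a detector waveform at high rate, digitize,
# and discriminate samples so only those above a noise threshold are kept
# as candidate event data (all parameters are assumed).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1e-3, 1e-7)                          # 10 MS/s for 1 ms
signal = rng.normal(0.0, 0.02, t.size)                # baseline noise
signal[2000:2050] += np.exp(-np.arange(50) / 10.0)    # one detected event pulse

digital = np.round(signal / 0.002).astype(int)        # crude A/D conversion
threshold = 25                                        # discriminator level (counts)
event_samples = np.flatnonzero(digital > threshold)   # samples representing events
print(f"{event_samples.size} samples flagged as event data")
```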
Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant; González, Fabio
2018-01-01
Precise detection of invasive cancer on whole-slide images (WSI) is a critical first step in digital pathology tasks of diagnosis and grading. Convolutional neural networks (CNNs) are the most popular representation learning method for computer vision tasks and have been successfully applied in digital pathology, including tumor and mitosis detection. However, CNNs are typically only tenable with relatively small image sizes (200 × 200 pixels). Only recently have fully convolutional networks (FCNs) been able to deal with larger image sizes (500 × 500 pixels) for semantic segmentation. Hence, the direct application of CNNs to WSI is not computationally feasible because, for a WSI, a CNN would require billions or trillions of parameters. To alleviate this issue, this paper presents a novel method, High-throughput Adaptive Sampling for whole-slide Histopathology Image analysis (HASHI), which involves: i) a new efficient adaptive sampling method based on probability gradient and quasi-Monte Carlo sampling, and ii) a powerful representation learning classifier based on CNNs. We applied HASHI to automated detection of invasive breast cancer on WSI. HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas. The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy, obtaining results comparable to dense sampling (∼6 million samples in 24 hours) with far fewer samples (∼2,000 samples in 1 minute), and (2) on an independent test dataset, HASHI is effective and robust to data from multiple sites, scanners, and platforms, achieving an average Dice coefficient of 76%.
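The sketch below illustrates only the quasi-Monte Carlo part of the sampling idea: low-discrepancy points spread patch locations evenly over a whole-slide image. The adaptive densification driven by the probability gradient, which is the core of HASHI, is not reproduced here. It assumes scipy >= 1.7 for the qmc module, and the slide dimensions are hypothetical.

```python
# Sketch of the quasi-Monte Carlo sampling idea only (assumed scipy >= 1.7):
# low-discrepancy points spread patch locations evenly over a WSI; an
# adaptive scheme would then densify sampling where the predicted
# probability gradient is large, which is not reproduced here.
import numpy as np
from scipy.stats import qmc

wsi_width, wsi_height = 80000, 60000          # hypothetical WSI size in pixels
sampler = qmc.Halton(d=2, seed=0)
points = sampler.random(2000)                 # ~2,000 samples, as in the paper
coords = np.column_stack((points[:, 0] * wsi_width,
                          points[:, 1] * wsi_height)).astype(int)
print(coords[:5])
```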
Digital communications: Microwave applications
NASA Astrophysics Data System (ADS)
Feher, K.
Transmission concepts and techniques of digital systems are presented, and practical state-of-the-art implementation of digital communications systems by line-of-sight microwaves is described. Particular consideration is given to statistical methods in digital transmission systems analysis, digital modulation methods, microwave amplifiers, system gain, m-ary and QAM microwave systems, correlative techniques and applications to digital radio systems, hybrid systems, digital microwave systems design, diversity and protection switching techniques, measurement techniques, and research and development trends and unsolved problems.
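As a small aside on the digital modulation methods listed above, the sketch below maps bit groups onto a 16-QAM constellation, one of the m-ary/QAM formats mentioned. Gray coding, pulse shaping, and channel effects are omitted; the mapping is purely illustrative and not tied to the text.

```python
# Minimal 16-QAM mapping sketch to illustrate the m-ary/QAM modulation idea;
# Gray mapping and pulse shaping are omitted.
import numpy as np

def qam16_map(bits):
    """Map groups of 4 bits to 16-QAM constellation points (levels +/-1, +/-3)."""
    levels = np.array([-3, -1, 1, 3])
    bits = np.asarray(bits).reshape(-1, 4)
    i = levels[bits[:, 0] * 2 + bits[:, 1]]
    q = levels[bits[:, 2] * 2 + bits[:, 3]]
    return i + 1j * q

symbols = qam16_map([0, 1, 1, 0, 1, 1, 0, 0])
print(symbols)        # two complex baseband symbols
```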
Review of digital holography reconstruction methods
NASA Astrophysics Data System (ADS)
Dovhaliuk, Rostyslav Yu.
2018-01-01
The development of digital holography has opened new ways for the non-destructive study of both transparent and opaque objects. In this paper, the digital hologram reconstruction process is investigated. The advantages and limitations of common wave propagation methods are discussed. The details of a software implementation of digital hologram reconstruction methods are presented. Finally, the performance of each wave propagation method is evaluated, and recommendations about possible use cases for each of them are given.
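To make the reconstruction step concrete, here is a hedged sketch of one widely used wave propagation method, the angular spectrum method. It is not necessarily the implementation evaluated in the paper, and the wavelength, pixel pitch, reconstruction distance, and random stand-in hologram are assumptions.

```python
# Hedged sketch of one common reconstruction approach, the angular spectrum
# method; wavelength, pixel pitch and distance are assumed values.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z (all units in metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kernel = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0)))  # evanescent waves clamped
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

hologram = np.random.rand(512, 512)            # stand-in for a recorded hologram
image = angular_spectrum(hologram, 633e-9, 4.65e-6, 0.05)
print(np.abs(image).max())
```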
Crawford, H.J.; Lindenstruth, V.
1999-06-29
A method of managing digital resources of a digital system includes the step of reserving token values for certain digital resources in the digital system. A selected token value in a free-buffer-queue is then matched to an incoming digital resource request. The selected token value is then moved to a valid-request-queue. The selected token is subsequently removed from the valid-request-queue to allow a digital agent in the digital system to process the incoming digital resource request associated with the selected token. Thereafter, the selected token is returned to the free-buffer-queue. 6 figs.
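A toy Python rendering of the token flow described above, with the free-buffer-queue and valid-request-queue named as in the abstract; the requests, token values, and processing step are placeholders rather than details from the patent.

```python
# Toy sketch of the token-queue scheme described in the abstract; queue names
# follow the text, the request handling itself is a stand-in.
from collections import deque

free_buffer_queue = deque(f"token{i}" for i in range(4))   # reserved token values
valid_request_queue = deque()

def accept_request(request):
    """Match an incoming digital resource request to a free token."""
    token = free_buffer_queue.popleft()
    valid_request_queue.append((token, request))
    return token

def process_next():
    """A digital agent processes the request tied to the next valid token."""
    token, request = valid_request_queue.popleft()
    print(f"processing {request} with {token}")
    free_buffer_queue.append(token)                        # return token to the pool

accept_request("read block 7")
process_next()
```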
Mori, Yutaka; Nomura, Takanori
2013-06-01
In holographic displays, it is undesirable to observe speckle noise in the reconstructed images. A method for improving reconstructed image quality by synthesizing low-coherence digital holograms is proposed. Low-coherence digital holography makes it possible to obtain speckle-free reconstruction of holograms. An image sensor records low-coherence digital holograms, and the holograms are synthesized by computational calculation. Two approaches, the threshold-processing and picking-a-peak methods, are proposed in order to reduce the random noise of low-coherence digital holograms. The quality of images reconstructed by the proposed methods is compared with that of high-coherence digital holography. A quantitative evaluation is given to confirm the proposed methods. In addition, a visual evaluation by 15 people is also reported.
Tassy, Olivier; Dauga, Delphine; Daian, Fabrice; Sobral, Daniel; Robin, François; Khoueiry, Pierre; Salgado, David; Fox, Vanessa; Caillol, Danièle; Schiappa, Renaud; Laporte, Baptiste; Rios, Anne; Luxardi, Guillaume; Kusakabe, Takehiro; Joly, Jean-Stéphane; Darras, Sébastien; Christiaen, Lionel; Contensin, Magali; Auger, Hélène; Lamy, Clément; Hudson, Clare; Rothbächer, Ute; Gilchrist, Michael J; Makabe, Kazuhiro W; Hotta, Kohji; Fujiwara, Shigeki; Satoh, Nori; Satou, Yutaka; Lemaire, Patrick
2010-10-01
Developmental biology aims to understand how the dynamics of embryonic shapes and organ functions are encoded in linear DNA molecules. Thanks to recent progress in genomics and imaging technologies, systemic approaches are now used in parallel with small-scale studies to establish links between genomic information and phenotypes, often described at the subcellular level. Current model organism databases, however, do not integrate heterogeneous data sets at different scales into a global view of the developmental program. Here, we present a novel, generic digital system, NISEED, and its implementation, ANISEED, to ascidians, which are invertebrate chordates suitable for developmental systems biology approaches. ANISEED hosts an unprecedented combination of anatomical and molecular data on ascidian development. This includes the first detailed anatomical ontologies for these embryos, and quantitative geometrical descriptions of developing cells obtained from reconstructed three-dimensional (3D) embryos up to the gastrula stages. Fully annotated gene model sets are linked to 30,000 high-resolution spatial gene expression patterns in wild-type and experimentally manipulated conditions and to 528 experimentally validated cis-regulatory regions imported from specialized databases or extracted from 160 literature articles. This highly structured data set can be explored via a Developmental Browser, a Genome Browser, and a 3D Virtual Embryo module. We show how integration of heterogeneous data in ANISEED can provide a system-level understanding of the developmental program through the automatic inference of gene regulatory interactions, the identification of inducing signals, and the discovery and explanation of novel asymmetric divisions.
Different Evolutionary Paths to Complexity for Small and Large Populations of Digital Organisms
2016-01-01
A major aim of evolutionary biology is to explain the respective roles of adaptive versus non-adaptive changes in the evolution of complexity. While selection is certainly responsible for the spread and maintenance of complex phenotypes, this does not automatically imply that strong selection enhances the chance for the emergence of novel traits, that is, the origination of complexity. Population size is one parameter that alters the relative importance of adaptive and non-adaptive processes: as population size decreases, selection weakens and genetic drift grows in importance. Because of this relationship, many theories invoke a role for population size in the evolution of complexity. Such theories are difficult to test empirically because of the time required for the evolution of complexity in biological populations. Here, we used digital experimental evolution to test whether large or small asexual populations tend to evolve greater complexity. We find that both small and large—but not intermediate-sized—populations are favored to evolve larger genomes, which provides the opportunity for subsequent increases in phenotypic complexity. However, small and large populations followed different evolutionary paths towards these novel traits. Small populations evolved larger genomes by fixing slightly deleterious insertions, while large populations fixed rare beneficial insertions that increased genome size. These results demonstrate that genetic drift can lead to the evolution of complexity in small populations and that purifying selection is not powerful enough to prevent the evolution of complexity in large populations. PMID:27923053
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods that will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of extensive mathematical modeling which are common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.
Spectroscopic analysis and control
Tate, James D.; Reed, Christopher J.; Domke, Christopher H.; Le, Linh; Seasholtz, Mary Beth; Weber, Andy; Lipp, Charles
2017-04-18
An apparatus for spectroscopic analysis is described which includes a tunable diode laser spectrometer having a digital output signal and a digital computer for receiving the digital output signal from the spectrometer, the digital computer being programmed to process the digital output signal using a multivariate regression algorithm. In addition, a spectroscopic method of analysis using such apparatus is described. Finally, a method for controlling an ethylene cracker hydrogenator is provided.
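The patent does not spell out the regression algorithm beyond "multivariate", so the following is only an illustrative stand-in: an ordinary least-squares calibration fitted to synthetic digitized spectra and used to predict a property of interest. The data, dimensions, and noise levels are all invented.

```python
# Illustrative multivariate (least-squares) calibration on synthetic spectra;
# this is a stand-in, not the patented processing.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 30, 200
true_coeffs = rng.normal(size=n_channels)
spectra = rng.normal(size=(n_samples, n_channels))    # digitized spectrometer output
concentration = spectra @ true_coeffs + rng.normal(0, 0.1, n_samples)

coeffs, *_ = np.linalg.lstsq(spectra, concentration, rcond=None)
prediction = spectra @ coeffs
print(f"fit RMSE: {np.sqrt(np.mean((prediction - concentration) ** 2)):.3f}")
```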
Genome-Wide Discovery of Long Non-Coding RNAs in Rainbow Trout.
Al-Tobasei, Rafet; Paneru, Bam; Salem, Mohamed
2016-01-01
The ENCODE project revealed that ~70% of the human genome is transcribed. While only 1-2% of the RNAs encode proteins, the rest are non-coding RNAs. Long non-coding RNAs (lncRNAs) form a diverse class of non-coding RNAs that are longer than 200 nt. Emerging evidence indicates that lncRNAs play critical roles in various cellular processes including regulation of gene expression. LncRNAs show low levels of gene expression and sequence conservation, which makes their computational identification in genomes difficult. In this study, more than two billion Illumina sequence reads were mapped to the reference genome using the TopHat and Cufflinks software. Transcripts shorter than 200 nt, with an ORF longer than 83-100 amino acids, or with significant homology to the NCBI nr protein database were removed. In addition, a computational pipeline was used to filter the remaining transcripts based on a protein-coding-score test. Depending on the filtering stringency conditions, between 31,195 and 54,503 lncRNAs were identified, with only 421 matching known lncRNAs in other species. A digital gene expression atlas revealed 2,935 tissue-specific and 3,269 ubiquitously-expressed lncRNAs. This study annotates lncRNAs in the rainbow trout genome and provides a valuable resource for functional genomics research in salmonids.
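The filtering criteria quoted above translate naturally into a small predicate. The sketch below applies a simplified version of them (minimum length, maximum ORF size, protein homology) to hypothetical transcript records; the coding-potential score used in the actual pipeline is omitted.

```python
# Simplified sketch of the filtering logic described in the abstract; the
# actual pipeline also uses homology searches and a coding-potential score.
def keep_as_lncRNA_candidate(length_nt, longest_orf_aa, has_protein_hit,
                             min_length=200, max_orf_aa=100):
    """Keep transcripts >= 200 nt with short ORFs and no protein homology."""
    if length_nt < min_length:
        return False
    if longest_orf_aa > max_orf_aa:
        return False
    if has_protein_hit:
        return False
    return True

transcripts = [
    ("tx1", 1500, 60, False),   # hypothetical transcript records
    ("tx2", 180, 40, False),
    ("tx3", 2400, 250, True),
]
print([name for name, *fields in transcripts if keep_as_lncRNA_candidate(*fields)])
```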
The Global Genome Biodiversity Network (GGBN) Data Standard specification.
Droege, G; Barker, K; Seberg, O; Coddington, J; Benson, E; Berendsohn, W G; Bunk, B; Butler, C; Cawsey, E M; Deck, J; Döring, M; Flemons, P; Gemeinholzer, B; Güntsch, A; Hollowell, T; Kelbert, P; Kostadinov, I; Kottmann, R; Lawlor, R T; Lyal, C; Mackenzie-Dodds, J; Meyer, C; Mulcahy, D; Nussbeck, S Y; O'Tuama, É; Orrell, T; Petersen, G; Robertson, T; Söhngen, C; Whitacre, J; Wieczorek, J; Yilmaz, P; Zetzsche, H; Zhang, Y; Zhou, X
2016-01-01
Genomic samples of non-model organisms are becoming increasingly important in a broad range of studies from developmental biology and biodiversity analyses to conservation. Genomic sample definition, description, quality, voucher information and metadata all need to be digitized and disseminated across scientific communities. This information needs to be concise and consistent in today's ever-expanding bioinformatic era, so that complementary data aggregators can easily map databases to one another. In order to facilitate the exchange of information on genomic samples and their derived data, the Global Genome Biodiversity Network (GGBN) Data Standard is intended to provide a platform based on a documented agreement to promote the efficient sharing and usage of genomic sample material and associated specimen information in a consistent way. The new data standard presented here builds upon existing standards commonly used within the community, extending them with the capability to exchange data on tissue, environmental and DNA samples as well as sequences. The GGBN Data Standard will reveal and democratize the hidden contents of biodiversity biobanks, for the convenience of everyone in the wider biobanking community. Technical tools exist for data providers to easily map their databases to the standard. Database URL: http://terms.tdwg.org/wiki/GGBN_Data_Standard. © The Author(s) 2016. Published by Oxford University Press.
2012-01-01
Background: A single-step blending approach allows genomic prediction using information from genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods: The data consisted of de-regressed proofs (DRP) for 5,214 genotyped and 9,374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results: Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions: The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
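A tiny numerical sketch of the relationship-matrix blending that the weights above refer to: a marker-based genomic matrix G and the pedigree matrix A22 for the genotyped animals are combined with a relative weight w on the pedigree part. The 3 x 3 matrices are invented for illustration, and the scale-adjustment step of the adjusted methods is not shown.

```python
# Minimal sketch of the relationship-matrix blending idea: a marker-based G
# and a pedigree-based A22 for the genotyped animals are combined with a
# relative weight w; the matrices below are tiny hypothetical examples.
import numpy as np

G = np.array([[1.02, 0.48, 0.10],
              [0.48, 0.99, 0.05],
              [0.10, 0.05, 1.05]])          # genomic relationships
A22 = np.array([[1.00, 0.50, 0.125],
                [0.50, 1.00, 0.125],
                [0.125, 0.125, 1.00]])      # pedigree relationships

def blend(G, A22, w=0.20):
    """Combine genomic and pedigree relationships with relative weight w on A22."""
    return (1 - w) * G + w * A22

print(blend(G, A22))
```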
Statistical Methods in Integrative Genomics
Richardson, Sylvia; Tseng, George C.; Sun, Wei
2016-01-01
Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531
Eyes wide open: the personal genome project, citizen science and veracity in informed consent
Angrist, Misha
2012-01-01
I am a close observer of the Personal Genome Project (PGP) and one of the original ten participants. The PGP was originally conceived as a way to test novel DNA sequencing technologies on human samples and to begin to build a database of human genomes and traits. However, its founder, Harvard geneticist George Church, was concerned about the fact that DNA is the ultimate digital identifier – individuals and many of their traits can be identified. Therefore, he believed that promising participants privacy and confidentiality would be impractical and disingenuous. Moreover, deidentification of samples would impoverish both genotypic and phenotypic data. As a result, the PGP has arguably become best known for its unprecedented approach to informed consent. All participants must pass an exam testing their knowledge of genomic science and privacy issues and agree to forgo the privacy and confidentiality of their genomic data and personal health records. Church aims to scale up to 100,000 participants. This special report discusses the impetus for the project, its early history and its potential to have a lasting impact on the treatment of human subjects in biomedical research. PMID:22328898
CRISPR-Cas encoding of a digital movie into the genomes of a population of living bacteria.
Shipman, Seth L; Nivala, Jeff; Macklis, Jeffrey D; Church, George M
2017-07-20
DNA is an excellent medium for archiving data. Recent efforts have illustrated the potential for information storage in DNA using synthesized oligonucleotides assembled in vitro. A relatively unexplored avenue of information storage in DNA is the ability to write information into the genome of a living cell by the addition of nucleotides over time. Using the Cas1-Cas2 integrase, the CRISPR-Cas microbial immune system stores the nucleotide content of invading viruses to confer adaptive immunity. When harnessed, this system has the potential to write arbitrary information into the genome. Here we use the CRISPR-Cas system to encode the pixel values of black and white images and a short movie into the genomes of a population of living bacteria. In doing so, we push the technical limits of this information storage system and optimize strategies to minimize those limitations. We also uncover underlying principles of the CRISPR-Cas adaptation system, including sequence determinants of spacer acquisition that are relevant for understanding both the basic biology of bacterial adaptation and its technological applications. This work demonstrates that this system can capture and stably store practical amounts of real data within the genomes of populations of living cells.
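As a purely illustrative companion to the abstract above, the sketch below encodes 8-bit pixel values as nucleotides at 2 bits per base and decodes them back. The actual work distributes such payloads across CRISPR spacers acquired by the Cas1-Cas2 integrase, which is not modelled here, and the base-to-bit mapping is an arbitrary choice.

```python
# Illustrative 2-bits-per-base encoding of pixel values into nucleotide
# strings; spacer acquisition by Cas1-Cas2 is not modelled.
BASES = "ACGT"

def pixels_to_dna(pixels):
    """Encode 8-bit pixel values as nucleotides, 4 bases per pixel."""
    seq = []
    for value in pixels:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(value >> shift) & 0b11])
    return "".join(seq)

def dna_to_pixels(seq):
    """Decode the nucleotide string back into 8-bit pixel values."""
    pixels = []
    for i in range(0, len(seq), 4):
        value = 0
        for base in seq[i:i + 4]:
            value = (value << 2) | BASES.index(base)
        pixels.append(value)
    return pixels

encoded = pixels_to_dna([0, 128, 255])
print(encoded, dna_to_pixels(encoded))
```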
Li, Xiang; Tambong, James; Yuan, Kat Xiaoli; Chen, Wen; Xu, Huimin; Lévesque, C André; De Boer, Solke H
2018-01-01
Although the genus Clavibacter was originally proposed to accommodate all phytopathogenic coryneform bacteria containing B2γ diaminobutyrate in the peptidoglycan, reclassification of all but one species into other genera has resulted in the current monospecific status of the genus. The single species in the genus, Clavibacter michiganensis, has multiple subspecies, which are all highly host-specific plant pathogens. Whole genome analysis based on average nucleotide identity and digital DNA-DNA hybridization as well as multi-locus sequence analysis (MLSA) of seven housekeeping genes support raising each of the C. michiganensis subspecies to species status. On the basis of whole genome and MLSA data, we propose the establishment of two new species and three new combinations: Clavibacter capsici sp. nov., comb. nov. and Clavibacter tessellarius sp. nov., comb. nov., and Clavibacter insidiosus comb. nov., Clavibacter nebraskensis comb. nov. and Clavibacter sepedonicus comb. nov.
Study on the system-level test method of digital metering in smart substation
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Yang, Min; Hu, Juan; Li, Fuchao; Luo, Ruixi; Li, Jinsong; Ai, Bing
2017-03-01
At present, the test methods for digital metering systems in smart substations are used to test and evaluate the performance of a single device. These methods can effectively guarantee the accuracy and reliability of the measurement results of a digital metering device operating on its own, but they do not fully reflect the performance when the devices are combined into a complete system. This paper describes the shortcomings of the existing test methods, proposes a system-level test method for digital metering in smart substations, and demonstrates the feasibility of the method through an actual test.
Approaches to Fungal Genome Annotation
Haas, Brian J.; Zeng, Qiandong; Pearson, Matthew D.; Cuomo, Christina A.; Wortman, Jennifer R.
2011-01-01
Fungal genome annotation is the starting point for analysis of genome content. This generally involves the application of diverse methods to identify features on a genome assembly such as protein-coding and non-coding genes, repeats and transposable elements, and pseudogenes. Here we describe tools and methods leveraged for eukaryotic genome annotation with a focus on the annotation of fungal nuclear and mitochondrial genomes. We highlight the application of the latest technologies and tools to improve the quality of predicted gene sets. The Broad Institute eukaryotic genome annotation pipeline is described as one example of how such methods and tools are integrated into a sequencing center’s production genome annotation environment. PMID:22059117
Codner, Gemma F; Lindner, Loic; Caulder, Adam; Wattenhofer-Donzé, Marie; Radage, Adam; Mertz, Annelyse; Eisenmann, Benjamin; Mianné, Joffrey; Evans, Edward P; Beechey, Colin V; Fray, Martin D; Birling, Marie-Christine; Hérault, Yann; Pavlovic, Guillaume; Teboul, Lydia
2016-08-05
Karyotypic integrity is essential for the successful germline transmission of alleles mutated in embryonic stem (ES) cells. Classical methods for the identification of aneuploidy involve cytological analyses that are time-consuming and require rare expertise to identify mouse chromosomes. As part of the International Mouse Phenotyping Consortium, we gathered data from over 1,500 ES cell clones and found that the germline transmission (GLT) efficiency of clones is compromised when over 50 % of cells harbour chromosome number abnormalities. In JM8 cells, chromosomes 1, 8, 11 or Y displayed copy number variation most frequently, whilst the remainder generally remain unchanged. We developed protocols employing droplet digital polymerase chain reaction (ddPCR) to accurately quantify the copy number of these four chromosomes, allowing efficient triage of ES clones prior to microinjection. We verified that assessments of aneuploidy, and thus decisions regarding the suitability of clones for microinjection, were concordant between classical cytological and ddPCR-based methods. Finally, we improved the method to include assay multiplexing so that two unstable chromosomes are counted simultaneously (and independently) in one reaction, to enhance throughput and further reduce the cost. We validated a PCR-based method as an alternative to classical karyotype analysis. This technique enables laboratories that are non-specialist, or work with large numbers of clones, to precisely screen ES cells for the most common aneuploidies prior to microinjection to ensure the highest level of germline transmission potential. The application of this method allows early exclusion of aneuploid ES cell clones in the ES cell to mouse conversion process, thus improving the chances of obtaining germline transmission and reducing the number of animals used in failed microinjection attempts. This method can be applied to any other experiments that require accurate analysis of the genome for copy number variation (CNV).
2014-01-01
The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify the protein-coding regions in DNA is rising rapidly. Several techniques of DNA feature extraction, involving various cross fields, have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. A comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method is a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
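To show what the period-3 signature looks like in practice, here is a plain windowed-DFT measure, close in spirit to the periodogram-style baseline mentioned above rather than the least-norm estimator itself; the window length, step size, and toy sequence are arbitrary choices.

```python
# Sketch of the period-3 signature using a plain windowed DFT measure (a
# periodogram-style baseline, not the least-norm estimator itself).
import numpy as np

def period3_spectrum(dna, window=351, step=3):
    """Return power at the N/3 frequency bin of base-indicator sequences over sliding windows."""
    dna = dna.upper()
    scores = []
    for start in range(0, len(dna) - window + 1, step):
        chunk = dna[start:start + window]
        s = 0.0
        for base in "ACGT":
            x = np.array([1.0 if b == base else 0.0 for b in chunk])
            X = np.fft.fft(x)
            s += np.abs(X[window // 3]) ** 2      # power at the period-3 frequency
        scores.append(s)
    return np.array(scores)

toy_gene = ("ATG" + "GCT" * 200 + "TAA") + "ACGTGTTACA" * 30   # coding-like then noncoding-like
print(period3_spectrum(toy_gene)[:5])
```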
Özdemir, Vural; Badr, Kamal F; Dove, Edward S; Endrenyi, Laszlo; Geraci, Christy Jo; Hotez, Peter J; Milius, Djims; Neves-Pereira, Maria; Pang, Tikki; Rotimi, Charles N; Sabra, Ramzi; Sarkissian, Christineh N; Srivastava, Sanjeeva; Tims, Hesther; Zgheib, Nathalie K; Kickbusch, Ilona
2013-04-01
Biomedical science in the 21st century is embedded in, and draws from, a digital commons and "Big Data" created by high-throughput Omics technologies such as genomics. Classic Edisonian metaphors of science and scientists (i.e., "the lone genius" or other narrow definitions of expertise) are ill equipped to harness the vast promises of the 21st century digital commons. Moreover, in medicine and life sciences, experts often under-appreciate the important contributions made by citizen scholars and lead users of innovations to design innovative products and co-create new knowledge. We believe there are a large number of users waiting to be mobilized so as to engage with Big Data as citizen scientists-only if some funding were available. Yet many of these scholars may not meet the meta-criteria used to judge expertise, such as a track record in obtaining large research grants or a traditional academic curriculum vitae. This innovation research article describes a novel idea and action framework: micro-grants, each worth $1000, for genomics and Big Data. Though a relatively small amount at first glance, this far exceeds the annual income of the "bottom one billion"-the 1.4 billion people living below the extreme poverty level defined by the World Bank ($1.25/day). We describe two types of micro-grants. Type 1 micro-grants can be awarded through established funding agencies and philanthropies that create micro-granting programs to fund a broad and highly diverse array of small artisan labs and citizen scholars to connect genomics and Big Data with new models of discovery such as open user innovation. Type 2 micro-grants can be funded by existing or new science observatories and citizen think tanks through crowd-funding mechanisms described herein. Type 2 micro-grants would also facilitate global health diplomacy by co-creating crowd-funded micro-granting programs across nation-states in regions facing political and financial instability, while sharing similar disease burdens, therapeutics, and diagnostic needs. We report the creation of ten Type 2 micro-grants for citizen science and artisan labs to be administered by the nonprofit Data-Enabled Life Sciences Alliance International (DELSA Global, Seattle). Our hope is that these micro-grants will spur novel forms of disruptive innovation and genomics translation by artisan scientists and citizen scholars alike. We conclude with a neglected voice from the global health frontlines, the American University of Iraq in Sulaimani, and suggest that many similar global regions are now poised for micro-grant enabled collective innovation to harness the 21st century digital commons.
The Sequencing Bead Array (SBA), a Next-Generation Digital Suspension Array
Akhras, Michael S.; Pettersson, Erik; Diamond, Lisa; Unemo, Magnus; Okamoto, Jennifer; Davis, Ronald W.; Pourmand, Nader
2013-01-01
Here we describe the novel Sequencing Bead Array (SBA), a complete assay for molecular diagnostics and typing applications. SBA is a digital suspension array using Next-Generation Sequencing (NGS), to replace conventional optical readout platforms. The technology allows for reducing the number of instruments required in a laboratory setting, where the same NGS instrument could be employed from whole-genome and targeted sequencing to SBA broad-range biomarker detection and genotyping. As proof-of-concept, a model assay was designed that could distinguish ten Human Papillomavirus (HPV) genotypes associated with cervical cancer progression. SBA was used to genotype 20 cervical tumor samples and, when compared with amplicon pyrosequencing, was able to detect two additional co-infections due to increased sensitivity. We also introduce in-house software Sphix, enabling easy accessibility and interpretation of results. The technology offers a multi-parallel, rapid, robust, and scalable system that is readily adaptable for a multitude of microarray diagnostic and typing applications, e.g. genetic signatures, single nucleotide polymorphisms (SNPs), structural variations, and immunoassays. SBA has the potential to dramatically change the way we perform probe-based applications, and allow for a smooth transition towards the technology offered by genomic sequencing. PMID:24116138
Digitization of Electrocardiogram From Telemetry Prior to In-hospital Cardiac Arrest: A Pilot Study.
Attin, Mina; Wang, Lu; Soroushmehr, S M Reza; Lin, Chii-Dean; Lemus, Hector; Spadafore, Maxwell; Najarian, Kayvan
2016-03-01
Analyzing telemetry electrocardiogram (ECG) data over an extended period is often time-consuming because digital records are not widely available at hospitals. Investigating trends and patterns in the ECG data could lead to establishing predictors that would shorten response time to in-hospital cardiac arrest (I-HCA). This study was conducted to validate a novel method of digitizing paper ECG tracings from telemetry systems in order to facilitate the use of heart rate as a diagnostic feature prior to I-HCA. This multicenter study used telemetry to investigate full-disclosure ECG papers of 44 cardiovascular patients obtained within 1 hr of I-HCA with initial rhythms of pulseless electrical activity and asystole. Digital ECGs were available for seven of these patients. An algorithm to digitize the full-disclosure ECG papers was developed using the shortest path method. The heart rate was measured manually (averaging R-R intervals) for ECG papers and automatically for digitized and digital ECGs. Significant correlations were found between manual and automated measurements of digitized ECGs (p < .001) and between digitized and digital ECGs (p < .001). Bland-Altman methods showed bias = .001 s, SD = .0276 s, lower and upper 95% limits of agreement for digitized and digital ECGs = .055 and -.053 s, and percentage error = 0.22%. Root mean square (rms), percentage rms difference, and signal to noise ratio values were in acceptable ranges. The digitization method was validated. Digitized ECG provides an efficient and accurate way of measuring heart rate over an extended period of time. © The Author(s) 2015.
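The heart-rate feature described above reduces to simple arithmetic on R-R intervals; the sketch below averages a handful of made-up intervals and converts the result to beats per minute, as a reminder of the quantity being compared between the manual, digitized, and digital measurements.

```python
# Simple sketch of the heart-rate computation the study relies on: average
# the R-R intervals and convert to beats per minute (intervals are made up).
rr_intervals_s = [0.82, 0.80, 0.85, 0.79, 0.81]      # seconds between R peaks

mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
heart_rate_bpm = 60.0 / mean_rr
print(f"mean R-R = {mean_rr:.3f} s, heart rate = {heart_rate_bpm:.1f} bpm")
```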
Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini
2009-01-01
Two methods for the segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement, followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which a radiologist has the freedom to change the level of segmentation. An appropriate user-friendly computer-aided visualization (CAV) system embedding the two methods is realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard and from clinical practice, using digitized films and digital images from a full-field digital mammography system.
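As a generic illustration of morphology-based enhancement of small bright details (not the authors' exact operator sequence, and not the multifractal method), the sketch below applies a white top-hat transform to a synthetic image containing one small bright spot on noisy background tissue.

```python
# Hedged sketch of morphology-based enhancement of small bright details
# (a white top-hat), not the authors' exact operator sequence.
import numpy as np
from scipy.ndimage import white_tophat

rng = np.random.default_rng(2)
mammogram = rng.normal(0.5, 0.05, (256, 256))         # stand-in for background tissue
mammogram[100:103, 120:123] += 0.4                    # small bright, microcalcification-like detail

enhanced = white_tophat(mammogram, size=9)            # keeps only small bright structures
print(enhanced.max(), enhanced.mean())
```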
Alignment-free genome tree inference by learning group-specific distance metrics.
Patil, Kaustubh R; McHardy, Alice C
2013-01-01
Understanding the evolutionary relationships between organisms is vital for their in-depth study. Gene-based methods are often used to infer such relationships, which are not without drawbacks. One can now attempt to use genome-scale information, because of the ever increasing number of genomes available. This opportunity also presents a challenge in terms of computational efficiency. Two fundamentally different methods are often employed for sequence comparisons, namely alignment-based and alignment-free methods. Alignment-free methods rely on the genome signature concept and provide a computationally efficient way that is also applicable to nonhomologous sequences. The genome signature contains evolutionary signal as it is more similar for closely related organisms than for distantly related ones. We used genome-scale sequence information to infer taxonomic distances between organisms without additional information such as gene annotations. We propose a method to improve genome tree inference by learning specific distance metrics over the genome signature for groups of organisms with similar phylogenetic, genomic, or ecological properties. Specifically, our method learns a Mahalanobis metric for a set of genomes and a reference taxonomy to guide the learning process. By applying this method to more than a thousand prokaryotic genomes, we showed that, indeed, better distance metrics could be learned for most of the 18 groups of organisms tested here. Once a group-specific metric is available, it can be used to estimate the taxonomic distances for other sequenced organisms from the group. This study also presents a large scale comparison between 10 methods--9 alignment-free and 1 alignment-based.
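The sketch below puts the two ingredients named in the abstract side by side: a k-mer genome signature and a Mahalanobis distance under a metric matrix M. Here M is simply the identity (a Euclidean placeholder); learning a group-specific M from a reference taxonomy, which is the paper's contribution, is not reproduced, and the sequences are toy strings.

```python
# Sketch of the two ingredients named in the abstract: a k-mer genome
# signature and a Mahalanobis distance under a metric matrix M (identity
# here, as a placeholder for a learned group-specific metric).
import numpy as np
from itertools import product

def kmer_signature(seq, k=4):
    """Normalized k-mer frequency vector (the genome signature)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
    vec = np.array([counts[k_] for k_ in kmers], dtype=float)
    return vec / max(vec.sum(), 1.0)

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

sig1 = kmer_signature("ATGCGC" * 300)
sig2 = kmer_signature("ATGCGT" * 300)
M = np.eye(sig1.size)                 # stand-in for the learned metric
print(mahalanobis(sig1, sig2, M))
```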
Artifacts in Digital Coincidence Timing
Moses, W. W.; Peng, Q.
2014-01-01
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885
Synthetic aperture in terahertz in-line digital holography for resolution enhancement.
Huang, Haochong; Rong, Lu; Wang, Dayong; Li, Weihua; Deng, Qinghua; Li, Bin; Wang, Yunxin; Zhan, Zhiqiang; Wang, Xuemin; Wu, Weidong
2016-01-20
Terahertz digital holography is a combination of terahertz technology and digital holography. In digital holography, the imaging resolution is the key parameter determining the level of detail in a reconstructed wavefront. In this paper, the synthetic aperture method is applied to terahertz digital holography and an in-line arrangement is built to perform the detection. The limited resolving power of previous terahertz digital holographic systems has kept the technique from meeting the requirements of practical detection. In contrast, the experimentally resolved power of the present method reaches 125 μm, which is the best resolution in terahertz digital holography to date. Furthermore, the detection of a biological specimen is conducted to show the practical application. Overall, the results of the proposed method demonstrate the enhancement of experimental imaging resolution and show that the amplitude and phase distributions of the fine structure of samples can be reconstructed using terahertz digital holography.
NASA Technical Reports Server (NTRS)
Montgomery, O. L.
1977-01-01
Procedures developed for digitizing the transportation arteries, airports, and dock facilities of Alabama and placing them in a computerized format compatible with the Alabama Resource Information System are described. The time required to digitize by the following methods was evaluated: (a) manual digitizing, (b) the Telereadex 29 film reading and digitizing system, and (c) digitizing tablets. A method for digitizing and storing information from the U.T.M. grid cell base that was compatible with the system was developed and tested. The highways, navigable waterways, railroads, airports, and docks in the study area were digitized and the data stored. The manual method of digitizing was shown to be best for small amounts of data, while graphic input from the digitizing tablets would be the best approach for entering the large amounts of data required for an entire state.
Variation block-based genomics method for crop plants.
Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon
2014-06-15
In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks that arose through crossing and selection. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method accurately identified recurring recombination blocks and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
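A toy Python sketch of the block-splitting idea, under the simplifying assumption that each cultivar has already been summarized as haplotype labels over windows rather than raw variant calls. It is not the authors' pipeline, only an illustration of how the union of per-cultivar change points defines variation blocks.

```python
def change_points(labels):
    """Indices where consecutive window labels differ (assumed recombination sites)."""
    return {i for i in range(1, len(labels)) if labels[i] != labels[i - 1]}

def variation_blocks(cultivars):
    """Split windows 0..n into blocks using all cultivars' change points."""
    n = len(next(iter(cultivars.values())))
    cuts = sorted(set().union(*(change_points(v) for v in cultivars.values())))
    bounds = [0] + cuts + [n]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

# Toy haplotype patterns over 8 windows for three hypothetical cultivars.
cultivars = {"cv1": "AAAABBBB", "cv2": "AABBBBBB", "cv3": "AAAAAACC"}
print(variation_blocks(cultivars))   # [(0, 2), (2, 4), (4, 6), (6, 8)]
```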
Aubrey, Wayne; Riley, Michael C; Young, Michael; King, Ross D; Oliver, Stephen G; Clare, Amanda
2015-01-01
Many advances in synthetic biology require the removal of a large number of genomic elements from a genome. Most existing deletion methods leave behind markers, and as there are a limited number of markers, such methods can only be applied a fixed number of times. Deletion methods that recycle markers generally are either imprecise (remove untargeted sequences), or leave scar sequences which can cause genome instability and rearrangements. No existing marker recycling method is automation-friendly. We have developed a novel openly available deletion tool that consists of: 1) a method for deleting genomic elements that can be repeatedly used without limit, is precise, scar-free, and suitable for automation; and 2) software to design the method's primers. Our tool is sequence agnostic and could be used to delete large numbers of coding sequences, promoter regions, transcription factor binding sites, terminators, etc in a single genome. We have validated our tool on the deletion of non-essential open reading frames (ORFs) from S. cerevisiae. The tool is applicable to arbitrary genomes, and we provide primer sequences for the deletion of: 90% of the ORFs from the S. cerevisiae genome, 88% of the ORFs from S. pombe genome, and 85% of the ORFs from the L. lactis genome.
Szeinbaum, Nadia; Kellum, Cailin E; Glass, Jennifer B; Janda, J Michael; DiChristina, Thomas J
2018-04-01
Previously, experimental DNA-DNA hybridization (DDH) between Shewanella haliotis JCM 14758T and Shewanella algae JCM 21037T had suggested that the two strains could be considered different species, despite minimal phenotypic differences. The recent isolation of Shewanella sp. MN-01, with 99 % 16S rRNA gene identity to S. algae and S. haliotis, revealed a potential taxonomic problem between these two species. In this study, we reassessed the nomenclature of S. haliotis and S. algae using available whole-genome sequences. The whole-genome sequences of S. haliotis JCM 14758T and ten S. algae strains showed ≥97.7 % average nucleotide identity (ANI) and >78.9 % digital DDH, clearly above the recommended species thresholds. According to the rules of priority and in view of the results obtained, S. haliotis is to be considered a later heterotypic synonym of S. algae. Because the whole-genome sequence of Shewanella sp. strain MN-01 shares >99 % ANI with S. algae JCM 14758T, it can be confidently identified as S. algae.
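As a hedged illustration of the species-level decision, the snippet below applies commonly cited whole-genome thresholds (roughly 95-96 % ANI and 70 % dDDH) to the reported lower-bound values; the thresholds and inputs are stated assumptions, not recomputed from the genomes.

```python
def same_species(ani_percent, dddh_percent, ani_cut=95.0, dddh_cut=70.0):
    """True when both whole-genome metrics exceed the assumed species thresholds."""
    return ani_percent >= ani_cut and dddh_percent >= dddh_cut

# Lower-bound values reported for S. haliotis JCM 14758T vs. the S. algae strains.
print(same_species(97.7, 78.9))   # True -> treated as a single species
```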
ERIC Educational Resources Information Center
O'Halloran, Kay L.; Tan, Sabine; Pham, Duc-Son; Bateman, John; Vande Moere, Andrew
2018-01-01
This article demonstrates how a digital environment offers new opportunities for transforming qualitative data into quantitative data in order to use data mining and information visualization for mixed methods research. The digital approach to mixed methods research is illustrated by a framework which combines qualitative methods of multimodal…
ERIC Educational Resources Information Center
Alhajri, Salman
2016-01-01
Purpose: this paper investigates the effectiveness of teaching methods used in graphic design pedagogy in both analogue and digital education systems. Methodology and approach: the paper is based on theoretical study using a qualitative, case study approach. Comparison between the digital teaching methods and traditional teaching methods was…
PGP repository: a plant phenomics and genomics data publication infrastructure.
Arend, Daniel; Junker, Astrid; Scholz, Uwe; Schüler, Danuta; Wylie, Juliane; Lange, Matthias
2016-01-01
Plant genomics and phenomics represent the most promising tools for accelerating yield gains and overcoming emerging crop productivity bottlenecks. However, accessing this wealth of plant diversity requires the characterization of this material using state-of-the-art genomic, phenomic and molecular technologies and the release of the resulting research data via a long-term stable, open-access portal. Although several international consortia and public resource centres offer services for plant research data management, valuable digital assets remain unpublished and thus inaccessible to the scientific community. Recently, the Leibniz Institute of Plant Genetics and Crop Plant Research and the German Plant Phenotyping Network have jointly initiated the Plant Genomics and Phenomics Research Data Repository (PGP) as an infrastructure to comprehensively publish plant research data. This covers in particular cross-domain datasets that are not being published in central repositories because of their volume or unsupported data scope, such as image collections from plant phenotyping and microscopy, unfinished genomes, genotyping data, visualizations of morphological plant models, data from mass spectrometry, as well as software and documents. The repository is hosted at the Leibniz Institute of Plant Genetics and Crop Plant Research, using e!DAL as the software infrastructure and a Hierarchical Storage Management System as the data archival backend. A newly developed data submission tool was made available to the consortium; it features a high level of automation to lower the barriers of data publication. After an internal review process, data are published with citable digital object identifiers, and a core set of technical metadata is registered at DataCite. The e!DAL-embedded web frontend generates a landing page for each dataset and supports interactive exploration. PGP is registered as a research data repository at BioSharing.org, re3data.org and OpenAIRE as a valid EU Horizon 2020 open data archive. These features, together with the programmatic interface and the support of standard metadata formats, enable PGP to fulfil the FAIR data principles: findable, accessible, interoperable, reusable. Database URL: http://edal.ipk-gatersleben.de/repos/pgp/. © The Author(s) 2016. Published by Oxford University Press.
14 CFR 129.20 - Digital flight data recorders.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Digital flight data recorders. 129.20... § 129.20 Digital flight data recorders. No person may operate an aircraft under this part that is... digital method of recording and storing data and a method of readily retrieving that data from the storage...
14 CFR 129.20 - Digital flight data recorders.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Digital flight data recorders. 129.20... § 129.20 Digital flight data recorders. No person may operate an aircraft under this part that is... digital method of recording and storing data and a method of readily retrieving that data from the storage...
Digital Storytelling: A Method for Engaging Students and Increasing Cultural Competency
ERIC Educational Resources Information Center
Grant, Natalie S.; Bolin, Brien L.
2016-01-01
Digital storytelling is explored as a method of engaging students in the development of media literacy and cultural competency. This paper describes the perceptions and experiences of 96 undergraduate students at a large Midwestern university, after completing a digital storytelling project in a semester-long diversity course. Digital storytelling…
Digital processing of radiographic images for print publication.
Cockerill, James W
2002-01-01
Digital imaging of X-rays yields high-quality, evenly exposed negatives and prints. This article outlines the materials and methods of the technique and discusses the advantages of digital radiographic images.
A Digital PCR-Based Method for Efficient and Highly Specific Screening of Genome Edited Cells
Berman, Jennifer R.; Postovit, Lynne-Marie
2016-01-01
The rapid adoption of gene editing tools such as CRISPRs and TALENs for research and eventually therapeutics necessitates assays that can rapidly detect and quantitate the desired alterations. Currently, the most commonly used assay employs “mismatch nucleases” T7E1 or “Surveyor” that recognize and cleave heteroduplexed DNA amplicons containing mismatched base-pairs. However, this assay is prone to false positives due to cancer-associated mutations and/or SNPs and requires large amounts of starting material. Here we describe a powerful alternative wherein droplet digital PCR (ddPCR) can be used to distinguish homozygous from heterozygous mutations with superior levels of both precision and sensitivity. We use this assay to detect knockout-inducing alterations to stem cell-associated proteins, NODAL and SFRP1, generated using either TALENs or an “all-in-one” CRISPR/Cas plasmid that we have modified for one-step cloning and blue/white screening of transformants. Moreover, we highlight how ddPCR can be used to assess the efficiency of varying TALEN-based strategies. Collectively, this work highlights how ddPCR-based screening can be paired with CRISPR and TALEN technologies to enable sensitive, specific, and streamlined approaches to gene editing and validation. PMID:27089539
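A hedged sketch of the Poisson correction that underlies ddPCR quantification: the fraction of negative droplets gives the mean copies per droplet, and a nominal droplet volume (assumed here to be 0.85 nL) converts that to copies per microlitre. The droplet counts below are hypothetical.

```python
import math

def ddpcr_concentration(positive, total, droplet_nl=0.85):
    """Poisson-corrected target concentration (copies/uL) from droplet counts.
    droplet_nl is the nominal droplet volume; 0.85 nL is an assumption here."""
    frac_neg = (total - positive) / total
    lam = -math.log(frac_neg)          # mean copies per droplet
    return lam / (droplet_nl * 1e-3)   # copies per microlitre

# Hypothetical counts for an edited-allele assay and a reference assay:
edited = ddpcr_concentration(positive=2400, total=15000)
reference = ddpcr_concentration(positive=4600, total=15000)
print(f"edited allele fraction ~ {edited / reference:.2f}")
```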
DNA nanomechanics allows direct digital detection of complementary DNA and microRNA targets.
Husale, Sudhir; Persson, Henrik H J; Sahin, Ozgur
2009-12-24
Techniques to detect and quantify DNA and RNA molecules in biological samples have had a central role in genomics research. Over the past decade, several techniques have been developed to improve detection performance and reduce the cost of genetic analysis. In particular, significant advances in label-free methods have been reported. Yet detection of DNA molecules at concentrations below the femtomolar level requires amplified detection schemes. Here we report a unique nanomechanical response of hybridized DNA and RNA molecules that serves as an intrinsic molecular label. Nanomechanical measurements on a microarray surface have sufficient background signal rejection to allow direct detection and counting of hybridized molecules. The digital response of the sensor provides a large dynamic range that is critical for gene expression profiling. We have measured differential expressions of microRNAs in tumour samples; such measurements have been shown to help discriminate between the tissue origins of metastatic tumours. Two hundred picograms of total RNA is found to be sufficient for this analysis. In addition, the limit of detection in pure samples is found to be one attomolar. These results suggest that nanomechanical read-out of microarrays promises attomolar-level sensitivity and large dynamic range for the analysis of gene expression, while eliminating biochemical manipulations, amplification and labelling.
Niu, Yuqing; Hu, Bei; Li, Xiaoquan; Chen, Houbin; Šamaj, Jozef; Xu, Chunxiang
2018-01-01
Banana Fusarium wilt caused by Fusarium oxysporum f. sp. cubense (Foc) is one of the most destructive soil-borne diseases. In this study, young tissue-cultured plantlets of banana (Musa spp. AAA) cultivars differing in Foc susceptibility were used to reveal their differential responses to this pathogen using digital gene expression (DGE). Data were evaluated by various bioinformatic tools (Venn diagrams, gene ontology (GO) annotation and Kyoto encyclopedia of genes and genomes (KEGG) pathway analyses) and immunofluorescence labelling method to support the identification of gene candidates determining the resistance of banana against Foc. Interestingly, we have identified MaWRKY50 as an important gene involved in both constitutive and induced resistance. We also identified new genes involved in the resistance of banana to Foc, including several other transcription factors (TFs), pathogenesis-related (PR) genes and some genes related to the plant cell wall biosynthesis or degradation (e.g., pectinesterases, β-glucosidases, xyloglucan endotransglucosylase/hydrolase and endoglucanase). The resistant banana cultivar shows activation of PR-3 and PR-4 genes as well as formation of different constitutive cell barriers to restrict spreading of the pathogen. These data suggest new mechanisms of banana resistance to Foc. PMID:29364855
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-03
...-Exclusive Licenses: Multi-Focal Structured Illumination Microscopy Systems and Methods AGENCY: National... pertains to a system and method for digital confocal microscopy that rapidly processes enhanced images. In particular, the invention is a method for digital confocal microscopy that includes a digital mirror device...
Inhibition mechanisms of hemoglobin, immunoglobulin G, and whole blood in digital and real-time PCR.
Sidstedt, Maja; Hedman, Johannes; Romsos, Erica L; Waitara, Leticia; Wadsö, Lars; Steffen, Carolyn R; Vallone, Peter M; Rådström, Peter
2018-04-01
Blood samples are widely used for PCR-based DNA analysis in fields such as diagnosis of infectious diseases, cancer diagnostics, and forensic genetics. In this study, the mechanisms behind blood-induced PCR inhibition were evaluated by use of whole blood as well as known PCR-inhibitory molecules in both digital PCR and real-time PCR. Also, electrophoretic mobility shift assay was applied to investigate interactions between inhibitory proteins and DNA, and isothermal titration calorimetry was used to directly measure effects on DNA polymerase activity. Whole blood caused a decrease in the number of positive digital PCR reactions, lowered amplification efficiency, and caused severe quenching of the fluorescence of the passive reference dye 6-carboxy-X-rhodamine as well as the double-stranded DNA binding dye EvaGreen. Immunoglobulin G was found to bind to single-stranded genomic DNA, leading to increased quantification cycle values. Hemoglobin affected the DNA polymerase activity and thus lowered the amplification efficiency. Hemoglobin and hematin were shown to be the molecules in blood responsible for the fluorescence quenching. In conclusion, hemoglobin and immunoglobulin G are the two major PCR inhibitors in blood, where the first affects amplification through a direct effect on the DNA polymerase activity and quenches the fluorescence of free dye molecules, and the latter binds to single-stranded genomic DNA, hindering DNA polymerization in the first few PCR cycles. Graphical abstract: PCR inhibition mechanisms of hemoglobin and immunoglobulin G (IgG). Abbreviations: Cq, quantification cycle; dsDNA, double-stranded DNA; ssDNA, single-stranded DNA.
Ooka, Tadasuke; Terajima, Jun; Kusumoto, Masahiro; Iguchi, Atsushi; Kurokawa, Ken; Ogura, Yoshitoshi; Asadulghani, Md; Nakayama, Keisuke; Murase, Kazunori; Ohnishi, Makoto; Iyoda, Sunao; Watanabe, Haruo; Hayashi, Tetsuya
2009-09-01
Enterohemorrhagic Escherichia coli O157 (EHEC O157) is a food-borne pathogen that has raised worldwide public health concern. The development of simple and rapid strain-typing methods is crucial for the rapid detection and surveillance of EHEC O157 outbreaks. In the present study, we developed a multiplex PCR-based strain-typing method for EHEC O157, which is based on the variability in genomic location of IS629 among EHEC O157 strains. This method is very simple, in that the procedures are completed within 2 h, the analysis can be performed without the need for special equipment or techniques (requiring only conventional PCR and agarose gel electrophoresis systems), the results can easily be transformed into digital data, and the genes for the major virulence markers of EHEC O157 (the stx(1), stx(2), and eae genes) can be detected simultaneously. Using this method, 201 EHEC O157 strains showing different XbaI digestion patterns in pulsed-field gel electrophoresis (PFGE) analysis were classified into 127 types, and outbreak-related strains showed identical or highly similar banding patterns. Although this method is less discriminatory than PFGE, it may be useful as a primary screening tool for EHEC O157 outbreaks.
Bernard, Guillaume; Chan, Cheong Xin; Ragan, Mark A
2016-07-01
Alignment-free (AF) approaches have recently been highlighted as alternatives to methods based on multiple sequence alignment in phylogenetic inference. However, the sensitivity of AF methods to genome-scale evolutionary scenarios is little known. Here, using simulated microbial genome data we systematically assess the sensitivity of nine AF methods to three important evolutionary scenarios: sequence divergence, lateral genetic transfer (LGT) and genome rearrangement. Among these, AF methods are most sensitive to the extent of sequence divergence, less sensitive to low and moderate frequencies of LGT, and most robust against genome rearrangement. We describe the application of AF methods to three well-studied empirical genome datasets, and introduce a new application of the jackknife to assess node support. Our results demonstrate that AF phylogenomics is computationally scalable to multi-genome data and can generate biologically meaningful phylogenies and insights into microbial evolution.
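As a simple illustration of one alignment-free flavour (not any of the nine benchmarked methods specifically), the sketch below compares two toy sequences by the cosine distance between their k-mer frequency profiles; the sequences are fabricated purely to exercise the functions.

```python
from collections import Counter
import math

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency profile of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def cosine_distance(p, q):
    """1 - cosine similarity between two sparse frequency profiles."""
    dot = sum(p.get(x, 0.0) * q.get(x, 0.0) for x in set(p) | set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / norm

g1 = "ATGCGTACGTTAGCATGCGTACGTTAGC" * 10
g2 = "ATGCGTACGATAGCATGCGTACGTTAGC" * 10
print(round(cosine_distance(kmer_profile(g1), kmer_profile(g2)), 4))
```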
Development of Coriolis mass flowmeter with digital drive and signal processing technology.
Hou, Qi-Li; Xu, Ke-Jun; Fang, Min; Liu, Cui; Xiong, Wen-Jun
2013-09-01
Coriolis mass flowmeters (CMFs) often suffer from two-phase flow, which may cause flowtube stalling. To solve this problem, a digital drive method and a digital signal processing method for CMFs are studied and implemented in this paper. A positive-negative step signal is used to initiate the flowtube oscillation without knowing the natural frequency of the flowtube. A digital zero-crossing detection method based on Lagrange interpolation is adopted to calculate the frequency and phase difference of the sensor output signals in order to synthesize the digital drive signal. The digital drive approach is implemented with a multiplying digital-to-analog converter (MDAC) and a direct digital synthesizer (DDS). A digital Coriolis mass flow transmitter is developed with a digital signal processor (DSP) to control the digital drive and perform the signal processing. Water flow calibrations and gas-liquid two-phase flow experiments are conducted to examine the performance of the transmitter. The experimental results show that the transmitter shortens the start-up time and can maintain the oscillation of the flowtube under two-phase flow conditions. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
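A hedged Python sketch of the zero-crossing idea: rising zero crossings are located to sub-sample precision by first-order (linear) Lagrange interpolation, and frequency and phase difference are estimated from the crossing times of two synthetic sensor signals. The sampling rate, tube frequency and phase shift are assumed values, not those of the transmitter described above.

```python
import numpy as np

def zero_crossings(x, fs):
    """Sub-sample rising zero-crossing times using first-order (linear)
    Lagrange interpolation between adjacent samples."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    frac = -x[idx] / (x[idx + 1] - x[idx])     # interpolated fraction of a sample
    return (idx + frac) / fs

fs = 10_000.0                                   # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
f0, dphi = 100.0, 0.02                          # tube frequency and phase shift (assumed)
s1 = np.sin(2 * np.pi * f0 * t)
s2 = np.sin(2 * np.pi * f0 * t - dphi)

t1, t2 = zero_crossings(s1, fs), zero_crossings(s2, fs)
freq = 1.0 / np.mean(np.diff(t1))               # mean period -> frequency
n = min(len(t1), len(t2))
phase_diff = 2 * np.pi * freq * np.mean(t2[:n] - t1[:n])
print(f"estimated frequency {freq:.2f} Hz, phase difference {phase_diff:.4f} rad")
```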
Perspectives: Gene Expression in Fisheries Management
Nielsen, Jennifer L.; Pavey, Scott A.
2010-01-01
Functional genes and gene expression have been connected to physiological traits linked to effective production and broodstock selection in aquaculture, selective implications of commercial fish harvest, and adaptive changes reflected in non-commercial fish populations subject to human disturbance and climate change. Gene mapping using single nucleotide polymorphisms (SNPs) to identify functional genes, gene expression (analogue microarrays and real-time PCR), and digital sequencing technologies looking at RNA transcripts present new concepts and opportunities in support of effective and sustainable fisheries. Genomic tools have been rapidly growing in aquaculture research addressing aspects of fish health, toxicology, and early development. Genomic technologies linking effects in functional genes involved in growth, maturation and life history development have been tied to selection resulting from harvest practices. Incorporating new and ever-increasing knowledge of fish genomes is opening a different perspective on local adaptation that will prove invaluable in wild fish conservation and management. Conservation of fish stocks is rapidly incorporating research on critical adaptive responses directed at the effects of human disturbance and climate change through gene expression studies. Genomic studies of fish populations can be generally grouped into three broad categories: 1) evolutionary genomics and biodiversity; 2) adaptive physiological responses to a changing environment; and 3) adaptive behavioral genomics and life history diversity. We review current genomic research in fisheries focusing on those that use microarrays to explore differences in gene expression among phenotypes and within or across populations, information that is critically important to the conservation of fish and their relationship to humans.
Genomic and Genetic Diversity within the Pseudomonas fluorescens Complex
Garrido-Sanz, Daniel; Meier-Kolthoff, Jan P.; Göker, Markus; Martín, Marta; Rivilla, Rafael; Redondo-Nieto, Miguel
2016-01-01
The Pseudomonas fluorescens complex includes Pseudomonas strains that have been taxonomically assigned to more than fifty different species, many of which have been described as plant growth-promoting rhizobacteria (PGPR) with potential applications in biocontrol and biofertilization. So far the phylogeny of this complex has been analyzed according to phenotypic traits, 16S rDNA, MLSA and inferred by whole-genome analysis. However, since most of the type strains have not been fully sequenced and new species are frequently described, correlation between taxonomy and phylogenomic analysis is missing. In recent years, the genomes of a large number of strains have been sequenced, showing important genomic heterogeneity and providing information suitable for genomic studies that are important to understand the genomic and genetic diversity shown by strains of this complex. Based on MLSA and several whole-genome sequence-based analyses of 93 sequenced strains, we have divided the P. fluorescens complex into eight phylogenomic groups that agree with previous works based on type strains. Digital DDH (dDDH) identified 69 species and 75 subspecies within the 93 genomes. The eight groups corresponded to clustering with a threshold of 31.8% dDDH, in full agreement with our MLSA. The Average Nucleotide Identity (ANI) approach showed inconsistencies regarding the assignment to species and to the eight groups. The small core genome of 1,334 CDSs and the large pan-genome of 30,848 CDSs, show the large diversity and genetic heterogeneity of the P. fluorescens complex. However, a low number of strains were enough to explain most of the CDSs diversity at core and strain-specific genomic fractions. Finally, the identification and analysis of group-specific genome and the screening for distinctive characters revealed a phylogenomic distribution of traits among the groups that provided insights into biocontrol and bioremediation applications as well as their role as PGPR. PMID:26915094
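The grouping step can be pictured with the toy sketch below, which links genome pairs whose dDDH exceeds the 31.8 % threshold quoted above and reports connected components; the matrix values are hypothetical and the code is an illustration, not the authors' analysis.

```python
import numpy as np

def group_by_threshold(dddh, threshold=31.8):
    """Connected components of the graph linking genome pairs whose dDDH
    exceeds the threshold -- a simple stand-in for the clustering step."""
    n = len(dddh)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if dddh[i][j] >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy symmetric dDDH matrix for five hypothetical genomes (percent).
toy = np.array([[100, 45, 40, 20, 18],
                [ 45, 100, 50, 22, 19],
                [ 40, 50, 100, 21, 20],
                [ 20, 22, 21, 100, 60],
                [ 18, 19, 20, 60, 100]])
print(group_by_threshold(toy))   # [[0, 1, 2], [3, 4]]
```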
Draft genome of the leopard gecko, Eublepharis macularius.
Xiong, Zijun; Li, Fang; Li, Qiye; Zhou, Long; Gamble, Tony; Zheng, Jiao; Kui, Ling; Li, Cai; Li, Shengbin; Yang, Huanming; Zhang, Guojie
2016-10-26
Geckos are among the most species-rich reptile groups and the sister clade to all other lizards and snakes. Geckos possess a suite of distinctive characteristics, including adhesive digits, nocturnal activity, hard, calcareous eggshells, and a lack of eyelids. However, one gecko clade, the Eublepharidae, appears to be the exception to most of these 'rules' and lacks adhesive toe pads, has eyelids, and lays eggs with soft, leathery eggshells. These differences make eublepharids an important component of any investigation into the underlying genomic innovations contributing to the distinctive phenotypes in 'typical' geckos. We report high-depth genome sequencing, assembly, and annotation for a male leopard gecko, Eublepharis macularius (Eublepharidae). Illumina sequence data were generated from seven insert libraries (ranging from 170 to 20 kb), representing a raw sequencing depth of 136X from 303 Gb of data, reduced to 84X and 187 Gb after filtering. The assembled genome of 2.02 Gb was close to the 2.23 Gb estimated by k-mer analysis. Scaffold and contig N50 sizes of 664 and 20 kb, respectively, were comparable to the previously published Gekko japonicus genome. Repetitive elements accounted for 42 % of the genome. Gene annotation yielded 24,755 protein-coding genes, of which 93 % were functionally annotated. CEGMA and BUSCO assessment showed that our assembly captured 91 % (225 of 248) of the core eukaryotic genes, and 76 % of vertebrate universal single-copy orthologs. Assembly of the leopard gecko genome provides a valuable resource for future comparative genomic studies of geckos and other squamate reptiles.
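For reference, scaffold/contig N50 is the assembly statistic quoted above; a minimal sketch of its computation, with hypothetical lengths, follows.

```python
def n50(lengths):
    """Smallest length L such that sequences of length >= L
    cover at least half of the total assembly."""
    total = sum(lengths)
    running = 0
    for L in sorted(lengths, reverse=True):
        running += L
        if running * 2 >= total:
            return L

# Toy scaffold lengths (kb); the real assembly reports a 664 kb scaffold N50.
print(n50([900, 700, 650, 300, 200, 150, 100]))   # 700
```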
Navigating the Interface Between Landscape Genetics and Landscape Genomics.
Storfer, Andrew; Patton, Austin; Fraik, Alexandra K
2018-01-01
As next-generation sequencing data become increasingly available for non-model organisms, a shift has occurred in the focus of studies of the geographic distribution of genetic variation. Whereas landscape genetics studies primarily focus on testing the effects of landscape variables on gene flow and genetic population structure, landscape genomics studies focus on detecting candidate genes under selection that indicate possible local adaptation. Navigating the transition between landscape genomics and landscape genetics can be challenging. The number of molecular markers analyzed has shifted from what used to be a few dozen loci to thousands of loci and even full genomes. Although genome scale data can be separated into sets of neutral loci for analyses of gene flow and population structure and putative loci under selection for inference of local adaptation, there are inherent differences in the questions that are addressed in the two study frameworks. We discuss these differences and their implications for study design, marker choice and downstream analysis methods. Similar to the rapid proliferation of analysis methods in the early development of landscape genetics, new analytical methods for detection of selection in landscape genomics studies are burgeoning. We focus on genome scan methods for detection of selection, and in particular, outlier differentiation methods and genetic-environment association tests because they are the most widely used. Use of genome scan methods requires an understanding of the potential mismatches between the biology of a species and assumptions inherent in analytical methods used, which can lead to high false positive rates of detected loci under selection. Key to choosing appropriate genome scan methods is an understanding of the underlying demographic structure of study populations, and such data can be obtained using neutral loci from the generated genome-wide data or prior knowledge of a species' phylogeographic history. To this end, we summarize recent simulation studies that test the power and accuracy of genome scan methods under a variety of demographic scenarios and sampling designs. We conclude with a discussion of additional considerations for future method development, and a summary of methods that show promise for landscape genomics studies but are not yet widely used.
Comparison of Pressures Applied by Digital Tourniquets in the Emergency Department
Lahham, Shadi; Tu, Khoa; Ni, Mickey; Tran, Viet; Lotfipour, Shahram; Anderson, Craig L.; Fox, J Christian
2011-01-01
Background: Digital tourniquets used in the emergency department have been scrutinized due to complications associated with their use, including neurovascular injury secondary to excessive tourniquet pressure and digital ischemia caused by a forgotten tourniquet. To minimize these risks, a conspicuous tourniquet that applies the least amount of pressure necessary to maintain hemostasis is recommended. Objective: To evaluate the commonly used tourniquet methods (the Penrose drain, rolled glove, Tourni-cot, and T-Ring) to determine which applies the lowest pressure while consistently preventing digital perfusion. Methods: We measured the circumference of selected digits of 200 adult males and 200 adult females to determine the adult finger size range. We then measured the pressure applied to four representative finger sizes using a pressure monitor and assessed the ability of each method to prevent digital blood flow with a pulse oximeter. Results: We selected four representative finger sizes, 45 mm, 65 mm, 70 mm, and 85 mm, to test the different tourniquet methods. All methods consistently prevented digital perfusion. The highest pressure recorded was 727 mmHg for the Penrose drain, 439 mmHg for the clamped rolled glove, 267 mmHg for the unclamped rolled glove, and 246 mmHg for the Tourni-cot, while the T-Ring had the lowest pressure at 151 mmHg and the least variable pressures of all methods. Conclusion: All tested methods provided adequate hemostasis. Only the Tourni-cot and T-Ring provided hemostasis at safe pressures across all digit sizes, with the T-Ring having the lower overall average pressure. PMID:21691536
Tapping the promise of genomics in species with complex, nonmodel genomes.
Hirsch, Candice N; Buell, C Robin
2013-01-01
Genomics is enabling a renaissance in all disciplines of plant biology. However, many plant genomes are complex and remain recalcitrant to current genomic technologies. The complexities of these nonmodel plant genomes are attributable to gene and genome duplication, heterozygosity, ploidy, and/or repetitive sequences. Methods are available to simplify the genome and reduce these barriers, including inbreeding and genome reduction, making these species amenable to current sequencing and assembly methods. Some, but not all, of the complexities in nonmodel genomes can be bypassed by sequencing the transcriptome rather than the genome. Additionally, comparative genomics approaches, which leverage phylogenetic relatedness, can aid in the interpretation of complex genomes. Although there are limitations in accessing complex nonmodel plant genomes using current sequencing technologies, genome manipulation and resourceful analyses can allow access to even the most recalcitrant plant genomes.
Arya, Preeti; Kumar, Gulshan; Acharya, Vishal; Singh, Anil K.
2014-01-01
Nucleotide-binding site leucine-rich repeat (NBS-LRR) disease resistance proteins play an important role in plant defense against pathogen attack. A number of recent studies have been carried out to identify and characterize NBS-LRR gene families in many important plant species. In this study, we identified an NBS-LRR gene family comprising 1,015 NBS-LRRs using highly stringent computational methods. These NBS-LRRs were characterized on the basis of conserved protein motifs, gene duplication events, chromosomal locations, phylogenetic relationships and digital gene expression analysis. Surprisingly, an equal (1:1) distribution of Toll/interleukin-1 receptor (TIR) and coiled-coil (CC) types was detected in apple, whereas unequal distributions have been reported in the majority of other plant genomes studied. Prediction of gene duplication events intriguingly revealed that tandem and segmental duplication may be equally responsible for the expansion of the apple NBS-LRR gene family. Gene expression profiling using the apple expressed sequence tag database and quantitative real-time PCR (qRT-PCR) revealed the expression of these genes in a wide range of tissues and disease conditions, respectively. Taken together, this study will provide a blueprint for future efforts towards improvement of disease resistance in apple. PMID:25232838
Environmental surveillance of viruses by tangential flow filtration and metagenomic reconstruction.
Furtak, Vyacheslav; Roivainen, Merja; Mirochnichenko, Olga; Zagorodnyaya, Tatiana; Laassri, Majid; Zaidi, Sohail Z; Rehman, Lubna; Alam, Muhammad M; Chizhikov, Vladimir; Chumakov, Konstantin
2016-04-14
An approach is proposed for environmental surveillance of poliovirus by concentrating sewage samples with tangential flow filtration (TFF) followed by deep sequencing of viral RNA. Subsequent to testing the method with samples from Finland, samples from Pakistan, a country endemic for poliovirus, were investigated. Genomic sequencing was either performed directly, for unbiased identification of viruses regardless of their ability to grow in cell cultures, or after virus enrichment by cell culture or immunoprecipitation. Bioinformatics enabled separation and determination of individual consensus sequences. Overall, deep sequencing of the entire viral population identified polioviruses, non-polio enteroviruses, and other viruses. In Pakistani sewage samples, adeno-associated virus, unable to replicate autonomously in cell cultures, was the most abundant human virus. The presence of recombinants of wild polioviruses of serotype 1 (WPV1) was also inferred, whereby currently circulating WPV1 of south-Asian (SOAS) lineage comprised two sub-lineages depending on their non-capsid region origin. Complete genome analyses additionally identified point mutants and intertypic recombinants between attenuated Sabin strains in the Pakistani samples, and in one Finnish sample. The approach could allow rapid environmental surveillance of viruses causing human infections. It creates a permanent digital repository of the entire virome potentially useful for retrospective screening of future discovered viruses.
Digital storytelling as a method in health research: a systematic review protocol.
Rieger, Kendra L; West, Christina H; Kenny, Amanda; Chooniedass, Rishma; Demczuk, Lisa; Mitchell, Kim M; Chateau, Joanne; Scott, Shannon D
2018-03-05
Digital storytelling is an arts-based research method with potential to elucidate complex narratives in a compelling manner, increase participant engagement, and enhance the meaning of research findings. This method involves the creation of a 3- to 5-min video that integrates multimedia materials including photos, participant voices, drawings, and music. Given the significant potential of digital storytelling to meaningfully capture and share participants' lived experiences, a systematic review of its use in healthcare research is crucial to develop an in-depth understanding of how researchers have used this method, with an aim to refine and further inform future iterations of its use. We aim to identify and synthesize evidence on the use, impact, and ethical considerations of using digital storytelling in health research. The review questions are as follows: (1) What is known about the purpose, definition, use (processes), and contexts of digital storytelling as part of the research process in health research? (2) What impact does digital storytelling have upon the research process, knowledge development, and healthcare practice? (3) What are the key ethical considerations when using digital storytelling within qualitative, quantitative, and mixed method research studies? Key databases and the grey literature will be searched from 1990 to the present for qualitative, quantitative, and mixed methods studies that utilized digital storytelling as part of the research process. Two independent reviewers will screen and critically appraise relevant articles with established quality appraisal tools. We will extract narrative data from all studies with a standardized data extraction form and conduct a thematic analysis of the data. To facilitate innovative dissemination through social media, we will develop a visual infographic and three digital stories to illustrate the review findings, as well as methodological and ethical implications. In collaboration with national and international experts in digital storytelling, we will synthesize key evidence about digital storytelling that is critical to the development of methodological and ethical expertise about arts-based research methods. We will also develop recommendations for incorporating digital storytelling in a meaningful and ethical manner into the research process. PROSPERO registry number CRD42017068002 .
2011-01-01
Background Malnutrition is a major factor affecting animal health, resistance to disease and survival. In honey bees (Apis mellifera), pollen, which is the main dietary source of proteins, amino acids and lipids, is essential to adult bee physiological development while reducing their susceptibility to parasites and pathogens. However, the molecular mechanisms underlying pollen's nutritive impact on honey bee health remained to be determined. For that purpose, we investigated the influence of pollen nutrients on the transcriptome of worker bees parasitized by the mite Varroa destructor, known for suppressing immunity and decreasing lifespan. The 4 experimental groups (control bees without a pollen diet, control bees fed with pollen, varroa-parasitized bees without a pollen diet and varroa-parasitized bees fed with pollen) were analyzed by performing a digital gene expression (DGE) analysis on bee abdomens. Results Around 36,000 unique tags were generated per DGE-tag library, which matched about 8,000 genes (60% of the genes in the honey bee genome). Comparing the transcriptome of bees fed with pollen and sugar and bees restricted to a sugar diet, we found that pollen activates nutrient-sensing and metabolic pathways. In addition, those nutrients had a positive influence on genes affecting longevity and the production of some antimicrobial peptides. However, varroa parasitism caused the development of viral populations and a decrease in metabolism, specifically by inhibiting protein metabolism essential to bee health. This harmful effect was not reversed by pollen intake. Conclusions The DGE-tag profiling methods used in this study proved to be a powerful means for analyzing transcriptome variation related to nutrient intake in honey bees. Ultimately, with such an approach, applying genomics tools to nutrition research, nutrigenomics promises to offer a better understanding of how nutrition influences body homeostasis and may help reduce the susceptibility of bees to (less virulent) pathogens. PMID:21985689
Low, Joyce Siew Yong; Chin, Yoon Ming; Mushiroda, Taisei; Kubo, Michiaki; Govindasamy, Gopala Krishnan; Pua, Kin Choo; Yap, Yoke Yeow; Yap, Lee Fah; Subramaniam, Selva Kumar; Ong, Cheng Ai; Tan, Tee Yong; Khoo, Alan Soo Beng; Ng, Ching Ching
2016-01-01
Background Nasopharyngeal carcinoma (NPC) is a neoplasm of the epithelial lining of the nasopharynx. Despite various reports linking genomic variants to NPC predisposition, few reports have examined copy number variations (CNVs). CNV is an inherent structural variation that has been found to be involved in cancer predisposition. Methods A discovery cohort of Malaysian Chinese descent (NPC patients, n = 140; healthy controls, n = 256) was genotyped using the Illumina® HumanOmniExpress BeadChip. The PennCNV and cnvPartition calling algorithms were applied for CNV calling. Taqman CNV assays and digital PCR were used to validate CNV calls and replicate candidate copy number variant region (CNVR) associations in a follow-up Malaysian Chinese cohort (NPC cases, n = 465; healthy controls, n = 677) and a Malay cohort (NPC cases, n = 114; healthy controls, n = 124). Results Six putative CNVRs overlapping the GRM5, MICA/HCP5/HCG26, LILRB3/LILRA6, DPY19L2, RNase3/RNase2 and GOLPH3 genes were jointly identified by PennCNV and cnvPartition. CNVs overlapping GRM5 and MICA/HCP5/HCG26 were subjected to further validation by Taqman CNV assays and digital PCR. Combined analysis in the Malaysian Chinese cohort revealed a strong association at a CNVR on chromosome 11q14.3 (Pcombined = 1.54 × 10^-5; odds ratio (OR) = 7.27; 95% CI = 2.96–17.88) overlapping GRM5 and a suggestive association at a CNVR on chromosome 6p21.3 (Pcombined = 1.29 × 10^-3; OR = 4.21; 95% CI = 1.75–10.11) overlapping the MICA/HCP5/HCG26 genes. Conclusion Our results demonstrated an association of CNVs with NPC susceptibility, implicating a possible role of CNVs in NPC development. PMID:26730743
Comparing Mycobacterium tuberculosis genomes using genome topology networks.
Jiang, Jianping; Gu, Jianlei; Zhang, Liang; Zhang, Chenyi; Deng, Xiao; Dou, Tonghai; Zhao, Guoping; Zhou, Yan
2015-02-14
Over the last decade, emerging research methods, such as comparative genomic analysis and phylogenetic study, have yielded new insights into genotypes and phenotypes of closely related bacterial strains. Several findings have revealed that genomic structural variations (SVs), including gene gain/loss, gene duplication and genome rearrangement, can lead to different phenotypes among strains, and an investigation of genes affected by SVs may extend our knowledge of the relationships between SVs and phenotypes in microbes, especially in pathogenic bacteria. In this work, we introduce a 'Genome Topology Network' (GTN) method based on gene homology and gene locations to analyze genomic SVs and perform phylogenetic analysis. Furthermore, the concept of 'unfixed ortholog' has been proposed, whose members are affected by SVs in genome topology among close species. To improve the precision of 'unfixed ortholog' recognition, a strategy to detect annotation differences and complete gene annotation was applied. To assess the GTN method, a set of thirteen complete M. tuberculosis genomes was analyzed as a case study. GTNs with two different gene homology-assigning methods were built, the Clusters of Orthologous Groups (COG) method and the orthoMCL clustering method, and two phylogenetic trees were constructed accordingly, which may provide additional insights into whole genome-based phylogenetic analysis. We obtained 24 unfixable COG groups, of which most members were related to immunogenicity and drug resistance, such as PPE-repeat proteins (COG5651) and transcriptional regulator TetR gene family members (COG1309). The GTN method has been implemented in PERL and released on our website. The tool can be downloaded from http://homepage.fudan.edu.cn/zhouyan/gtn/ , and allows re-annotating the 'lost' genes among closely related genomes, analyzing genes affected by SVs, and performing phylogenetic analysis. With this tool, many immunogenic-related and drug resistance-related genes were found to be affected by SVs in M. tuberculosis genomes. We believe that the GTN method will be suitable for the exploration of genomic SVs in connection with biological features of bacterial strains, and that GTN-based phylogenetic analysis will provide additional insights into whole genome-based phylogenetic analysis.
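A toy Python sketch of the network idea, under the assumption that each gene has already been assigned to a homology group: nodes are groups, edges join groups adjacent on a genome, and groups whose adjacencies differ between strains are candidate 'unfixed' orthologs. This is an illustration only, not the authors' Perl implementation.

```python
def topology_edges(genome):
    """Edges from consecutive gene pairs; genome is an ordered list of group IDs."""
    return {frozenset(pair) for pair in zip(genome, genome[1:]) if pair[0] != pair[1]}

# Hypothetical gene orders (group IDs) for two closely related strains.
strain_a = ["COG1", "COG2", "COG3", "COG4", "COG5"]
strain_b = ["COG1", "COG2", "COG4", "COG3", "COG5"]   # a small rearrangement

edges_a, edges_b = topology_edges(strain_a), topology_edges(strain_b)
# Groups whose neighbourhood differs between the two networks ("unfixed" candidates).
changed = {g for e in edges_a ^ edges_b for g in e}
print(sorted(changed))   # ['COG2', 'COG3', 'COG4', 'COG5']
```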
NASA Astrophysics Data System (ADS)
Nakhostin, M.
2015-10-01
In this paper, we have compared the performances of the digital zero-crossing and charge-comparison methods for n/γ discrimination with liquid scintillation detectors at low light outputs. The measurements were performed with a 2″×2″ cylindrical liquid scintillation detector of type BC501A whose outputs were sampled by means of a fast waveform digitizer with 10-bit resolution, 4 GS/s sampling rate and one volt input range. Different light output ranges were measured by operating the photomultiplier tube at different voltages and a new recursive algorithm was developed to implement the digital zero-crossing method. The results of our study demonstrate the superior performance of the digital zero-crossing method at low light outputs when a large dynamic range is measured. However, when the input range of the digitizer is used to measure a narrow range of light outputs, the charge-comparison method slightly outperforms the zero-crossing method. The results are discussed in regard to the effects of the quantization noise and the noise filtration performance of the zero-crossing filter.
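A hedged sketch of the charge-comparison estimator applied to a synthetic two-exponential pulse; the decay constants, gate positions and slow-component fractions are illustrative assumptions rather than measured BC501A values.

```python
import numpy as np

def charge_comparison(pulse, t_start, t_tail, dt):
    """Tail-to-total charge ratio; larger tail fractions indicate neutron-like
    pulses in organic scintillators."""
    i0, it = int(t_start / dt), int(t_tail / dt)
    total = pulse[i0:].sum() * dt
    tail = pulse[it:].sum() * dt
    return tail / total

dt = 0.25e-9                                   # 4 GS/s sampling interval
t = np.arange(0, 400e-9, dt)

def pulse(slow_fraction):
    """Synthetic pulse with fast and slow decay components (assumed constants)."""
    return (1 - slow_fraction) * np.exp(-t / 20e-9) + slow_fraction * np.exp(-t / 150e-9)

print(charge_comparison(pulse(0.05), 0, 40e-9, dt))   # gamma-like: small ratio
print(charge_comparison(pulse(0.25), 0, 40e-9, dt))   # neutron-like: larger ratio
```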
Single Color Multiplexed ddPCR Copy Number Measurements and Single Nucleotide Variant Genotyping.
Wood-Bouwens, Christina M; Ji, Hanlee P
2018-01-01
Droplet digital PCR (ddPCR) allows for accurate quantification of genetic events such as copy number variation and single nucleotide variants. Probe-based assays represent the current "gold-standard" for detection and quantification of these genetic events. Here, we introduce a cost-effective single color ddPCR assay that allows for single genome resolution quantification of copy number and single nucleotide variation.
Correlation and agreement of a digital and conventional method to measure arch parameters.
Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika
2018-01-01
The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients were digitized using a blue light three-dimensional (3D) scanner. Digital and manual measurements of defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on digital models and with a digital calliper (Tuten, Germany) on plaster study models. Both measurements were repeated twice to validate the intraexaminer reliability based on intraclass correlation coefficients (ICCs) using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurement between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring the arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of the arch width (90-96%), arch length and space analysis (95-99%) were also distinguished using the Bland-Altman method. These results demonstrate that 3D blue light scanning and measurement software are able to precisely produce 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid to be used in various clinical applications.
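A minimal sketch of the Bland-Altman computation used for the agreement analysis above, with hypothetical arch-width values; it reports the mean bias and the 95 % limits of agreement.

```python
import numpy as np

def bland_altman(digital, manual):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(digital) - np.asarray(manual)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, (bias - spread, bias + spread)

# Hypothetical arch-width measurements (mm) for a handful of models.
digital = [35.2, 36.8, 34.9, 37.5, 36.1]
manual  = [35.0, 36.9, 35.1, 37.3, 36.0]
bias, (lo, hi) = bland_altman(digital, manual)
print(f"bias {bias:+.2f} mm, limits of agreement [{lo:+.2f}, {hi:+.2f}] mm")
```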
Comparison of the spatial landmark scatter of various 3D digitalization methods.
Boldt, Florian; Weinzierl, Christian; Hertrich, Klaus; Hirschfelder, Ursula
2009-05-01
The aim of this study was to compare four different three-dimensional digitalization methods on the basis of the complex anatomical surface of a cleft lip and palate plaster cast, and to ascertain their accuracy when positioning 3D landmarks. A cleft lip and palate plaster cast was digitalized with the SCAN3D photo-optical scanner, the OPTIX 400S laser-optical scanner, the Somatom Sensation 64 computed tomography system and the MicroScribe MLX 3-axis articulated-arm digitizer. First, four examiners individually appraised, by visual inspection, the surface detail reproduction of the three non-tactile digitalization methods in comparison with the reference plaster cast. The four examiners then localized the landmarks five times at intervals of 2 weeks. This involved simply copying, or spatially tracing, the landmarks from a reference plaster cast to each model digitally reproduced by each digitalization method. Statistical analysis of the landmark distribution specific to each method was performed based on the 3D coordinates of the positioned landmarks. Visual evaluation of surface detail conformity assigned the photo-optical digitalization method an average score of 1.5, the highest subjectively determined conformity (surpassing the computed tomography and laser-optical methods). The tactile scanning method revealed the lowest degree of 3D landmark scatter, 0.12 mm, and at 1.01 mm the lowest maximum 3D landmark scatter; this was followed by the computed tomography, photo-optical and laser-optical methods (in that order). This study demonstrates that the landmarks' precision and reproducibility are determined by the complexity of the reference-model surface as well as the digital surface quality and the individual ability of each evaluator to capture 3D spatial relationships. The differences in 3D-landmark scatter and in lowest maximum 3D-landmark scatter between the best and worst methods were minor. The measurement results in this study reveal that it is not the method's precision but rather the complexity of the object analysis being planned that should determine which method is ultimately employed.
Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
A calibration target was designed that can be scanned by a 3D laser scanner while simultaneously photographed by a digital camera, yielding a point cloud and photographs of the same target. A method for jointly calibrating the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera and achieves the joint calibration of the 3D laser scanner and digital camera. The results show that this method is reliable.
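A minimal numpy sketch of the basic DLT step, assuming at least six non-coplanar control points with known scanner coordinates and pixel coordinates; the camera distortion model that the paper adds to the classical DLT, and the iterative refinement, are omitted here.

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Solve the 11 DLT parameters from 3D control points (xyz, scanner frame)
    and their image coordinates (uv, pixels) by linear least squares."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.extend([u, v])
    L, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return L                       # DLT parameters L1..L11

def project(L, xyz):
    """Re-project 3D points with the estimated DLT parameters (for checking residuals)."""
    X, Y, Z = np.asarray(xyz).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.stack([u, v], axis=1)
```

At least six well-distributed, non-coplanar target points are needed for a stable solution; re-projection residuals from project() give a quick check of the joint calibration quality.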
Single-shot speckle reduction in numerical reconstruction of digitally recorded holograms: comment.
Maycock, Jonathan; Hennelly, Bryan; McDonald, John
2015-09-01
We comment on a recent Letter by Hincapie et al. [Opt. Lett.40, 1623 (2015)], in which the authors proposed a method to reduce the speckle noise in digital holograms. This method was previously published by us in Maycock ["Improving reconstructions of digital holograms," Ph.D. thesis (National University of Ireland, 2012)] and Maycock and Hennelly [Improving Reconstructions of Digital Holograms: Speckle Reduction and Occlusions in Digital Holography (Lambert Academic, 2014)]. We also wish to highlight an important limitation of the method resulting from the superposition of different perspectives of the object/scene, which was not addressed in their Letter.
Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
An Engineering View on Megatrends in Radiology: Digitization to Quantitative Tools of Medicine
Choi, Jaesoon; Yi, Jaeyoun; Choi, Seungwook; Park, Seyoun; Chang, Yongjun; Seo, Joon Beom
2013-01-01
Within six months of the discovery of the X-ray in 1895, the technology was used to scan the interior of the human body, paving the way for many innovations in the field of medicine, including an ultrasound device in 1950, a CT scanner in 1972, and MRI in 1980. More recent decades have witnessed developments such as digital imaging using a picture archiving and communication system, computer-aided detection/diagnosis, organ-specific workstations, and molecular, functional, and quantitative imaging. Among the latest technical breakthroughs in the field of radiology are imaging genomics and robotic interventions for biopsy and theragnosis. This review provides an engineering perspective on these developments and several other megatrends in radiology. PMID:23482650
Why Map Issues? On Controversy Analysis as a Digital Method
2015-01-01
This article takes stock of recent efforts to implement controversy analysis as a digital method in the study of science, technology, and society (STS) and beyond and outlines a distinctive approach to address the problem of digital bias. Digital media technologies exert significant influence on the enactment of controversy in online settings, and this risks undermining the substantive focus of controversy analysis conducted by digital means. To address this problem, I propose a shift in thematic focus from controversy analysis to issue mapping. The article begins by distinguishing between three broad frameworks that currently guide the development of controversy analysis as a digital method, namely, demarcationist, discursive, and empiricist. Each has been adopted in STS, but only the last one offers a digital “move beyond impartiality.” I demonstrate this approach by analyzing issues of Internet governance with the aid of the social media platform Twitter. PMID:26336325
A Cyber Security Self-Assessment Method for Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glantz, Clifford S.; Coles, Garill A.; Bass, Robert B.
2004-11-01
A cyber security self-assessment method (the Method) has been developed by Pacific Northwest National Laboratory. The development of the Method was sponsored and directed by the U.S. Nuclear Regulatory Commission. Members of the Nuclear Energy Institute Cyber Security Task Force also played a substantial role in developing the Method. The Method's structured approach guides nuclear power plants in scrutinizing their digital systems, assessing the potential consequences to the plant of a cyber exploitation, identifying vulnerabilities, estimating cyber security risks, and adopting cost-effective protective measures. The focus of the Method is on critical digital assets. A critical digital asset is a digital device or system that plays a role in the operation, maintenance, or proper functioning of a critical system (i.e., a plant system that can impact safety, security, or emergency preparedness). A critical digital asset may have a direct or indirect connection to a critical system. Direct connections include both wired and wireless communication pathways. Indirect connections include sneaker-net pathways by which software or data are manually transferred from one digital device to another. An indirect connection also may involve the use of instructions or data stored on a critical digital asset to make adjustments to a critical system. The cyber security self-assessment begins with the formation of an assessment team, and is followed by a six-stage process.
Genome complexity, robustness and genetic interactions in digital organisms
NASA Astrophysics Data System (ADS)
Lenski, Richard E.; Ofria, Charles; Collier, Travis C.; Adami, Christoph
1999-08-01
Digital organisms are computer programs that self-replicate, mutate and adapt by natural selection. They offer an opportunity to test generalizations about living systems that may extend beyond the organic life that biologists usually study. Here we have generated two classes of digital organism: simple programs selected solely for rapid replication, and complex programs selected to perform mathematical operations that accelerate replication through a set of defined `metabolic' rewards. To examine the differences in their genetic architecture, we introduced millions of single and multiple mutations into each organism and measured the effects on the organism's fitness. The complex organisms are more robust than the simple ones with respect to the average effects of single mutations. Interactions among mutations are common and usually yield higher fitness than predicted from the component mutations assuming multiplicative effects; such interactions are especially important in the complex organisms. Frequent interactions among mutations have also been seen in bacteria, fungi and fruitflies. Our findings support the view that interactions are a general feature of genetic systems.
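The multiplicative null model used in this mutation analysis is easy to state concretely. In the toy check below (a sketch with made-up fitness values), epistasis is measured as the deviation of the double mutant's relative fitness from the product of the two single-mutant fitnesses.

```python
def epistasis(w_ab, w_a, w_b):
    """Deviation of a double mutant's fitness from the multiplicative
    expectation w_a * w_b; a positive value means the combined mutations
    are less harmful together than predicted from their separate effects."""
    return w_ab - w_a * w_b

# Illustrative values: single mutants at 80% and 90% of ancestral fitness,
# double mutant observed at 80% rather than the expected 72%.
print(epistasis(0.80, 0.80, 0.90))   # 0.08 > 0
```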
Digitization and its discontents: future shock in predictive oncology.
Epstein, Richard J
2010-02-01
Clinical cancer care is being transformed by a high-technology informatics revolution fought out between the forces of personalized (biomarker-guided) and depersonalized (bureaucracy-controlled) medicine. Factors triggering this conflict include the online proliferation of treatment algorithms, rising prices of biological drug therapies, increasing sophistication of genomic-based predictive tools, and the growing entrepreneurialism of offshore treatment facilities. The resulting Napster-like forces unleashed within the oncology marketplace will deliver incremental improvements in cost-efficacy to global healthcare consumers. There will also be a price to pay, however, as the rising wave of digitization encourages third-party payers to make more use of biomarkers for tightening reimbursement criteria. Hence, as in other digitally transformed industries, a new paradigm of professional service delivery-less centered on doctor-patient relationships than in the past, and more dependent on pricing and marketing for standardized biomarker-defined indications-seems set to emerge as the unpredicted deliverable from this brave new world of predictive oncology. Copyright 2010 Elsevier Inc. All rights reserved.
Online Farsi digit recognition using their upper half structure
NASA Astrophysics Data System (ADS)
Ghods, Vahid; Sohrabi, Mohammad Karim
2015-03-01
In this paper, we investigated the efficiency of using the upper half of the Farsi numerical digit structure. In other words, only half of the data (the upper half of the digit shapes) was exploited for the recognition of Farsi numerical digits. This approach can be used for both offline and online recognition, and using half of the data improves processing speed and data transfer and, in this application, accuracy. A hidden Markov model (HMM) was used to classify online Farsi digits. Evaluation was performed on the TMU dataset, which contains more than 1200 samples of online handwritten Farsi digits. The proposed method yielded a higher recognition rate.
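A rough sketch of the classification step: one HMM per digit class is trained on upper-half pen trajectories, and a new sample is assigned to the class with the highest log-likelihood. The hmmlearn dependency, the Gaussian emission model, the five hidden states, and the (x, y) feature choice are assumptions made here for illustration; the paper's exact feature extraction is not described in the abstract.

```python
import numpy as np
from hmmlearn import hmm   # assumed dependency, not named in the paper

def upper_half(stroke):
    """Keep only the points in the upper half of the digit's bounding box.
    `stroke` is an (n, 2) array of (x, y) pen coordinates."""
    y_mid = (stroke[:, 1].min() + stroke[:, 1].max()) / 2.0
    return stroke[stroke[:, 1] >= y_mid]

def train_models(samples_by_digit, n_states=5):
    """Fit one Gaussian HMM per digit class on upper-half pen trajectories."""
    models = {}
    for digit, strokes in samples_by_digit.items():
        parts = [upper_half(s) for s in strokes]
        X = np.vstack(parts)
        lengths = [len(p) for p in parts]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        m.fit(X, lengths)
        models[digit] = m
    return models

def classify(models, stroke):
    """Return the digit whose HMM gives the highest log-likelihood."""
    obs = upper_half(stroke)
    return max(models, key=lambda d: models[d].score(obs))
```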
Dai, F F; Liu, Y; Xu, T M; Chen, G
2018-04-18
To explore a cone beam computed tomography (CBCT)-independent method for mandibular digital dental cast superimposition to evaluate three-dimensional (3D) mandibular tooth movement after orthodontic treatment in adults, and to evaluate the accuracy of this method. Fifteen post-extraction orthodontic treatment adults from the Department of Orthodontics, Peking University School and Hospital of Stomatology were included. All the patients had four first premolars extracted, and were treated with a straight-wire appliance. The pre- and post-treatment plaster dental casts and craniofacial CBCT scans were obtained. The plaster dental casts were converted to digital dental casts by 3D laser scanning, and lateral cephalograms were created from the craniofacial CBCT scans by orthogonal projection. The lateral cephalogram-based mandibular digital dental cast superimposition was achieved by sequential maxillary dental cast superimposition registered on the palatal stable region, occlusal transfer, and adjustment of mandibular rotation and translation obtained from lateral cephalogram superimposition. The accuracy of the lateral cephalogram-based mandibular digital dental cast superimposition method was evaluated with the CBCT-based mandibular digital dental cast superimposition method as the standard reference. After mandibular digital dental cast superimposition using both methods, a 3D coordinate system was established, and 3D displacements of the lower bilateral first molars, canines and central incisors were measured. Differences between the two superimposition methods in tooth displacement measurements were assessed using the paired t-test with the level of statistical significance set at P<0.05. No significant differences were found between the lateral cephalogram-based and CBCT-based mandibular digital dental cast superimposition methods in 3D displacements of the lower first molars, and sagittal and vertical displacements of the canines and central incisors; transverse displacements of the canines and central incisors differed by (0.3±0.5) mm with statistical significance. The lateral cephalogram-based mandibular digital dental cast superimposition method has similar accuracy to the CBCT-based mandibular digital dental cast superimposition method in 3D evaluation of mandibular orthodontic tooth displacement, except for minor differences in the transverse displacements of anterior teeth. This method is applicable to adult patients with conventional orthodontic treatment records, and is especially valuable for previously collected orthodontic records that lack CBCT scans.
MetaPhinder—Identifying Bacteriophage Sequences in Metagenomic Data Sets
Jurtz, Vanessa Isabell; Villarroel, Julia; Lund, Ole; Voldby Larsen, Mette; Nielsen, Morten
2016-01-01
Bacteriophages are the most abundant biological entity on the planet, but at the same time do not account for much of the genetic material isolated from most environments due to their small genome sizes. They also show great genetic diversity and mosaic genomes making it challenging to analyze and understand them. Here we present MetaPhinder, a method to identify assembled genomic fragments (i.e. contigs) of phage origin in metagenomic data sets. The method is based on a comparison to a database of whole genome bacteriophage sequences, integrating hits to multiple genomes to accommodate for the mosaic genome structure of many bacteriophages. The method is demonstrated to outperform both BLAST methods based on single hits and methods based on k-mer comparisons. MetaPhinder is available as a web service at the Center for Genomic Epidemiology https://cge.cbs.dtu.dk/services/MetaPhinder/, while the source code can be downloaded from https://bitbucket.org/genomicepidemiology/metaphinder or https://github.com/vanessajurtz/MetaPhinder. PMID:27684958
Digital cytology: current state of the art and prospects for the future.
Wilbur, David C
2011-01-01
The growth of digital methods in pathology is accelerating. Digital images can be used for a variety of applications in cytology, including rapid interpretations, primary diagnosis and second opinions, continuing education and proficiency testing. All of these functions can be performed using small static digital images, real-time dynamic digital microscopy, or whole-slide images. This review will discuss the general principles of digital pathology, its methods and applications to cytologic specimens. As cytologic specimens have unique features compared to histopathology specimens, the key differences will be discussed. Technical and administrative issues in digital pathology applications and the outlook for the future of the field will be presented. Copyright © 2011 S. Karger AG, Basel.
Digital barcodes of suspension array using laser induced breakdown spectroscopy
He, Qinghua; Liu, Yixi; He, Yonghong; Zhu, Liang; Zhang, Yilong; Shen, Zhiyuan
2016-01-01
We present a coding method for suspension arrays based on laser-induced breakdown spectroscopy (LIBS), which promotes the barcodes from analog to digital. As the foundation of the digital optical barcodes, nanocrystal-encoded microspheres are prepared with a self-assembly encapsulation method. We confirm that digital multiplexing of the LIBS-based coding method is feasible, since each microsphere can be coded with directly read-out wavelength data, and the method avoids fluorescence signal crosstalk between barcodes and analyte tags, which leads to overall advantages in accuracy and stability over current fluorescent multicolor coding methods. This demonstration increases the capability of multiplexed detection and accurate screening, expanding the applications of suspension arrays in life science. PMID:27808270
Coordinates and intervals in graph-based reference genomes.
Rand, Knut D; Grytten, Ivar; Nederbragt, Alexander J; Storvik, Geir O; Glad, Ingrid K; Sandve, Geir K
2017-05-18
It has been proposed that future reference genomes should be graph structures in order to better represent the sequence diversity present in a species. However, there is currently no standard method to represent genomic intervals, such as the positions of genes or transcription factor binding sites, on graph-based reference genomes. We formalize offset-based coordinate systems on graph-based reference genomes and introduce methods for representing intervals on these reference structures. We show the advantage of our methods by representing genes on a graph-based representation of the newest assembly of the human genome (GRCh38) and its alternative loci for regions that are highly variable. More complex reference genomes, containing alternative loci, require methods to represent genomic data on these structures. Our proposed notation for genomic intervals makes it possible to fully utilize the alternative loci of the GRCh38 assembly and potential future graph-based reference genomes. We have made a Python package for representing such intervals on offset-based coordinate systems, available at https://github.com/uio-cels/offsetbasedgraph . An interactive web-tool using this Python package to visualize genes on a graph created from GRCh38 is available at https://github.com/uio-cels/genomicgraphcoords .
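A toy illustration of the offset-based notation described here is shown below: a position is a (block, offset) pair and an interval is a pair of positions plus the path of blocks it traverses. This is a self-contained sketch of the idea, not the API of the offsetbasedgraph package; the block names and lengths are invented.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class Position:
    block: str      # id of a node (sequence block) in the graph
    offset: int     # 0-based offset within that block

@dataclass
class Interval:
    start: Position
    end: Position
    path: List[str]             # ordered blocks the interval traverses

def interval_length(iv: Interval, block_len: Dict[str, int]) -> int:
    """Length of a graph interval = sum of traversed block lengths
    minus the parts clipped off at the two ends."""
    total = sum(block_len[b] for b in iv.path)
    return total - iv.start.offset - (block_len[iv.end.block] - iv.end.offset)

# Example: a gene spanning three blocks of a toy graph (hypothetical names)
block_len = {"chr1_part1": 1000, "alt_locus": 400, "chr1_part2": 2000}
gene = Interval(Position("chr1_part1", 950), Position("chr1_part2", 120),
                ["chr1_part1", "alt_locus", "chr1_part2"])
print(interval_length(gene, block_len))   # 50 + 400 + 120 = 570
```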
Flexible, reconfigurable, power efficient transmitter and method
NASA Technical Reports Server (NTRS)
Bishop, James W. (Inventor); Zaki, Nazrul H. Mohd (Inventor); Newman, David Childress (Inventor); Bundick, Steven N. (Inventor)
2011-01-01
A flexible, reconfigurable, power efficient transmitter device and method is provided. In one embodiment, the method includes receiving outbound data and determining a mode of operation. When operating in a first mode, the method may include modulation mapping the outbound data according to a modulation scheme to provide first modulation mapped digital data, converting the first modulation mapped digital data to an analog signal that comprises an intermediate frequency (IF) analog signal, upconverting the IF analog signal to produce a first modulated radio frequency (RF) signal based on a local oscillator signal, amplifying the first RF modulated signal to produce a first RF output signal, and outputting the first RF output signal via an isolator. In a second mode of operation, the method may include modulation mapping the outbound data according to a modulation scheme to provide second modulation mapped digital data, converting the second modulation mapped digital data to a first digital baseband signal, conditioning the first digital baseband signal to provide a first analog baseband signal, modulating one or more carriers with the first analog baseband signal to produce a second modulated RF signal based on a local oscillator signal, amplifying the second RF modulated signal to produce a second RF output signal, and outputting the second RF output signal via the isolator. The digital baseband signal may comprise an in-phase (I) digital baseband signal and a quadrature (Q) baseband signal.
Zhu, Pengyu; Fu, Wei; Wang, Chenguang; Du, Zhixin; Huang, Kunlun; Zhu, Shuifang; Xu, Wentao
2016-04-15
The possibility of the absolute quantitation of GMO events by digital PCR was recently reported. However, most absolute quantitation methods based on digital PCR require pretreatment steps. Meanwhile, singleplex detection cannot meet the demands of the absolute quantitation of GMO events, which is based on the ratio of foreign fragments to reference genes. Thus, to promote the absolute quantitative detection of different GMO events by digital PCR, we developed a quantitative detection method based on duplex digital PCR without pretreatment. Moreover, we tested 7 GMO events in our study to evaluate the suitability of our method. The optimized combination of foreign and reference primers, the limit of quantitation (LOQ), the limit of detection (LOD), and specificity were validated. The results showed that the LOQ of our method for different GMO events was 0.5%, while the LOD was 0.1%. Additionally, we found that duplex digital PCR achieved detection results with a lower RSD than singleplex digital PCR. In summary, the duplex digital PCR detection system is a simple and stable way to achieve the absolute quantitation of different GMO events. Moreover, the LOQ and LOD indicate that this method is suitable for the routine detection and quantitation of GMO events. Copyright © 2016 Elsevier B.V. All rights reserved.
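The absolute quantitation underlying this kind of duplex assay rests on the standard Poisson correction for partitioned PCR: the mean copies per droplet is lambda = -ln(fraction of negative droplets), and the GMO content is the ratio of lambda for the event-specific target to lambda for the endogenous reference gene. The droplet counts below are invented for illustration and are not from the paper.

```python
import math

def copies_per_partition(n_negative, n_total):
    """Poisson-corrected mean target copies per droplet: lambda = -ln(neg/total)."""
    return -math.log(n_negative / n_total)

def gmo_content(neg_event, neg_reference, n_total):
    """Absolute GMO ratio = event copies / reference-gene copies.
    The droplet volume cancels out of the ratio, so no standard curve is needed."""
    return copies_per_partition(neg_event, n_total) / copies_per_partition(neg_reference, n_total)

# Illustrative counts: 19,500/20,000 droplets negative for the event-specific target,
# 12,000/20,000 negative for the endogenous reference gene.
print(f"GMO content ≈ {gmo_content(19500, 12000, 20000):.2%}")   # ≈ 4.96%
```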
Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M
2013-02-01
The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
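For readers who want a concrete reference point, genomic best linear unbiased prediction (one of the benchmarks recommended above) reduces to solving a single mixed-model system built from a genomic relationship matrix. The sketch below uses the VanRaden relationship matrix and an assumed heritability to set the variance ratio; it is a didactic simplification on simulated genotypes, not the benchmark code used in the article.

```python
import numpy as np

def vanraden_G(M):
    """Genomic relationship matrix (VanRaden-style) from a 0/1/2 genotype matrix."""
    p = M.mean(axis=0) / 2.0                    # allele frequencies
    Z = M - 2.0 * p                             # centre each marker by 2p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(M_train, y, M_test, h2=0.5):
    """Predict genomic values of test individuals with a GBLUP-type model.
    lambda = (1 - h2) / h2 is the usual residual-to-genetic variance ratio."""
    M = np.vstack([M_train, M_test])
    G = vanraden_G(M)
    n = len(y)
    lam = (1.0 - h2) / h2
    G_tt, G_vt = G[:n, :n], G[n:, :n]
    a = np.linalg.solve(G_tt + lam * np.eye(n), y - y.mean())
    return y.mean() + G_vt @ a                  # propagate through relationships

# Toy data: 200 training and 20 validation individuals, 1000 SNPs (all simulated)
rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(220, 1000)).astype(float)
beta = rng.normal(0, 0.05, 1000)
y_all = M @ beta + rng.normal(0, 1.0, 220)
pred = gblup(M[:200], y_all[:200], M[200:])
print(np.corrcoef(pred, y_all[200:])[0, 1])     # prediction accuracy
```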
Diagnostic digital cytopathology: Are we ready yet?
House, Jarret C.; Henderson-Jackson, Evita B.; Johnson, Joseph O.; Lloyd, Mark C.; Dhillon, Jasreman; Ahmad, Nazeel; Hakam, Ardeshir; Khalbuss, Walid E.; Leon, Marino E.; Chhieng, David; Zhang, Xiaohui; Centeno, Barbara A.; Bui, Marilyn M.
2013-01-01
Background: The cytology literature relating to diagnostic accuracy using whole slide imaging is scarce. We studied the diagnostic concordance between glass and digital slides among diagnosticians with different profiles to assess the readiness of adopting digital cytology in routine practice. Materials and Methods: This cohort consisted of 22 de-identified previously screened and diagnosed cases, including non-gynecological and gynecological slides using standard preparations. Glass slides were digitalized using Aperio ScanScope XT (×20 and ×40). Cytopathologists with (3) and without (3) digital experience, cytotechnologists (4) and senior pathology residents (2) diagnosed the digital slides independently first and recorded the results. Glass slides were read and recorded separately 1-3 days later. Accuracy of diagnosis, time to diagnosis and diagnostician's profile were analyzed. Results: Among 22 case pairs and four study groups, correct diagnosis (93% vs. 86%) was established using glass versus digital slides. Both methods more (>95%) accurately diagnosed positive cases than negatives. Cytopathologists with no digital experience were the most accurate in digital diagnosis, even the senior members. Cytotechnologists had the fastest diagnosis time (3 min/digital vs. 1.7 min/glass), but not the best accuracy. Digital time was 1.5 min longer than glass-slide time/per case for cytopathologists and cytotechnologists. Senior pathology residents were slower and less accurate with both methods. Cytopathologists with digital experience ranked 2nd fastest in time, yet last in accuracy for digital slides. Conclusions: There was good overall diagnostic agreement between the digital whole-slide images and glass slides. Although glass slide diagnosis was more accurate and faster, the results of technologists and pathologists with no digital cytology experience suggest that solid diagnostic ability is a strong indicator for readiness of digital adoption. PMID:24392242
Eisenberg, David; Marcotte, Edward M.; Pellegrini, Matteo; Thompson, Michael J.; Yeates, Todd O.
2002-10-15
A computational method, system, and computer program are provided for inferring functional links from genome sequences. One method is based on the observation that some pairs of proteins A' and B' have homologs in another organism fused into a single protein chain AB. A trans-genome comparison of sequences can reveal these AB sequences, which are Rosetta Stone sequences because they decipher an interaction between A' and B'. Another method compares the genomic sequence of two or more organisms to create a phylogenetic profile for each protein indicating its presence or absence across all the genomes. The profile provides information regarding functional links between different families of proteins. In yet another method, a combination of the above two methods is used to predict functional links.
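A toy version of the phylogenetic profile comparison is shown below: each protein family gets a presence/absence vector across genomes, and families with (near-)identical profiles are proposed as functionally linked. The genome set, the profile values, and the Hamming-distance threshold are made up for illustration and are not taken from the patent.

```python
import numpy as np

# Phylogenetic profile: for each protein family, a presence/absence vector
# across a set of genomes (1 = a homolog is found, 0 = not found).
genomes = ["E.coli", "B.subtilis", "M.tuberculosis", "S.cerevisiae", "H.sapiens"]
profiles = {
    "trpA": np.array([1, 1, 1, 1, 0]),
    "trpB": np.array([1, 1, 1, 1, 0]),   # identical profile -> likely linked to trpA
    "hisA": np.array([1, 1, 1, 0, 0]),
    "actin": np.array([0, 0, 0, 1, 1]),
}

def profile_distance(a, b):
    """Hamming distance between two profiles; a small distance suggests a functional link."""
    return int(np.sum(a != b))

def predicted_partners(query, profiles, max_dist=1):
    return [name for name, prof in profiles.items()
            if name != query and profile_distance(profiles[query], prof) <= max_dist]

print(predicted_partners("trpA", profiles))   # ['trpB', 'hisA']
```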
Approximation of reliability of direct genomic breeding values
USDA-ARS?s Scientific Manuscript database
Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...
2014-01-01
Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
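The k-mer estimate quoted above follows the usual back-of-the-envelope relation: genome size ≈ (total k-mer occurrences) / (homozygous coverage peak), after discarding low-multiplicity k-mers dominated by sequencing errors. The sketch below is a naive in-memory illustration of that relation, not the pipeline used in the study; k and the error cutoff are arbitrary choices.

```python
from collections import Counter

def kmer_counts(reads, k=17):
    """Multiplicity of every distinct k-mer across all reads (naive, in memory)."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def estimate_genome_size(counts, min_cov=3):
    """Genome size ~ total k-mer occurrences / coverage peak, ignoring
    k-mers seen fewer than min_cov times (likely sequencing errors)."""
    hist = Counter(counts.values())          # multiplicity -> number of distinct k-mers
    peak = max((m for m in hist if m >= min_cov), key=lambda m: hist[m])
    total = sum(m for m in counts.values() if m >= min_cov)
    return total / peak
```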
NASA Astrophysics Data System (ADS)
Dyomin, V. V.; Polovtsev, I. G.; Davydova, A. Yu.
2018-03-01
We report the physical principles of a method for determining the geometrical characteristics of particles and for particle recognition, based on the concepts of digital holography followed by processing of the particle images reconstructed from the digital hologram using a morphological parameter. An example of the application of this method to fast plankton particle recognition is given.
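The abstract does not define the morphological parameter, so the sketch below simply uses circularity (4*pi*A/P^2) of the segmented, hologram-reconstructed particle images as one plausible shape descriptor; scikit-image is an assumed dependency and this is not the authors' specific measure.

```python
import numpy as np
from skimage import measure   # assumed dependency for region analysis

def morphological_parameters(binary_image):
    """Circularity 4*pi*A/P^2 for each segmented particle image reconstructed
    from the hologram; values near 1 suggest round, plankton-like particles."""
    labels = measure.label(binary_image)
    params = []
    for region in measure.regionprops(labels):
        if region.perimeter > 0:
            params.append(4.0 * np.pi * region.area / region.perimeter ** 2)
    return params
```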
Tooth-size discrepancy: A comparison between manual and digital methods
Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge
2014-01-01
Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two of which by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using the Friedman test, and no statistically significant differences were found among the methods (P > 0.05); only the values obtained by the linear digital method showed a slight, statistically non-significant deviation. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529
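The statistical comparison reported here (four repeated measurements per arch, Friedman test) can be reproduced in outline with scipy; the discrepancy values below are synthetic and only illustrate the test call, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Synthetic tooth size-arch length discrepancy values (mm) for 10 lower arches,
# each measured four ways: caliper, brass wire, digital linear, digital parabola.
rng = np.random.default_rng(0)
true_disc = rng.normal(-2.0, 1.5, 10)
measurements = [true_disc + rng.normal(0, 0.2, 10) for _ in range(4)]

stat, p = friedmanchisquare(*measurements)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
# A p-value above 0.05 would indicate no systematic difference between the methods.
```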
An efficient approach to BAC based assembly of complex genomes.
Visendi, Paul; Berkman, Paul J; Hayashi, Satomi; Golicz, Agnieszka A; Bayer, Philipp E; Ruperao, Pradeep; Hurgobin, Bhavna; Montenegro, Juan; Chan, Chon-Kit Kenneth; Staňková, Helena; Batley, Jacqueline; Šimková, Hana; Doležel, Jaroslav; Edwards, David
2016-01-01
There has been an exponential growth in the number of genome sequencing projects since the introduction of next generation DNA sequencing technologies. Genome projects have increasingly involved assembly of whole genome data which produces inferior assemblies compared to traditional Sanger sequencing of genomic fragments cloned into bacterial artificial chromosomes (BACs). While whole genome shotgun sequencing using next generation sequencing (NGS) is relatively fast and inexpensive, this method is extremely challenging for highly complex genomes, where polyploidy or high repeat content confounds accurate assembly, or where a highly accurate 'gold' reference is required. Several attempts have been made to improve genome sequencing approaches by incorporating NGS methods, to variable success. We present the application of a novel BAC sequencing approach which combines indexed pools of BACs, Illumina paired read sequencing, a sequence assembler specifically designed for complex BAC assembly, and a custom bioinformatics pipeline. We demonstrate this method by sequencing and assembling BAC cloned fragments from bread wheat and sugarcane genomes. We demonstrate that our assembly approach is accurate, robust, cost effective and scalable, with applications for complete genome sequencing in large and complex genomes.
Determining protein function and interaction from genome analysis
Eisenberg, David; Marcotte, Edward M.; Thompson, Michael J.; Pellegrini, Matteo; Yeates, Todd O.
2004-08-03
A computational method, system, and computer program are provided for inferring functional links from genome sequences. One method is based on the observation that some pairs of proteins A' and B' have homologs in another organism fused into a single protein chain AB. A trans-genome comparison of sequences can reveal these AB sequences, which are Rosetta Stone sequences because they decipher an interaction between A' and B'. Another method compares the genomic sequence of two or more organisms to create a phylogenetic profile for each protein indicating its presence or absence across all the genomes. The profile provides information regarding functional links between different families of proteins. In yet another method, a combination of the above two methods is used to predict functional links.
Assigning protein functions by comparative genome analysis protein phylogenetic profiles
Pellegrini, Matteo; Marcotte, Edward M.; Thompson, Michael J.; Eisenberg, David; Grothe, Robert; Yeates, Todd O.
2003-05-13
A computational method, system, and computer program are provided for inferring functional links from genome sequences. One method is based on the observation that some pairs of proteins A' and B' have homologs in another organism fused into a single protein chain AB. A trans-genome comparison of sequences can reveal these AB sequences, which are Rosetta Stone sequences because they decipher an interaction between A' and B'. Another method compares the genomic sequence of two or more organisms to create a phylogenetic profile for each protein indicating its presence or absence across all the genomes. The profile provides information regarding functional links between different families of proteins. In yet another method, a combination of the above two methods is used to predict functional links.
Discrete-time modelling of musical instruments
NASA Astrophysics Data System (ADS)
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
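As a minimal, runnable taste of the digital waveguide idea discussed for the vibrating string, the sketch below collapses the two travelling-wave delay lines into a single delay loop with an averaging loss filter (the classic Karplus-Strong simplification); the parameter values are arbitrary and this is not code from the article.

```python
import numpy as np

def plucked_string(f0=220.0, fs=44100, dur=1.0, loss=0.996):
    """Single-delay-loop waveguide string (Karplus-Strong form): a delay line
    of about fs/f0 samples, re-fed through a two-point averaging loss filter."""
    N = int(round(fs / f0))
    delay = np.random.uniform(-1.0, 1.0, N)     # random initial state = a "pluck"
    out = np.empty(int(fs * dur))
    for n in range(out.size):
        out[n] = delay[0]
        new_sample = loss * 0.5 * (delay[0] + delay[1])   # reflection + low-pass loss
        delay[:-1] = delay[1:]                            # advance the delay line
        delay[-1] = new_sample
    return out

samples = plucked_string()
# `samples` can be written to a WAV file or plotted; its pitch is close to f0.
```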
Superimposition of 3D digital models: A case report.
José Viñas, María; Pie de Hierro, Verónica; M Ustrell-Torrent, Josep
2018-06-01
Superimposition of digital models may be performed to assess tooth movement in three dimensions. Detailed analysis of changes in tooth position after treatment may be achieved by this method. This article describes the method of superimposing digital models with a clinical case. It emphasizes the difficult procedure of superimposing 3D models in the lower arch. A methodology for superimposing mandibular models acquired with a structured light 3D scanner is discussed. Superimposition of digital models is useful to analyse tooth movement in the three planes of space, presenting advantages over the method of cephalogram superimposition. It seems feasible to superimpose digital models in the lower arch in patients without growth by using a coordinate system based on the palatal rugae and occlusion. The described method aims to advance the difficult procedure of superimposing digital models in the mandibular arch, but further research is nonetheless required in this field. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.
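The registration step in model superimposition is, at its core, a least-squares rigid alignment of corresponding landmark points (for example, points digitized on the palatal rugae). The sketch below shows the standard SVD-based (Kabsch) solution; it assumes landmark correspondences are already established and is a generic illustration rather than the specific workflow of this case report.

```python
import numpy as np

def superimpose(moving, fixed):
    """Least-squares rigid alignment (Kabsch/SVD) of two corresponding 3D point
    sets, e.g. landmarks sampled on stable palatal rugae of pre/post models."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

def apply_transform(points, R, t):
    """Map a whole digital model into the reference frame of the fixed model."""
    return points @ R.T + t
```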
A dictionary based informational genome analysis
2012-01-01
Background In the post-genomic era, several methods of computational genomics are emerging to understand how information is structured within whole genomes. The literature of the last five years includes several alignment-free methods, which have arisen as alternative metrics for the dissimilarity of biological sequences. Among others, recent approaches are based on empirical frequencies of DNA k-mers in whole genomes. Results Any set of words (factors) occurring in a genome provides a genomic dictionary. About sixty genomes were analyzed by means of informational indexes based on genomic dictionaries, where a systemic view replaces local sequence analysis. A software prototype applying the methodology outlined here carried out computations on genomic data. We computed informational indexes and built genomic dictionaries of different sizes, along with frequency distributions. The software performed three main tasks: computation of informational indexes, storage of these in a database, and index analysis and visualization. The validation was done by investigating genomes of various organisms. A systematic analysis of genomic repeats of several lengths, which is of considerable interest in biology (for example, to identify over-represented functional sequences such as promoters), was discussed and suggested a method to define synthetic genetic networks. Conclusions We introduced a methodology based on dictionaries and an efficient motif-finding software application for comparative genomics. This approach could be extended along many lines of investigation, for example by exporting it to other contexts of computational genomics as a basis for the discrimination of genomic pathologies. PMID:22985068
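A genomic dictionary of the kind described here is straightforward to build for small examples; one of the simplest informational indexes is the empirical (Shannon) entropy of the k-mer frequency distribution. The snippet below is a toy illustration of those two steps, not the prototype software from the paper.

```python
from collections import Counter
from math import log2

def genomic_dictionary(seq, k):
    """All k-mers (factors of length k) occurring in the genome, with frequencies."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def empirical_entropy(dictionary):
    """Shannon entropy (bits) of the k-mer frequency distribution, one of the
    informational indexes computable from a genomic dictionary."""
    total = sum(dictionary.values())
    return -sum((c / total) * log2(c / total) for c in dictionary.values())

genome = "ACGTACGTGGTTACGT" * 100      # placeholder sequence for illustration
D6 = genomic_dictionary(genome, 6)
print(len(D6), empirical_entropy(D6))
```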
Origin of life in a digital microcosm
NASA Astrophysics Data System (ADS)
C G, Nitash; LaBar, Thomas; Hintze, Arend; Adami, Christoph
2017-11-01
While all organisms on Earth share a common descent, there is no consensus on whether the origin of the ancestral self-replicator was a one-off event or whether it only represented the final survivor of multiple origins. Here, we use the digital evolution system Avida to study the origin of self-replicating computer programs. By using a computational system, we avoid many of the uncertainties inherent in any biochemical system of self-replicators (while running the risk of ignoring a fundamental aspect of biochemistry). We generated the exhaustive set of minimal-genome self-replicators and analysed the network structure of this fitness landscape. We further examined the evolvability of these self-replicators and found that the evolvability of a self-replicator is dependent on its genomic architecture. We also studied the differential ability of replicators to take over the population when competed against each other, akin to a primordial-soup model of biogenesis, and found that the probability of a self-replicator outcompeting the others is not uniform. Instead, progenitor (most-recent common ancestor) genotypes are clustered in a small region of the replicator space. Our results demonstrate how computational systems can be used as test systems for hypotheses concerning the origin of life. This article is part of the themed issue 'Reconceptualizing the origins of life'.
Rius, Nuria; Guillén, Yolanda; Delprat, Alejandra; Kapusta, Aurélie; Feschotte, Cédric; Ruiz, Alfredo
2016-05-10
Many new Drosophila genomes have been sequenced in recent years using new-generation sequencing platforms and assembly methods. Transposable elements (TEs), being repetitive sequences, are often misassembled, especially in genomes sequenced with short reads. Consequently, the mobile fraction of many of the new genomes has not been analyzed in detail or compared with that of other genomes sequenced with different methods, which could shed light on genome and TE evolution. Here we compare the TE content of three genomes: D. buzzatii st-1, j-19, and D. mojavensis. We have sequenced a new D. buzzatii genome (j-19) that complements the D. buzzatii reference genome (st-1) already published, and compared their TE contents with that of D. mojavensis. We found that TE sequences are underestimated in Drosophila genomes sequenced with NGS compared to those sequenced with Sanger technology. To be able to compare genomes sequenced with different technologies, we developed a coverage-based method and applied it to the D. buzzatii st-1 and j-19 genomes. Between 10.85 and 11.16% of the D. buzzatii st-1 genome is made up of TEs, between 7 and 7.5% of the D. buzzatii j-19 genome, while TEs represent 15.35% of the D. mojavensis genome. Helitrons are the most abundant order in the three genomes. TEs in D. buzzatii are less abundant than in D. mojavensis, as expected from the positive correlation between genome size and TE content. However, TEs alone do not explain the genome size difference. TEs accumulate in the dot chromosomes and proximal regions of D. buzzatii and D. mojavensis chromosomes. We also report a significantly higher TE density in D. buzzatii and D. mojavensis X chromosomes, which is not expected under the current models. Our easy-to-use correction method allowed us to identify recently active families in D. buzzatii st-1 belonging to the LTR-retrotransposon superfamily Gypsy.
Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures
Bryson, David M.; Ofria, Charles
2013-01-01
We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments, along with versions of each of the remaining architecture modifications that show significant improvements in multiple environments. However, some tested modifications were detrimental, though most exhibit no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. PMID:24376669
Digital methods for the history of psychology: Introduction and resources.
Fox Lee, Shayna
2016-02-01
At the York University Digital History of Psychology Laboratory, we have been working on projects that explore what digital methodologies have to offer historical research in our field. This piece provides perspective on the history and theory of digital history, as well as introductory resources for those who are curious about incorporating these methods into their own work. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Liang, Bin; Tan, Yaoju; Li, Zi; Tian, Xueshan; Du, Chen; Li, Hui; Li, Guoli; Yao, Xiangyang; Wang, Zhongan; Xu, Ye; Li, Qingge
2018-02-01
Detection of heteroresistance of Mycobacterium tuberculosis remains challenging using current genotypic drug susceptibility testing methods. Here, we describe a melting curve analysis-based approach, termed DeepMelt, that can detect less-abundant mutants through selective clamping of the wild type in mixed populations. The singleplex DeepMelt assay detected 0.01% katG S315T in 10^5 M. tuberculosis genomes/μl. The multiplex DeepMelt TB/INH assay detected 1% of mutant species in the four loci associated with isoniazid resistance in 10^4 M. tuberculosis genomes/μl. The DeepMelt TB/INH assay was tested on a panel of DNA extracted from 602 precharacterized clinical isolates. Using the 1% proportion method as the gold standard, the sensitivity was found to be increased from 93.6% (176/188, 95% confidence interval [CI] = 89.2 to 96.3%) to 95.7% (180/188, 95% CI = 91.8 to 97.8%) compared to the MeltPro TB/INH assay. Further evaluation of 109 smear-positive sputum specimens increased the sensitivity from 83.3% (20/24, 95% CI = 64.2 to 93.3%) to 91.7% (22/24, 95% CI = 74.2 to 97.7%). In both cases, the specificity remained nearly unchanged. All heteroresistant samples newly identified by the DeepMelt TB/INH assay were confirmed by DNA sequencing and even partially by digital PCR. The DeepMelt assay may fill the gap between current genotypic and phenotypic drug susceptibility testing for detecting drug-resistant tuberculosis patients. Copyright © 2018 American Society for Microbiology.
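The sensitivity figures and 95% confidence intervals quoted above are consistent with the Wilson score interval for a binomial proportion, which can be checked in a few lines; this is only a verification sketch, and the paper does not state which interval method was used.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

for label, k, n in [("comparator assay", 176, 188), ("DeepMelt assay", 180, 188)]:
    p, lo, hi = wilson_ci(k, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
# Prints 93.6% (89.2%-96.3%) and 95.7% (91.8%-97.8%), matching the reported values.
```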
Digital redesign of anti-wind-up controller for cascaded analog system.
Chen, Y S; Tsai, J S H; Shieh, L S; Moussighi, M M
2003-01-01
The cascaded conventional anti-wind-up (CAW) design method for the integral controller is discussed. The prediction-based digital redesign methodology is then utilized to find a new pulse amplitude modulated (PAM) digital controller for effective digital control of an analog plant with an input saturation constraint. The desired digital controller is determined from an existing or pre-designed CAW analog controller. The proposed method provides a novel methodology for the indirect digital design of a continuous-time unity output-feedback system with a cascaded analog controller, as in the case of PID controllers for industrial control processes in the presence of actuator saturation. It enables us to implement an existing or pre-designed cascaded CAW analog controller effectively via a digital controller.
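For orientation, the wind-up problem and its classic remedy can be sketched with a discrete PI loop using back-calculation: when the actuator saturates, the difference between the saturated and unsaturated commands is fed back to bleed off the integrator. This is a generic textbook sketch, not the prediction-based redesign proposed in the paper; the gains and limits are arbitrary.

```python
import numpy as np

def pi_with_antiwindup(ref, meas, T=0.01, Kp=1.0, Ki=2.0, Kaw=1.0, u_min=-1.0, u_max=1.0):
    """Discrete PI controller with back-calculation anti-windup.
    The term Kaw*(u_sat - u) drains the integrator whenever the actuator saturates."""
    integ = 0.0
    u_hist = []
    for r, y in zip(ref, meas):
        e = r - y
        u = Kp * e + integ
        u_sat = float(np.clip(u, u_min, u_max))
        integ += T * (Ki * e + Kaw * (u_sat - u))   # anti-windup back-calculation
        u_hist.append(u_sat)
    return np.array(u_hist)
```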
Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases
Zaslavsky, Ilya; Baldock, Richard A.; Boline, Jyl
2014-01-01
Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data-integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards based but open for novel data and resources, is required for integrating information such as signal distributions, gene-expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include Waxholm Markup Language (WaxML): XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POI)s, labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project. PMID:25309417
Earth BioGenome Project: Sequencing life for the future of life.
Lewin, Harris A; Robinson, Gene E; Kress, W John; Baker, William J; Coddington, Jonathan; Crandall, Keith A; Durbin, Richard; Edwards, Scott V; Forest, Félix; Gilbert, M Thomas P; Goldstein, Melissa M; Grigoriev, Igor V; Hackett, Kevin J; Haussler, David; Jarvis, Erich D; Johnson, Warren E; Patrinos, Aristides; Richards, Stephen; Castilla-Rubio, Juan Carlos; van Sluys, Marie-Anne; Soltis, Pamela S; Xu, Xun; Yang, Huanming; Zhang, Guojie
2018-04-24
Increasing our understanding of Earth's biodiversity and responsibly stewarding its resources are among the most crucial scientific and social challenges of the new millennium. These challenges require fundamental new knowledge of the organization, evolution, functions, and interactions among millions of the planet's organisms. Herein, we present a perspective on the Earth BioGenome Project (EBP), a moonshot for biology that aims to sequence, catalog, and characterize the genomes of all of Earth's eukaryotic biodiversity over a period of 10 years. The outcomes of the EBP will inform a broad range of major issues facing humanity, such as the impact of climate change on biodiversity, the conservation of endangered species and ecosystems, and the preservation and enhancement of ecosystem services. We describe hurdles that the project faces, including data-sharing policies that ensure a permanent, freely available resource for future scientific discovery while respecting access and benefit sharing guidelines of the Nagoya Protocol. We also describe scientific and organizational challenges in executing such an ambitious project, and the structure proposed to achieve the project's goals. The far-reaching potential benefits of creating an open digital repository of genomic information for life on Earth can be realized only by a coordinated international effort.
Real-World Evidence In Support Of Precision Medicine: Clinico-Genomic Cancer Data As A Case Study.
Agarwala, Vineeta; Khozin, Sean; Singal, Gaurav; O'Connell, Claire; Kuk, Deborah; Li, Gerald; Gossai, Anala; Miller, Vincent; Abernethy, Amy P
2018-05-01
The majority of US adult cancer patients today are diagnosed and treated outside the context of any clinical trial (that is, in the real world). Although these patients are not part of a research study, their clinical data are still recorded. Indeed, data captured in electronic health records form an ever-growing, rich digital repository of longitudinal patient experiences, treatments, and outcomes. Likewise, genomic data from tumor molecular profiling are increasingly guiding oncology care. Linking real-world clinical and genomic data, as well as information from other co-occurring data sets, could create study populations that provide generalizable evidence for precision medicine interventions. However, the infrastructure required to link, ensure quality, and rapidly learn from such composite data is complex. We outline the challenges and describe a novel approach to building a real-world clinico-genomic database of patients with cancer. This work represents a case study in how data collected during routine patient care can inform precision medicine efforts for the population at large. We suggest that health policies can promote innovation by defining appropriate uses of real-world evidence, establishing data standards, and incentivizing data sharing.
Formal hardware verification of digital circuits
NASA Technical Reports Server (NTRS)
Joyce, J.; Seger, C.-J.
1991-01-01
The use of formal methods to verify the correctness of digital circuits is less constrained by the growing complexity of digital circuits than conventional methods based on exhaustive simulation. This paper briefly outlines three main approaches to formal hardware verification: symbolic simulation, state machine analysis, and theorem-proving.
Zhou, Weiqiang; Sherwood, Ben; Ji, Hongkai
2017-01-01
Technological advances have led to an explosive growth of high-throughput functional genomic data. Exploiting the correlation among different data types, it is possible to predict one functional genomic data type from other data types. Prediction tools are valuable in understanding the relationship among different functional genomic signals. They also provide a cost-efficient solution to inferring the unknown functional genomic profiles when experimental data are unavailable due to resource or technological constraints. The predicted data may be used for generating hypotheses, prioritizing targets, interpreting disease variants, facilitating data integration, quality control, and many other purposes. This article reviews various applications of prediction methods in functional genomics, discusses analytical challenges, and highlights some common and effective strategies used to develop prediction methods for functional genomic data. PMID:28076869
[Genome editing of industrial microorganisms].
Zhu, Linjiang; Li, Qi
2015-03-01
Genome editing is defined as the highly effective and precise modification of a cellular genome on a large scale. In recent years, such genome-editing methods have developed rapidly in the field of industrial strain improvement. These quickly evolving methods are replacing the old, inefficient mode of genetic modification, namely "one modification, one selection marker, and one target site". Highly effective modification modes have been developed in genome editing, including simultaneous modification of multiple genes; efficient insertion, replacement, and deletion of target genes at the genome scale; and cut-and-paste of large DNA fragments. These new tools for microbial genome editing will certainly be applied widely, increasing the efficiency of industrial strain improvement and promoting both the transformation of the traditional fermentation industry and the rapid development of novel industrial biotechnologies such as biofuel and biomaterial production. This review summarizes the technological principles of these genome-editing methods and their applications, which can benefit the engineering and construction of industrial microorganisms.
Digital health technology and diabetes management.
Cahn, Avivit; Akirov, Amit; Raz, Itamar
2018-01-01
Diabetes care is largely dependent on patient self-management and empowerment, given that patients with diabetes must make numerous daily decisions about what to eat, when to exercise, and, if required, their insulin dose and timing. In addition, patients and providers are generating vast amounts of data from many sources, including electronic medical records, insulin pumps, sensors, glucometers, and other wearables, as well as evolving genomic, proteomic, metabolomic, and microbiomic data. Multiple digital tools and apps have been developed to assist patients to choose wisely, and to enhance their compliance by using motivational tools and incorporating incentives from social media and gaming techniques. Healthcare teams (HCTs) and health administrators benefit from digital developments that sift through the enormous amounts of patient-generated data. Data are acquired, integrated, analyzed, and presented in a self-explanatory manner, highlighting important trends and items that require attention. The use of decision support systems may propose data-driven actions that, for the most part, require final approval by the patient or physician before execution and, once implemented, may improve patient outcomes. The digital diabetes clinic aims to incorporate all digital patient data and provide individually tailored virtual or face-to-face visits to those persons who need them most. Digital diabetes care has demonstrated only modest HbA1c reduction in multiple studies and borderline cost-effectiveness, although patient satisfaction appears to be increased. Better understanding of the barriers to digital diabetes care and identification of unmet needs may yield improved utilization of this evolving technology in a safe, effective, and cost-saving manner. © 2017 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and John Wiley & Sons Australia, Ltd.
Rowlands, J A; Hunter, D M; Araj, N
1991-01-01
A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.
Digital carrier demodulator employing components working beyond normal limits
NASA Technical Reports Server (NTRS)
Hurd, William J. (Inventor); Sadr, Ramin (Inventor)
1990-01-01
In a digital device having an input comprised of a digital sample stream at a frequency F, a method is disclosed for employing a component designed to work at a frequency less than F. The method, in general, comprises the following steps: dividing the digital sample stream into odd and even digital sample streams, each at a frequency of F/2; passing one of the digital sample streams through the component designed to work at a frequency less than F, where the component responds only to the odd or even digital samples in that stream; delaying the other digital sample stream for the time it takes the first stream to pass through the component; and adding the stream that passed through the component to the other, delayed stream. In the specific example, the component is a finite impulse response filter of order ((N + 1)/2), and the delaying step comprises passing the other digital sample stream through a shift register for a time (in sampling periods) of ((N + 1)/2) + r, where r is a pipeline delay through the finite impulse response filter.
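The steps in this patent abstract can be mocked up numerically. The sketch below is only a schematic of the described decomposition (split into half-rate streams, filter one, delay the other by the filter latency, then add); the filter taps, delay handling, and input signal are placeholders, not the invention's actual demodulator.

```python
import numpy as np

def split_filter_recombine(x, taps):
    """Schematic of the described method: split a rate-F stream into even/odd
    rate-F/2 streams, FIR-filter one, delay the other by the filter's
    approximate group delay, then add the two streams back together."""
    even, odd = x[0::2], x[1::2]          # two streams at rate F/2

    filtered_even = np.convolve(even, taps, mode="full")[: len(even)]

    delay = (len(taps) - 1) // 2          # approximate FIR group delay
    delayed_odd = np.concatenate([np.zeros(delay), odd])[: len(odd)]

    return filtered_even + delayed_odd    # combined output at rate F/2

# Placeholder input and an arbitrary low-order FIR (moving average).
x = np.sin(2 * np.pi * 0.01 * np.arange(256))
taps = np.ones(9) / 9.0
y = split_filter_recombine(x, taps)
print(y.shape)
```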
Measuring Distances Using Digital Cameras
ERIC Educational Resources Information Center
Kendal, Dave
2007-01-01
This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
NASA Astrophysics Data System (ADS)
Aisyah Fadhillah Hafni, Dinda; Syaufina, Lailan; Puspaningsih, Nining; Prasasti, Indah
2018-05-01
The study was conducted in three land cover conditions (secondary peat forest, shrub land, and palm plantation) that were burned in the Siak District, Riau Province, Indonesia, in 2015. Measurement and calculation of carbon emissions from the soil and vegetation of peatland should be done accurately if they are to support climate change or greenhouse gas mitigation. The objective of the study was to estimate the carbon emissions caused by peatland fires in the Siak District, Riau Province, Indonesia, in 2015. Carbon emissions were estimated using a visual method and a digital method. The visual method uses on-screen digitization assisted by hotspot data, the presence of smoke, and fire suppression data. The digital method uses the Normalized Burn Ratio (NBR) index. The estimated carbon emissions were calculated using the equation developed from IPCC 2006 in the Verified Carbon Standard 2015. The results showed that the estimated carbon emissions from above the peat soil surface were higher than the carbon emissions from the peat soil itself. Carbon emissions above the peat soil surface of 1376.51 ton C/ha were obtained by the visual method, whereas 3984.33 ton C/ha were obtained by the digital method. Peat soil carbon emissions of 6.6 × 10⁻⁴ ton C/ha were obtained by the visual method, whereas 2.84 × 10⁻³ ton C/ha were obtained by the digital method. The visual and digital remote sensing methods should be combined and developed further so that carbon emission values will be more accurate.
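For context on the digital method, the Normalized Burn Ratio mentioned above is conventionally computed from near-infrared and shortwave-infrared reflectance. The sketch below shows that standard calculation plus the pre/post-fire difference (dNBR) on placeholder arrays; the threshold is a commonly cited value chosen here as an assumption, and none of this reproduces the study's full emission workflow.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = nir.astype(float)
    swir = swir.astype(float)
    return (nir - swir) / (nir + swir + 1e-12)

# Placeholder reflectance rasters standing in for satellite NIR/SWIR bands.
pre_nir, pre_swir = np.random.rand(100, 100), np.random.rand(100, 100)
post_nir, post_swir = np.random.rand(100, 100), np.random.rand(100, 100)

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burned = dnbr > 0.27        # an often-cited dNBR threshold (assumption here)
print("burned pixels:", int(burned.sum()))
```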
Fish genome manipulation and directional breeding.
Ye, Ding; Zhu, ZuoYan; Sun, YongHua
2015-02-01
Aquaculture is one of the fastest developing agricultural industries worldwide. One of the most important factors for sustainable aquaculture is the development of high performing culture strains. Genome manipulation offers a powerful method to achieve rapid and directional breeding in fish. We review the history of fish breeding methods based on classical genome manipulation, including polyploidy breeding and nuclear transfer. Then, we discuss the advances and applications of fish directional breeding based on transgenic technology and recently developed genome editing technologies. These methods offer increased efficiency, precision and predictability in genetic improvement over traditional methods.
Evaluation Digital Elevation Model Generated by Synthetic Aperture Radar Data
NASA Astrophysics Data System (ADS)
Makineci, H. B.; Karabörk, H.
2016-06-01
A digital elevation model (DEM), showing the physical and topographical situation of the Earth, is a three-dimensional digital model obtained from surface elevations by means of an appropriate interpolation method. DEMs are used in many areas such as management of natural resources, engineering and infrastructure projects, disaster and risk analysis, archaeology, security, aviation, forestry, energy, topographic mapping, landslide and flood analysis, and Geographic Information Systems (GIS). Digital elevation models, which are fundamental components of cartography, are produced by many methods; in general they can be obtained by terrestrial surveying or by digitizing existing maps. Today, DEM data are generated by processing stereo optical satellite images, radar images (radargrammetry, interferometry), and lidar data using remote sensing and photogrammetric techniques, aided by improving technology. Radar is one of the fundamental components of remote sensing and is now very advanced, so it is used increasingly in various fields; determining the shape of the topography and creating digital elevation models are among the foremost of these applications. This work aims to evaluate the quality of a DEM derived from a Sentinel-1A SAR image, provided by the European Space Agency (ESA) and acquired in Interferometric Wide swath mode in C band, against DTED-2 (Digital Terrain Elevation Data). The evaluation uses an RMS-based statistical method to assess the precision of the data. The results show that the variance of the points decreases markedly from mountainous areas to flat areas.
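The RMS-based accuracy check described above reduces to a few lines of code: compute the root-mean-square error of the SAR-derived DEM against reference elevations. The arrays below are placeholders, not Sentinel-1A or DTED-2 data.

```python
import numpy as np

def dem_rmse(test_elev, ref_elev):
    """Root-mean-square error between a test DEM and reference elevations,
    ignoring no-data cells (encoded here as NaN)."""
    diff = test_elev - ref_elev
    diff = diff[~np.isnan(diff)]
    return float(np.sqrt(np.mean(diff ** 2)))

# Placeholder elevation grids (metres).
reference = np.random.normal(500, 50, size=(200, 200))
sar_dem = reference + np.random.normal(0, 8, size=(200, 200))  # 8 m noise

print(f"RMSE = {dem_rmse(sar_dem, reference):.1f} m")
```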
Handwritten digits recognition based on immune network
NASA Astrophysics Data System (ADS)
Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe
2011-11-01
With the development of society, handwritten digit recognition has been widely applied in production and daily life, yet it remains a difficult problem in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and features are extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification; its characteristic is a novel network with parallel computation and learning. The performance of the proposed method is evaluated on the handwritten digit dataset MNIST and compared with other recognition algorithms (KNN, ANN, and SVM). The results show that the novel classification algorithm based on the immune network gives promising performance and stable behavior for handwritten digit recognition.
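The immune-network classifier itself is not reproduced here, but the KNN baseline it is compared against is easy to sketch. The example below runs KNN on scikit-learn's small built-in digits dataset (an assumption standing in for MNIST), purely to illustrate the comparison setup.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Small built-in 8x8 digit images stand in for MNIST in this sketch.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```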
A new strategy for genome assembly using short sequence reads and reduced representation libraries.
Young, Andrew L; Abaan, Hatice Ozel; Zerbino, Daniel; Mullikin, James C; Birney, Ewan; Margulies, Elliott H
2010-02-01
We have developed a novel approach for using massively parallel short-read sequencing to generate fast and inexpensive de novo genomic assemblies comparable to those generated by capillary-based methods. The ultrashort (<100 base) sequences generated by this technology pose specific biological and computational challenges for de novo assembly of large genomes. To account for this, we devised a method for experimentally partitioning the genome using reduced representation (RR) libraries prior to assembly. We use two restriction enzymes independently to create a series of overlapping fragment libraries, each containing a tractable subset of the genome. Together, these libraries allow us to reassemble the entire genome without the need for a reference sequence. As proof of concept, we applied this approach to sequence and assemble the majority of the 125-Mb Drosophila melanogaster genome. We subsequently demonstrate the accuracy of our assembly method with meaningful comparisons against the currently available D. melanogaster reference genome (dm3). The ease of assembly and accuracy for comparative genomics suggest that our approach will scale to future mammalian genome-sequencing efforts, saving both time and money without sacrificing quality.
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings obtained by projecting a reference grating onto an object under test are subjected to digital analysis. The electronic analysis procedures of deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation are performed. Based on the digital moire methods, we have developed a practical measurement system, with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
High-resolution digital holography with the aid of coherent diffraction imaging.
Jiang, Zhilong; Veetil, Suhas P; Cheng, Jun; Liu, Cheng; Wang, Ling; Zhu, Jianqiang
2015-08-10
Images reconstructed by ordinary digital holography cannot achieve the resolution offered by photographic materials, making the technique less attractive for many interesting applications. A method is proposed to enhance the resolution of digital holography in all directions by placing a random phase plate between the specimen and the electronic camera and then using an iterative approach for the reconstruction. With this method, the resolution is improved remarkably in comparison to ordinary digital holography. The theoretical analysis is supported by numerical simulation, and the feasibility of the method is also studied experimentally.
Image manipulation: Fraudulence in digital dental records: Study and review
Chowdhry, Aman; Sircar, Keya; Popli, Deepika Bablani; Tandon, Ankita
2014-01-01
Introduction: Nowadays, freely available software allows dentists to tweak their digital records as never before. However, there is a fine line between acceptable enhancements and scientific delinquency. Aims and Objective: To manipulate digital images (used in forensic dentistry) of casts, lip prints, and bite marks in order to highlight tampering techniques and methods of detecting and preventing manipulation of digital images. Materials and Methods: Digital image records of forensic data (casts, lip prints, and bite marks photographed using a Samsung Techwin L77 digital camera) were manipulated using freely available software. Results: Fake digital images can be created either by merging two or more digital images, or by altering an existing image. Discussion and Conclusion: Retouched digital images can be used for fraudulent purposes in forensic investigations. However, tools are available to detect such digital frauds, which are extremely difficult to assess visually. Thus, all digital content should mandatorily have attached metadata and preferably watermarking in order to avert malicious re-use. Also, computer awareness, especially of imaging software, should be promoted among forensic odontologists/dental professionals. PMID:24696587
Parks, Donovan H.; Imelfort, Michael; Skennerton, Connor T.; Hugenholtz, Philip; Tyson, Gene W.
2015-01-01
Large-scale recovery of genomes from isolates, single cells, and metagenomic data has been made possible by advances in computational methods and substantial reductions in sequencing costs. Although this increasing breadth of draft genomes is providing key information regarding the evolutionary and functional diversity of microbial life, it has become impractical to finish all available reference genomes. Making robust biological inferences from draft genomes requires accurate estimates of their completeness and contamination. Current methods for assessing genome quality are ad hoc and generally make use of a limited number of “marker” genes conserved across all bacterial or archaeal genomes. Here we introduce CheckM, an automated method for assessing the quality of a genome using a broader set of marker genes specific to the position of a genome within a reference genome tree and information about the collocation of these genes. We demonstrate the effectiveness of CheckM using synthetic data and a wide range of isolate-, single-cell-, and metagenome-derived genomes. CheckM is shown to provide accurate estimates of genome completeness and contamination and to outperform existing approaches. Using CheckM, we identify a diverse range of errors currently impacting publicly available isolate genomes and demonstrate that genomes obtained from single cells and metagenomic data vary substantially in quality. In order to facilitate the use of draft genomes, we propose an objective measure of genome quality that can be used to select genomes suitable for specific gene- and genome-centric analyses of microbial communities. PMID:25977477
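As a rough, hedged illustration of the marker-gene bookkeeping behind completeness and contamination estimates (not CheckM's actual lineage-specific, collocated marker-set algorithm), consider the toy calculation below; the marker names and hit counts are invented.

```python
from collections import Counter

def completeness_contamination(expected_markers, observed_hits):
    """Toy single-copy marker-gene summary.

    completeness  ~ fraction of expected markers seen at least once
    contamination ~ extra copies of expected markers, relative to the
                    number of expected markers
    """
    counts = Counter(observed_hits)
    present = sum(1 for m in expected_markers if counts[m] >= 1)
    extra = sum(max(counts[m] - 1, 0) for m in expected_markers)
    n = len(expected_markers)
    return 100.0 * present / n, 100.0 * extra / n

# Invented marker set and hits from a hypothetical draft genome.
markers = ["rpoB", "recA", "gyrB", "dnaK", "infB"]
hits = ["rpoB", "recA", "recA", "gyrB", "dnaK"]   # infB missing, recA duplicated
print(completeness_contamination(markers, hits))   # (80.0, 20.0)
```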
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM) approach, the new method uses a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with the conventional methods, the SM algorithm simplifies the two-dimensional (2-D) Fourier transforms (FTs) of four N*N arrays into 1.25 2-D FTs of one N*N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
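The spatial multiplexing idea described above (multiply each recorded hologram by a distinct tilted plane wave, sum them into one function, then take a single Fourier transform) can be sketched numerically as below. The carrier frequencies and hologram arrays are placeholders, and this is only a schematic of the combination step, not the authors' optimized implementation.

```python
import numpy as np

def spatially_multiplex(holograms, carriers):
    """Combine several off-axis holograms into one synthetic function by
    multiplying each with a distinct tilted plane wave and summing, so that
    a single 2-D FFT places their spectra in different regions."""
    ny, nx = holograms[0].shape
    y, x = np.mgrid[0:ny, 0:nx]
    combined = np.zeros((ny, nx), dtype=complex)
    for h, (fy, fx) in zip(holograms, carriers):
        combined += h * np.exp(2j * np.pi * (fx * x / nx + fy * y / ny))
    return combined

# Four placeholder holograms and four distinct carrier offsets (cycles/frame).
holos = [np.random.rand(256, 256) for _ in range(4)]
carriers = [(40, 0), (-40, 0), (0, 40), (0, -40)]
spectrum = np.fft.fftshift(np.fft.fft2(spatially_multiplex(holos, carriers)))
print(spectrum.shape)
```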
Accurate Prediction of Inducible Transcription Factor Binding Intensities In Vivo
Siepel, Adam; Lis, John T.
2012-01-01
DNA sequence and local chromatin landscape act jointly to determine transcription factor (TF) binding intensity profiles. To disentangle these influences, we developed an experimental approach, called protein/DNA binding followed by high-throughput sequencing (PB–seq), that allows the binding energy landscape to be characterized genome-wide in the absence of chromatin. We applied our methods to the Drosophila Heat Shock Factor (HSF), which inducibly binds a target DNA sequence element (HSE) following heat shock stress. PB–seq involves incubating sheared naked genomic DNA with recombinant HSF, partitioning the HSF–bound and HSF–free DNA, and then detecting HSF–bound DNA by high-throughput sequencing. We compared PB–seq binding profiles with ones observed in vivo by ChIP–seq and developed statistical models to predict the observed departures from idealized binding patterns based on covariates describing the local chromatin environment. We found that DNase I hypersensitivity and tetra-acetylation of H4 were the most influential covariates in predicting changes in HSF binding affinity. We also investigated the extent to which DNA accessibility, as measured by digital DNase I footprinting data, could be predicted from MNase–seq data and the ChIP–chip profiles for many histone modifications and TFs, and found GAGA element associated factor (GAF), tetra-acetylation of H4, and H4K16 acetylation to be the most predictive covariates. Lastly, we generated an unbiased model of HSF binding sequences, which revealed distinct biophysical properties of the HSF/HSE interaction and a previously unrecognized substructure within the HSE. These findings provide new insights into the interplay between the genomic sequence and the chromatin landscape in determining transcription factor binding intensity. PMID:22479205
Novel Digital Driving Method Using Dual Scan for Active Matrix Organic Light-Emitting Diode Displays
NASA Astrophysics Data System (ADS)
Jung, Myoung Hoon; Choi, Inho; Chung, Hoon-Ju; Kim, Ohyun
2008-11-01
A new digital driving method has been developed for low-temperature polycrystalline silicon, transistor-driven, active-matrix organic light-emitting diode (AM-OLED) displays by time-ratio gray-scale expression. This driving method effectively increases the emission ratio and the number of subfields by inserting another subfield set into nondisplay periods in the conventional digital driving method. By employing the proposed modified gravity center coding, this method can be used to effectively compensate for dynamic false contour noise. The operation and performance were verified by current measurement and image simulation. The simulation results using eight test images show that the proposed approach improves the average peak signal-to-noise ratio by 2.61 dB, and the emission ratio by 20.5%, compared with the conventional digital driving method.
A text zero-watermarking method based on keyword dense interval
NASA Astrophysics Data System (ADS)
Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin
2017-07-01
Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, earlier methods rarely focused on the key content of the digital carrier. The idea of protecting key content is more targeted and can be applied to different kinds of digital information, including text, images, and video. In this paper, we take text as the research object and propose a text zero-watermarking method that uses the keyword dense interval (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of the KDI and giving the method of KDI extraction. Second, we design a detection model that includes secondary generation of the zero-watermark and a similarity computation method for the keyword distribution. In addition, experiments were carried out, and the results show that the proposed method gives better performance than other available methods, especially against sentence-transformation and synonym-substitution attacks.
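The notion of a keyword dense interval can be illustrated with a simple sliding-window scan over keyword positions. The window size, density threshold, and keyword list below are assumptions for illustration, not the parameters of the proposed watermarking scheme.

```python
def keyword_dense_intervals(words, keywords, window=50, min_hits=5):
    """Return (start, end) word-index intervals in which at least `min_hits`
    keyword occurrences fall inside a `window`-word span."""
    positions = [i for i, w in enumerate(words) if w.lower() in keywords]
    intervals = []
    for i, start_pos in enumerate(positions):
        j = i
        while j < len(positions) and positions[j] < start_pos + window:
            j += 1
        if j - i >= min_hits:
            intervals.append((start_pos, positions[j - 1]))
    # Merge overlapping intervals into maximal dense regions.
    merged = []
    for s, e in intervals:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

text = ("copyright protection of digital text is a copyright issue and "
        "protection of copyright text today").split()
print(keyword_dense_intervals(text, {"copyright", "protection", "text"},
                              window=10, min_hits=3))
```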
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
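To make the general idea concrete (this is not the patented modular-error scheme itself), the sketch below embeds auxiliary bits into the low-order bits of host samples after permuting the processing order with a key-seeded shuffle; the key handling and bit budget are assumptions.

```python
import numpy as np

def embed_bits(host, bits, key):
    """Hide `bits` in the least-significant bits of `host` (uint8 samples),
    visiting sample positions in a key-dependent pseudo-random order."""
    order = np.random.default_rng(key).permutation(host.size)[: len(bits)]
    stego = host.copy().ravel()
    stego[order] = (stego[order] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return stego.reshape(host.shape)

def extract_bits(stego, n_bits, key):
    """Recover the embedded bits using the same key-derived ordering."""
    order = np.random.default_rng(key).permutation(stego.size)[:n_bits]
    return (stego.ravel()[order] & 1).tolist()

host = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(host, payload, key=1234)
assert extract_bits(stego, len(payload), key=1234) == payload
```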
Ultrafast Comparison of Personal Genomes via Precomputed Genome Fingerprints.
Glusman, Gustavo; Mauldin, Denise E; Hood, Leroy E; Robinson, Max
2017-01-01
We present an ultrafast method for comparing personal genomes. We transform the standard genome representation (lists of variants relative to a reference) into "genome fingerprints" via locality sensitive hashing. The resulting genome fingerprints can be meaningfully compared even when the input data were obtained using different sequencing technologies, processed using different pipelines, represented in different data formats and relative to different reference versions. Furthermore, genome fingerprints are robust to up to 30% missing data. Because of their reduced size, computation on the genome fingerprints is fast and requires little memory. For example, we could compute all-against-all pairwise comparisons among the 2504 genomes in the 1000 Genomes data set in 67 s at high quality (21 μs per comparison, on a single processor), and achieved a lower quality approximation in just 11 s. Efficient computation enables scaling up a variety of important genome analyses, including quantifying relatedness, recognizing duplicative sequenced genomes in a set, population reconstruction, and many others. The original genome representation cannot be reconstructed from its fingerprint, effectively decoupling genome comparison from genome interpretation; the method thus has significant implications for privacy-preserving genome analytics.
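As a hedged sketch of fingerprint-style genome comparison (a generic bottom-k sketch, not the paper's locality-sensitive hashing scheme), the example below hashes variant strings, keeps the smallest hashes as a compact fingerprint, and estimates set similarity from the fingerprints alone; the variant encoding and sketch size are assumptions.

```python
import hashlib

def fingerprint(variants, k=2048):
    """Bottom-k sketch of a variant set: keep the k smallest 64-bit hashes."""
    hashes = sorted(
        int.from_bytes(hashlib.blake2b(v.encode(), digest_size=8).digest(), "big")
        for v in variants)
    return hashes[:k]

def estimate_jaccard(fp_a, fp_b, k=2048):
    """Estimate Jaccard similarity of the original sets from two sketches."""
    set_a, set_b = set(fp_a), set(fp_b)
    union_sketch = sorted(set_a | set_b)[:k]
    shared = sum(1 for h in union_sketch if h in set_a and h in set_b)
    return shared / len(union_sketch)

# Two invented variant sets with partial overlap.
genome_a = {f"chr1:{p}:A>G" for p in range(0, 100000, 7)}
genome_b = set(list(genome_a)[:10000]) | {f"chr2:{p}:C>T" for p in range(5000)}
fa, fb = fingerprint(genome_a), fingerprint(genome_b)
print(f"estimated Jaccard: {estimate_jaccard(fa, fb):.2f}")
```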
Development of a digital microfluidic platform for point of care testing
Sista, Ramakrishna; Hua, Zhishan; Thwar, Prasanna; Sudarsan, Arjun; Srinivasan, Vijay; Eckhardt, Allen; Pollack, Michael; Pamula, Vamsee
2009-01-01
Point of care testing is playing an increasingly important role in improving the clinical outcome in health care management. The salient features of a point of care device are quick results, integrated sample preparation and processing, small sample volumes, portability, multifunctionality and low cost. In this paper, we demonstrate some of these salient features utilizing an electrowetting-based Digital Microfluidic platform. We demonstrate the performance of magnetic bead-based immunoassays (cardiac troponin I) on a digital microfluidic cartridge in less than 8 minutes using whole blood samples. Using the same microfluidic cartridge, a 40-cycle real-time polymerase chain reaction was performed within 12 minutes by shuttling a droplet between two thermal zones. We further demonstrate, on the same cartridge, the capability to perform sample preparation for bacterial and fungal infectious disease pathogens (methicillin-resistant Staphylococcus aureus and Candida albicans) and for human genomic DNA using magnetic beads. In addition to rapid results and integrated sample preparation, electrowetting-based digital microfluidic instruments are highly portable because fluid pumping is performed electronically. All the digital microfluidic chips presented here were fabricated on printed circuit boards utilizing mass production techniques that keep the cost of the chip low. Due to the modularity and scalability afforded by digital microfluidics, multifunctional testing capability, such as combinations within and between immunoassays, DNA amplification, and enzymatic assays, can be brought to the point of care at a relatively low cost because a single chip can be configured in software for different assays required along the path of care. PMID:19023472
Reconstruction of a digital core containing clay minerals based on a clustering algorithm.
He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling
2017-10-01
It is difficult to obtain core samples and information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Although two-dimensional data-based reconstruction methods are well suited to simulating the microstructure of sandstone reservoirs, the reconstruction and subdivision of clay minerals play a vital role in building digital cores, and better reconstruction of the various clay minerals remains a research challenge. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After applying the hybrid method, a digital core containing clay clusters, without labels for the clusters' number, size, and texture, was produced and compared with the model reconstructed by the process-based method. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the various connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. At the same time, the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
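A minimal sketch of the labeling-then-splitting idea is shown below, under the assumption that scipy's connected-component labeling can stand in for the Hoshen-Kopelman algorithm and that K-means on pixel coordinates approximates the cluster subdivision step; the clay field, cluster-size threshold, and connectivity are placeholders.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
clay_mask = rng.random((128, 128)) < 0.45         # placeholder binary clay field

# Label connected clay clusters (connected-component labelling is used here
# as a stand-in for the Hoshen-Kopelman algorithm).
labels, n_clusters = ndimage.label(clay_mask)
sizes = ndimage.sum(clay_mask, labels, index=range(1, n_clusters + 1))

# Split every large connected cluster into smaller sub-clusters with K-means
# on its pixel coordinates (the size threshold of 200 is arbitrary).
refined = labels.copy()
next_id = n_clusters + 1
for cid, size in zip(range(1, n_clusters + 1), sizes):
    if size > 200:
        coords = np.argwhere(labels == cid)
        sub = KMeans(n_clusters=int(size // 200) + 1, n_init=10,
                     random_state=0).fit_predict(coords)
        for s in range(1, sub.max() + 1):
            refined[tuple(coords[sub == s].T)] = next_id
            next_id += 1

print(n_clusters, "connected clusters ->", len(np.unique(refined)) - 1, "refined")
```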
A method for normalizing pathology images to improve feature extraction for quantitative pathology.
Tam, Allison; Barker, Jocelyn; Rubin, Daniel
2016-01-01
With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
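A rough approximation of the two-stage idea (center the intensity histogram, then apply contrast-limited adaptive histogram equalization) can be sketched with scikit-image; this is not the published ICHE implementation, and the target mean, clip limit, and synthetic tiles are assumptions.

```python
import numpy as np
from skimage import exposure

def iche_like_normalize(image, target_mean=0.6, clip_limit=0.01):
    """Approximate intensity centering + histogram equalization: shift the
    image's mean grey level to a common target, then apply CLAHE.
    `image` is a float array scaled to [0, 1]."""
    centered = np.clip(image + (target_mean - image.mean()), 0.0, 1.0)
    return exposure.equalize_adapthist(centered, clip_limit=clip_limit)

# Placeholder greyscale tiles from two staining batches with different brightness.
rng = np.random.default_rng(1)
batch_a = np.clip(rng.normal(0.45, 0.1, (256, 256)), 0, 1)
batch_b = np.clip(rng.normal(0.75, 0.1, (256, 256)), 0, 1)
norm_a, norm_b = iche_like_normalize(batch_a), iche_like_normalize(batch_b)
print(norm_a.mean(), norm_b.mean())
```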
Gregory, T Ryan; Nathwani, Paula; Bonnett, Tiffany R; Huber, Dezene P W
2013-09-01
A study was undertaken to evaluate both a pre-existing method and a newly proposed approach for the estimation of nuclear genome sizes in arthropods. First, concerns regarding the reliability of the well-established method of flow cytometry relating to impacts of rearing conditions on genome size estimates were examined. Contrary to previous reports, a more carefully controlled test found negligible environmental effects on genome size estimates in the fly Drosophila melanogaster. Second, a more recently touted method based on quantitative real-time PCR (qPCR) was examined in terms of ease of use, efficiency, and (most importantly) accuracy using four test species: the flies Drosophila melanogaster and Musca domestica and the beetles Tribolium castaneum and Dendroctonus ponderosae. The results of this analysis demonstrated that qPCR has the tendency to produce substantially different genome size estimates from other established techniques while also being far less efficient than existing methods.
Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.
Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti
2017-06-28
The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
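The Bland-Altman agreement analysis used in this comparison is easy to reproduce in outline: the bias is the mean difference between methods and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations. The measurement values below are simulated placeholders, not the study's data.

```python
import numpy as np

def bland_altman(manual, digital):
    """Bland-Altman summary: mean difference (bias) and 95% limits of
    agreement between two measurement methods."""
    manual, digital = np.asarray(manual, float), np.asarray(digital, float)
    diff = digital - manual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Placeholder inter-arch measurements (mm) from 30 cast/model pairs.
rng = np.random.default_rng(3)
manual = rng.normal(35.0, 3.0, 30)
digital = manual + rng.normal(0.1, 0.25, 30)     # small simulated bias/noise
bias, lower, upper = bland_altman(manual, digital)
print(f"bias = {bias:.2f} mm, limits of agreement = [{lower:.2f}, {upper:.2f}] mm")
```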
Magnified reconstruction of digitally recorded holograms by Fresnel-Bluestein transform.
Restrepo, John F; Garcia-Sucerquia, Jorge
2010-11-20
A method for numerical reconstruction of digitally recorded holograms with variable magnification is presented. The proposed strategy allows for smaller, equal, or larger magnification than that achieved with the Fresnel transform by introducing the Bluestein substitution into the Fresnel kernel. The magnification is obtained independently of distance, wavelength, and number of pixels, which enables the method to be applied in color digital holography and metrological applications. The approach is supported by experimental and simulation results in digital holography of objects with dimensions comparable to those of the recording device and in the reconstruction of holograms from digital in-line holographic microscopy.
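For contrast with the variable-magnification approach, the sketch below shows the standard single-FFT Fresnel reconstruction, whose image-plane pixel size is fixed by distance, wavelength, and pixel count; it is not the Fresnel-Bluestein variant, and the hologram and optical parameters are placeholders (constant phase factors are dropped because only intensity is kept).

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, distance, pixel_pitch):
    """Standard single-FFT Fresnel reconstruction of a digital hologram:
    multiply by a quadratic chirp, take one FFT, and keep the intensity."""
    ny, nx = hologram.shape
    y = (np.arange(ny) - ny / 2) * pixel_pitch
    x = (np.arange(nx) - nx / 2) * pixel_pitch
    xx, yy = np.meshgrid(x, y)
    chirp = np.exp(1j * np.pi / (wavelength * distance) * (xx**2 + yy**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
    return np.abs(field) ** 2           # reconstructed intensity

# Placeholder hologram and typical-looking parameters (assumptions).
holo = np.random.rand(512, 512)
img = fresnel_reconstruct(holo, wavelength=633e-9, distance=0.2,
                          pixel_pitch=5e-6)
print(img.shape)
```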
Evaluation of the validity of the Bolton Index using cone-beam computed tomography (CBCT)
Llamas, José M.; Cibrián, Rosa; Gandía, José L.; Paredes, Vanessa
2012-01-01
Aims: To evaluate the reliability and reproducibility of calculating the Bolton Index using cone-beam computed tomography (CBCT), and to compare this with measurements obtained using the 2D Digital Method. Material and Methods: Traditional study models were obtained from 50 patients, which were then digitized in order to be able to measure them using the Digital Method. Likewise, CBCTs of those same patients were undertaken using the Dental Picasso Master 3D® and the images obtained were then analysed using the InVivoDental programme. Results: By determining the regression lines for both measurement methods, as well as the difference between both of their values, the two methods are shown to be comparable, despite the fact that the measurements analysed presented statistically significant differences. Conclusions: The three-dimensional models obtained from the CBCT are as accurate and reproducible as the digital models obtained from the plaster study casts for calculating the Bolton Index. The differences existing between both methods were clinically acceptable. Key words:Tooth-size, digital models, bolton index, CBCT. PMID:22549690
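For readers unfamiliar with the Bolton analysis behind these comparisons, the sketch below computes the overall and anterior ratios from mesiodistal tooth widths; the width values are hypothetical, and the quoted norms (roughly 91.3% overall, 77.2% anterior) are the classic Bolton reference values rather than results from this study.

```python
def bolton_ratios(mandibular_widths, maxillary_widths):
    """Bolton analysis from mesiodistal tooth widths (mm).

    overall ratio  = sum of 12 mandibular widths / sum of 12 maxillary * 100
    anterior ratio = sum of 6 mandibular anterior widths / 6 maxillary * 100
    Classic Bolton norms are roughly 91.3% (overall) and 77.2% (anterior)."""
    overall = 100.0 * sum(mandibular_widths) / sum(maxillary_widths)
    anterior = 100.0 * sum(mandibular_widths[:6]) / sum(maxillary_widths[:6])
    return overall, anterior

# Hypothetical widths, anterior teeth (canine to canine) listed first.
mandibular = [5.4, 5.9, 6.9, 5.4, 5.9, 6.9, 7.1, 7.3, 10.8, 7.1, 7.3, 10.8]
maxillary = [8.6, 6.6, 7.9, 8.6, 6.6, 7.9, 7.0, 6.6, 10.2, 7.0, 6.6, 10.2]
overall, anterior = bolton_ratios(mandibular, maxillary)
print(f"overall {overall:.1f}%, anterior {anterior:.1f}%")
```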
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining the filter coefficients, which are computed in the sample domain from a Hankel transform pair. However, the shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 when the digital filter coefficients of the sine and cosine transforms are obtained. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the choice of the parameter strongly influences the precision of the digital filter algorithm. Second, with the optimal choice of the parameter, an optimal sampling interval s is also found to exist that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients with different lengths, which may help to develop the digital filter algorithm for the sine and cosine transforms and promote its application.
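The general calling form of such a digital-filter evaluation can be sketched as below: the sine transform is approximated as a weighted sum of the input function sampled at log-spaced abscissae. The filter table used here is built from a naive log-space quadrature rule purely to make the example runnable; it is not one of the optimized coefficient sets proposed in the paper.

```python
import numpy as np

def sine_transform_filter(F, r, shifts, weights):
    """Evaluate G(r) = integral_0^inf F(lam) sin(lam*r) dlam in the general
    digital-filter form G(r) ~ (1/r) * sum_i F(10**shifts[i] / r) * weights[i],
    where the log10-spaced `shifts` and the `weights` form the filter table."""
    lam = 10.0 ** np.asarray(shifts) / r
    return float(np.sum(F(lam) * np.asarray(weights)) / r)

# Stand-in filter table from a naive log-space quadrature rule (illustration
# of the calling form only, not designed filter coefficients).
s = 0.005                                    # sampling interval in log10(x)
shifts = np.arange(-4.0, 3.0, s)
x = 10.0 ** shifts
weights = x * np.sin(x) * np.log(10.0) * s

# Check against a known pair: integral of exp(-lam)*sin(lam*r) = r / (1 + r^2).
r = 2.0
approx = sine_transform_filter(lambda lam: np.exp(-lam), r, shifts, weights)
print(approx, r / (1 + r**2))
```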
Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.
Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi
2017-02-01
To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.
BAC sequencing using pooled methods.
Saski, Christopher A; Feltus, F Alex; Parida, Laxmi; Haiminen, Niina
2015-01-01
Shotgun sequencing and assembly of a large, complex genome can be expensive, and accurately reconstructing the true genome sequence is challenging. Repetitive DNA arrays, paralogous sequences, polyploidy, and heterozygosity are the main factors that plague de novo genome sequencing projects, typically resulting in highly fragmented assemblies from which it is difficult to extract biological meaning. Targeted, sub-genomic sequencing offers complexity reduction by removing distal segments of the genome and a systematic mechanism for exploring prioritized genomic content through BAC sequencing. If one isolates and sequences the genome fraction that encodes the relevant biological information, then overall sequencing costs and effort can be reduced by targeting that genomic segment. This chapter describes the sub-genome assembly protocol for an organism based upon a BAC tiling path derived from a genome-scale physical map or from fine mapping using BACs to target sub-genomic regions. Methods described include BAC isolation and mapping, DNA sequencing, and sequence assembly.
De novo assembly of a haplotype-resolved human genome.
Cao, Hongzhi; Wu, Honglong; Luo, Ruibang; Huang, Shujia; Sun, Yuhui; Tong, Xin; Xie, Yinlong; Liu, Binghang; Yang, Hailong; Zheng, Hancheng; Li, Jian; Li, Bo; Wang, Yu; Yang, Fang; Sun, Peng; Liu, Siyang; Gao, Peng; Huang, Haodong; Sun, Jing; Chen, Dan; He, Guangzhu; Huang, Weihua; Huang, Zheng; Li, Yue; Tellier, Laurent C A M; Liu, Xiao; Feng, Qiang; Xu, Xun; Zhang, Xiuqing; Bolund, Lars; Krogh, Anders; Kristiansen, Karsten; Drmanac, Radoje; Drmanac, Snezana; Nielsen, Rasmus; Li, Songgang; Wang, Jian; Yang, Huanming; Li, Yingrui; Wong, Gane Ka-Shu; Wang, Jun
2015-06-01
The human genome is diploid, and knowledge of the variants on each chromosome is important for the interpretation of genomic information. Here we report the assembly of a haplotype-resolved diploid genome without using a reference genome. Our pipeline relies on fosmid pooling together with whole-genome shotgun strategies, based solely on next-generation sequencing and hierarchical assembly methods. We applied our sequencing method to the genome of an Asian individual and generated a 5.15-Gb assembled genome with a haplotype N50 of 484 kb. Our analysis identified previously undetected indels and 7.49 Mb of novel coding sequences that could not be aligned to the human reference genome, which include at least six predicted genes. This haplotype-resolved genome represents the most complete de novo human genome assembly to date. Application of our approach to identify individual haplotype differences should aid in translating genotypes to phenotypes for the development of personalized medicine.
Using Partial Genomic Fosmid Libraries for Sequencing CompleteOrganellar Genomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeal, Joel R.; Leebens-Mack, James H.; Arumuganathan, K.
2005-08-26
Organellar genome sequences provide numerous phylogenetic markers and yield insight into organellar function and molecular evolution. These genomes are much smaller in size than their nuclear counterparts; thus, their complete sequencing is much less expensive than total nuclear genome sequencing, making broader phylogenetic sampling feasible. However, for some organisms it is challenging to isolate plastid DNA for sequencing using standard methods. To overcome these difficulties, we constructed partial genomic libraries from total DNA preparations of two heterotrophic and two autotrophic angiosperm species using fosmid vectors. We then used macroarray screening to isolate clones containing large fragments of plastid DNA. A minimum tiling path of clones comprising the entire genome sequence of each plastid was selected, and these clones were shotgun-sequenced and assembled into complete genomes. Although this method worked well for both heterotrophic and autotrophic plants, nuclear genome size had a dramatic effect on the proportion of screened clones containing plastid DNA and, consequently, the overall number of clones that must be screened to ensure full plastid genome coverage. This technique makes it possible to determine complete plastid genome sequences for organisms that defy other available organellar genome sequencing methods, especially those for which limited amounts of tissue are available.
College Students' Justification for Digital Piracy: A Mixed Methods Study
ERIC Educational Resources Information Center
Yu, Szde
2012-01-01
A mixed methods project was devoted to understanding college students' justification for digital piracy. The project consisted of two studies, a qualitative one and a quantitative one. Qualitative interviews were conducted to identify main themes in students' justification for digital piracy, and then the findings were tested in a quantitative…
Digital PI-PD controller design for arbitrary order systems: Dominant pole placement approach.
Dincel, Emre; Söylemez, Mehmet Turan
2018-05-02
In this paper, a digital PI-PD controller design method is proposed for arbitrary-order systems with or without time delay to achieve the desired transient response in the closed loop via a dominant pole placement approach. The digital PI-PD controller design problem is solved by converting the original problem into a digital PID controller design problem. First, the digital PID controllers that assign the dominant poles to the desired locations are parametrized. Then the subset of digital PID controller parameters for which the remaining poles are located away from the dominant pole pair is found via Chebyshev polynomials. The obtained PID controller parameters are transformed into PI-PD controller parameters by considering the closed-loop controller zero, and the design is completed. The success of the proposed design method is first demonstrated on an example transfer function and compared with well-known PID controller methods from the literature through simulations. The design method is then implemented on a fan-and-plate laboratory system in a real environment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
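The dominant-pole bookkeeping behind such a design can be sketched with plain numpy: form the closed-loop characteristic polynomial of a discrete plant and controller, then inspect whether the largest-magnitude pole pair dominates. The plant, controller structure, and gains below are invented placeholders, not the paper's design procedure.

```python
import numpy as np

def closed_loop_poles(plant_num, plant_den, ctrl_num, ctrl_den):
    """Poles of the unity-feedback loop with plant B(z)/A(z) and controller
    N(z)/D(z): roots of A(z)*D(z) + B(z)*N(z)."""
    char = np.polyadd(np.polymul(plant_den, ctrl_den),
                      np.polymul(plant_num, ctrl_num))
    return np.roots(char)

# Invented example: a discretized second-order plant and a digital PID
# written as N(z)/D(z) with D(z) = z*(z - 1); gains are placeholders.
plant_num = [0.05, 0.04]              # B(z)
plant_den = [1.0, -1.6, 0.64]         # A(z)
Kp, Ki, Kd = 1.2, 0.3, 0.1
ctrl_num = [Kp + Ki + Kd, -(Kp + 2 * Kd), Kd]    # N(z)
ctrl_den = [1.0, -1.0, 0.0]                       # D(z) = z(z - 1)

poles = closed_loop_poles(plant_num, plant_den, ctrl_num, ctrl_den)
dominant = sorted(poles, key=lambda p: -abs(p))[:2]
print("closed-loop poles:", np.round(poles, 3))
print("largest-magnitude (dominant) pair:", np.round(dominant, 3))
```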
Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques
Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.
2013-01-01
Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.
Operative record using intraoperative digital data in neurosurgery.
Houkin, K; Kuroda, S; Abe, H
2000-01-01
The purpose of this study was to develop a new method for more efficient and accurate operative records using intra-operative digital data in neurosurgery, including macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera, and microscopic procedures were recorded using a microdigital camera attached to an operating microscope. Operative records were then recorded digitally and filed in a computer using image retouching software and database software. The time necessary for editing the digital data and completing the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as a database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can be achieved using the intra-operative data, including intra-operative photographs.
A BAC clone fingerprinting approach to the detection of human genome rearrangements
Krzywinski, Martin; Bosdet, Ian; Mathewson, Carrie; Wye, Natasja; Brebner, Jay; Chiu, Readman; Corbett, Richard; Field, Matthew; Lee, Darlene; Pugh, Trevor; Volik, Stas; Siddiqui, Asim; Jones, Steven; Schein, Jacquie; Collins, Collin; Marra, Marco
2007-01-01
We present a method, called fingerprint profiling (FPP), that uses restriction digest fingerprints of bacterial artificial chromosome clones to detect and classify rearrangements in the human genome. The approach uses alignment of experimental fingerprint patterns to in silico digests of the sequence assembly and is capable of detecting micro-deletions (1-5 kb) and balanced rearrangements. Our method has compelling potential for use as a whole-genome method for the identification and characterization of human genome rearrangements. PMID:17953769
Digital Refractometry of Piezoelectric Crystals.
Included in the report are a description of the program, classical methods for measuring the refractive index, the foundations of Digital Refractometry for isotropic and anisotropic materials, and the laboratory configuration for Digital Refractometry. In the final section of the…
Howard, Jeremy T; Pryce, Jennie E; Baes, Christine; Maltecca, Christian
2017-08-01
Traditionally, pedigree-based relationship coefficients have been used to manage the inbreeding and degree of inbreeding depression that exists within a population. The widespread incorporation of genomic information in dairy cattle genetic evaluations allows for the opportunity to develop and implement methods to manage populations at the genomic level. As a result, the realized proportion of the genome that 2 individuals share can be more accurately estimated instead of using pedigree information to estimate the expected proportion of shared alleles. Furthermore, genomic information allows genome-wide relationship or inbreeding estimates to be augmented to characterize relationships for specific regions of the genome. Region-specific stretches can be used to more effectively manage areas of low genetic diversity or areas that, when homozygous, result in reduced performance across economically important traits. The use of region-specific metrics should allow breeders to more precisely manage the trade-off between the genetic value of the progeny and undesirable side effects associated with inbreeding. Methods tailored toward more effectively identifying regions affected by inbreeding and their associated use to manage the genome at the herd level, however, still need to be developed. We have reviewed topics related to inbreeding, measures of relatedness, genetic diversity and methods to manage populations at the genomic level, and we discuss future challenges related to managing populations through implementing genomic methods at the herd and population levels. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
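The "realized proportion of the genome shared" mentioned above is commonly summarized with a genomic relationship matrix. The sketch below computes a VanRaden-style G matrix from simulated 0/1/2 genotypes as an illustration of that idea; the herd size, marker count, and allele frequencies are placeholders.

```python
import numpy as np

def genomic_relationship_matrix(genotypes):
    """VanRaden-style genomic relationship matrix from an (animals x SNPs)
    matrix of 0/1/2 allele counts: G = Z Z' / (2 * sum(p_i * (1 - p_i))),
    where Z centers each SNP column by twice its allele frequency."""
    p = genotypes.mean(axis=0) / 2.0           # observed allele frequencies
    Z = genotypes - 2.0 * p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# Simulated genotypes for a small herd (placeholder data).
rng = np.random.default_rng(42)
freqs = rng.uniform(0.05, 0.95, size=1000)
geno = rng.binomial(2, freqs, size=(50, 1000)).astype(float)

G = genomic_relationship_matrix(geno)
print("genomic inbreeding (diagonal - 1), first 5 animals:",
      np.round(np.diag(G)[:5] - 1.0, 3))
```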
Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo
2018-03-01
Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the anthocyanin concentrations in both floral and vegetative tissues. This method is efficient, completely noninvasive, applicable to both uniform and patterned color, and works with samples of any size.
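The extraction of simple color indices from RGB values can be sketched as below. The two indices shown (strength of green and a redness-style ratio) and the masked-region averaging are assumptions chosen for illustration; the paper defines its own anthocyanin indices, which are not reproduced here.

```python
import numpy as np

def color_indices(image_rgb, mask):
    """Mean RGB of the masked tissue region plus two illustrative indices:
    'strength of green' (G / (R+G+B)) and a simple redness-style ratio
    (R - G) / (R + G)."""
    pixels = image_rgb[mask].astype(float)
    r, g, b = pixels[:, 0].mean(), pixels[:, 1].mean(), pixels[:, 2].mean()
    strength_of_green = g / (r + g + b)
    red_green_ratio = (r - g) / (r + g)
    return strength_of_green, red_green_ratio

# Placeholder petal photo and mask (e.g. produced by thresholding away the
# background in an image-processing toolbox).
rng = np.random.default_rng(7)
photo = rng.integers(0, 256, size=(200, 200, 3), dtype=np.uint8)
petal_mask = np.zeros((200, 200), dtype=bool)
petal_mask[50:150, 50:150] = True

print(color_indices(photo, petal_mask))
```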
Single Cell-Based Vector Tracing in Patients with ADA-SCID Treated with Stem Cell Gene Therapy.
Igarashi, Yuka; Uchiyama, Toru; Minegishi, Tomoko; Takahashi, Sirirat; Watanabe, Nobuyuki; Kawai, Toshinao; Yamada, Masafumi; Ariga, Tadashi; Onodera, Masafumi
2017-09-15
Clinical improvement in stem cell gene therapy (SCGT) for primary immunodeficiencies depends on the engraftment levels of genetically corrected cells, and tracing the transgene in each hematopoietic lineage is therefore extremely important in evaluating the efficacy of SCGT. We established a single cell-based droplet digital PCR (sc-ddPCR) method consisting of the encapsulation of a single cell into each droplet, followed by emulsion PCR with primers and probes specific for the transgene. A fluorescent signal in a droplet indicates the presence of a single cell carrying the target gene in its genome, and this system can clearly determine the ratio of transgene-positive cells in the entire population at the genomic level. Using sc-ddPCR, we analyzed the engraftment of vector-transduced cells in two patients with severe combined immunodeficiency (SCID) who were treated with SCGT. Sufficient engraftment of the transduced cells was limited to the T cell lineage in peripheral blood (PB), and a small percentage of CD34 + cells exhibited vector integration in bone marrow, indicating that the transgene-positive cells in PB might have differentiated from a small population of stem cells or lineage-restricted precursor cells. sc-ddPCR is a simplified and powerful tool for the detailed assessment of transgene-positive cell distribution in patients treated with SCGT.
NASA Astrophysics Data System (ADS)
Yoshikazu, Kawata; Shin-Ichi, Yano; Hiroyuki, Kojima
1998-03-01
An efficient and simple method for constructing a genomic DNA library using a TA cloning vector is presented. It is based on sonication-mediated cleavage of genomic DNA and modification of the fragment ends with Taq DNA polymerase, followed by ligation into a TA vector. The method was applied to clone the phytoene synthase gene crtB from Spirulina platensis. It is useful when genomic DNA cannot be efficiently digested with restriction enzymes, a problem often encountered during the construction of genomic DNA libraries of cyanobacteria.
Accuracy of digital and analogue cephalometric measurements assessed with the sandwich technique.
Santoro, Margherita; Jarjoura, Karim; Cangialosi, Thomas J
2006-03-01
The purpose of the study was to evaluate the accuracy of cephalometric measurements obtained with digital tracing software compared with equivalent hand-traced measurements. In the sandwich technique, a storage phosphor plate and a conventional radiographic film are placed in the same cassette and exposed simultaneously. The method eliminates positioning errors and potential differences associated with multiple radiographic exposures that affected previous studies. It was used to ensure the equivalence of the digital images to the hard copy radiographs. Cephalometric measurements instead of landmarks were the focus of this investigation in order to acquire data with direct clinical applications. The sample consisted of digital and analog radiographic images from 47 patients after orthodontic treatment. Nine cephalometric landmarks were identified and 13 measurements calculated by 1 operator, both manually and with digital tracing software. Measurement error was assessed for each method by duplicating measurements of 25 randomly selected radiographs and by using Pearson's correlation coefficient. A paired t test was used to detect differences between the manual and digital methods. An overall greater variability in the digital cephalometric measurements was found. Differences between the 2 methods for SNA, ANB, S-Go:N-Me, U1/L1, L1-GoGn, and N-ANS:ANS-Me were statistically significant (P < .05). However, only the U1/L1 and S-Go:N-Me measurements showed differences greater than 2 SE (P < .0001). The 2 tracing methods provide similar clinical results; therefore, efficient digital cephalometric software can be reliably chosen as a routine diagnostic tool. The user-friendly sandwich technique was effective as an option for interoffice communications.
An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.
Shao, Mingfu; Lin, Yu; Moret, Bernard M E
2015-05-01
Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.
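The ILP of the paper targets genomes with duplicate genes; as background, the following is a minimal sketch of the classical linear-time case for circular genomes without duplicates, where the DCJ distance equals the number of genes minus the number of cycles in the adjacency graph.

```python
def dcj_distance_circular(genome_a, genome_b):
    """DCJ distance between two circular genomes on the same gene set,
    without duplicate genes; genomes are lists of signed integers."""
    def adjacencies(genome):
        adj, n = {}, len(genome)
        for i in range(n):
            a, b = genome[i], genome[(i + 1) % n]
            left = (abs(a), 'h') if a > 0 else (abs(a), 't')
            right = (abs(b), 't') if b > 0 else (abs(b), 'h')
            adj[left], adj[right] = right, left
        return adj

    adj_a, adj_b = adjacencies(genome_a), adjacencies(genome_b)
    seen, cycles = set(), 0
    for start in adj_a:
        if start in seen:
            continue
        cycles += 1
        ext = start
        while True:                       # walk the alternating A/B cycle
            partner = adj_a[ext]
            seen.update((ext, partner))
            ext = adj_b[partner]
            if ext == start:
                break
    return len(genome_a) - cycles

print(dcj_distance_circular([1, 2, 3, 4], [1, -3, -2, 4]))  # one inversion -> 1
```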
Gene context analysis in the Integrated Microbial Genomes (IMG) data management system.
Mavromatis, Konstantinos; Chu, Ken; Ivanova, Natalia; Hooper, Sean D; Markowitz, Victor M; Kyrpides, Nikos C
2009-11-24
Computational methods for determining the function of genes in newly sequenced genomes have traditionally been based on sequence similarity to genes whose function has been identified experimentally. Function prediction methods can be extended using gene context analysis approaches, such as examining the conservation of chromosomal gene clusters, gene fusion events and co-occurrence profiles across genomes. Context analysis is based on the observation that functionally related genes often have similar gene contexts, and it relies on the identification of such events across a phylogenetically diverse collection of genomes. We have used the data management system of the Integrated Microbial Genomes (IMG) as the framework to implement and explore the power of gene context analysis methods because it provides one of the largest available genome integrations. Visualization and search tools to facilitate gene context analysis have been developed and applied across all publicly available archaeal and bacterial genomes in IMG. These computations are now maintained as part of IMG's regular genome content update cycle. IMG is available at: http://img.jgi.doe.gov.
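As an illustration of one of the gene context signals mentioned above (co-occurrence profiles), here is a minimal sketch that scores two genes by the Jaccard similarity of their presence/absence profiles across genomes; the gene and genome identifiers are hypothetical, and this is not IMG code.

```python
def cooccurrence_similarity(profile_a, profile_b):
    """Jaccard similarity of two presence/absence profiles,
    each given as the set of genome identifiers containing the gene."""
    if not profile_a and not profile_b:
        return 0.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

# hypothetical presence/absence profiles across five genomes
genomes_with_geneX = {"g1", "g2", "g4"}
genomes_with_geneY = {"g1", "g2", "g4", "g5"}
genomes_with_geneZ = {"g3"}

print(cooccurrence_similarity(genomes_with_geneX, genomes_with_geneY))  # 0.75 -> likely functionally related
print(cooccurrence_similarity(genomes_with_geneX, genomes_with_geneZ))  # 0.0  -> no co-occurrence signal
```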
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
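As a rough illustration of one of the listed methods, large periodic high-frequency dithering, the following sketch simulates a low-resolution converter with mismatched output levels and compares the deviation of its static characteristic from a straight-line fit with and without averaging over a triangular dither period; all parameters are illustrative and this is not the experimental setup of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = 4
levels = np.arange(2 ** bits, dtype=float)
levels += rng.normal(scale=0.2, size=levels.size)       # element mismatch (source of INL)

def dac(code):
    return levels[np.clip(code, 0, 2 ** bits - 1)]

def quantize(x):
    return np.round(x).astype(int)

# triangular dither spanning several codes, averaged over one full period
dither = np.concatenate([np.linspace(-2, 2, 50), np.linspace(2, -2, 50)])

x = np.linspace(2, 13, 500)                              # slow input ramp (in code units)
plain = dac(quantize(x))                                 # undithered static characteristic
dithered = np.array([dac(quantize(xi + dither)).mean() for xi in x])

def peak_nonlinearity(y):
    fit = np.polyval(np.polyfit(x, y, 1), x)             # best straight-line fit
    return np.max(np.abs(y - fit))

print("peak deviation from linear, plain   :", round(peak_nonlinearity(plain), 3))
print("peak deviation from linear, dithered:", round(peak_nonlinearity(dithered), 3))
```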
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
... Subcommittee On Digital I&C Systems The ACRS Subcommittee on Digital Instrumentation and Control (DI&C) Systems... the area of Digital Instrumentation and Control (DI&C) Probabilistic Risk Assessment (PRA). Topics... software reliability methods (QSRMs), NUREG/CR-6997, "Modeling a Digital Feedwater Control System Using...
A digital repository with an extensible data model for biobanking and genomic analysis management.
Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi
2014-01-01
Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations and data sharing among institutions increases. A single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped into a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data are described by a set of user-defined metadata and may have one or more associated files. We integrated the model into a web-based digital repository with data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples from over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata for information sharing in specific research projects and purposes. This approach can significantly improve interdisciplinary research collaboration and makes it possible to track patients' clinical records, sample management information, and genomic data. The web interface allows operators to easily manage, query, and annotate the files without dealing with the technicalities of the data grid.
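A minimal sketch of the kind of process/event hierarchy described above, written as a Python dictionary mirroring the JSON data model; the field names and values are illustrative assumptions, not the repository's actual schema.

```python
import json

# hypothetical example of a process (study) containing sequential events (analysis steps)
study = {
    "type": "process",
    "name": "neuroblastoma_biomarker_study",
    "events": [
        {
            "type": "event",
            "name": "sample_collection",
            "metadata": {"tissue": "bone marrow", "collected_on": "2013-05-14"},
            "files": [],
        },
        {
            "type": "event",
            "name": "microarray_analysis",
            "metadata": {"platform": "illustrative-array-v1", "operator": "lab_tech_01"},
            "files": ["sample_042_expression.cel"],   # data produced by this event
        },
    ],
}

print(json.dumps(study, indent=2))   # serialize to the JSON form the model describes
```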
A Distance Measure for Genome Phylogenetic Analysis
NASA Astrophysics Data System (ADS)
Cao, Minh Duc; Allison, Lloyd; Dix, Trevor
Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes, which is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not feasible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, requires a measure of distance between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information-theoretic measure of genetic distance between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, whose statistical bias would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group, whose genomes are known to contain many horizontally transferred genes.
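The study's measure is built on the expert-model compression algorithm; as an illustration of the same information-theoretic idea, here is a sketch of a normalized compression distance using a general-purpose compressor (zlib), which is not the authors' measure.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller values mean more related sequences."""
    cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# toy sequences (hypothetical): a genome, a lightly mutated copy, and an unrelated one
a = b"ACGTACGTTGCAACGT" * 50
b = b"ACGTACGATGCAACGT" * 50
c = b"TTGGCCAATTAACCGG" * 50
print("related:", round(ncd(a, b), 3), " unrelated:", round(ncd(a, c), 3))
```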
Comparison of Methods of Detection of Exceptional Sequences in Prokaryotic Genomes.
Rusinov, I S; Ershova, A S; Karyagina, A S; Spirin, S A; Alexeevski, A V
2018-02-01
Many proteins need recognition of specific DNA sequences for functioning. The number of recognition sites and their distribution along the DNA might be of biological importance. For example, the number of restriction sites is often reduced in prokaryotic and phage genomes to decrease the probability of DNA cleavage by restriction endonucleases. We call a sequence exceptional if its frequency in a genome significantly differs from the one predicted by some mathematical model. An exceptional sequence could be either under- or over-represented, depending on its frequency in comparison with the predicted one. Exceptional sequences could be considered biologically meaningful, for example, as targets of DNA-binding proteins or as parts of abundant repetitive elements. Several methods are used to predict the frequency of a short sequence in a genome based on the actual frequencies of certain of its subsequences. The most popular are methods based on Markov chain models, but no rigorous comparison of the methods has previously been performed. We compared three methods for the prediction of short sequence frequencies: the maximum-order Markov chain model-based method, the method that uses the geometric mean of extended Markovian estimates, and the method that utilizes frequencies of all subsequences, including discontiguous ones. We applied them to restriction sites in complete genomes of 2500 prokaryotic species and demonstrated that the results depend greatly on the method used: lists of the 5% most under-represented sites differed by up to 50%. The method designed by Burge and coauthors in 1992, which utilizes all subsequences of the sequence, showed higher precision than the other two methods, both on prokaryotic genomes and on randomly generated sequences after computational imitation of selective pressure. We propose this method as the first choice for detection of exceptional sequences in prokaryotic genomes.
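As an illustration of the first of the three approaches, here is a minimal sketch of the maximum-order Markov estimate, in which the expected count of a word is approximated by count(prefix) × count(suffix) / count(core); for brevity it ignores the reverse strand and genome circularity, and the toy sequence is hypothetical.

```python
def count_overlapping(seq, word):
    """Number of (possibly overlapping) occurrences of word in seq."""
    return sum(1 for i in range(len(seq) - len(word) + 1) if seq[i:i + len(word)] == word)

def markov_expected_count(seq, word):
    """Maximum-order Markov estimate: count(prefix) * count(suffix) / count(core)."""
    prefix, suffix, core = word[:-1], word[1:], word[1:-1]
    core_count = count_overlapping(seq, core)
    if core_count == 0:
        return 0.0
    return count_overlapping(seq, prefix) * count_overlapping(seq, suffix) / core_count

# toy sequence and a hypothetical restriction site
genome = "GAATTCAGGAATACGAATTACCGGAATTC"
site = "GAATTC"
observed = count_overlapping(genome, site)
expected = markov_expected_count(genome, site)
print(observed, round(expected, 2), round(observed / expected, 2))  # a ratio < 1 suggests under-representation
```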
A Patient-Centered Framework for Evaluating Digital Maturity of Health Services: A Systematic Review
Callahan, Ryan; Darzi, Ara; Mayer, Erik
2016-01-01
Background Digital maturity is the extent to which digital technologies are used as enablers to deliver a high-quality health service. Extensive literature exists about how to assess the components of digital maturity, but it has not been used to design a comprehensive framework for evaluation. Consequently, the measurement systems that do exist are limited to evaluating digital programs within one service or care setting, meaning that digital maturity evaluation is not accounting for the needs of patients across their care pathways. Objective The objective of our study was to identify the best methods and metrics for evaluating digital maturity and to create a novel, evidence-based tool for evaluating digital maturity across patient care pathways. Methods We systematically reviewed the literature to find the best methods and metrics for evaluating digital maturity. We searched the PubMed database for all papers relevant to digital maturity evaluation. Papers were selected if they provided insight into how to appraise digital systems within the health service and if they indicated the factors that constitute or facilitate digital maturity. Papers were analyzed to identify methodology for evaluating digital maturity and indicators of digitally mature systems. We then used the resulting information about methodology to design an evaluation framework. Following that, the indicators of digital maturity were extracted and grouped into increasing levels of maturity and operationalized as metrics within the evaluation framework. Results We identified 28 papers as relevant to evaluating digital maturity, from which we derived 5 themes. The first theme concerned general evaluation methodology for constructing the framework (7 papers). The following 4 themes were the increasing levels of digital maturity: resources and ability (6 papers), usage (7 papers), interoperability (3 papers), and impact (5 papers). The framework includes metrics for each of these levels at each stage of the typical patient care pathway. Conclusions The framework uses a patient-centric model that departs from traditional service-specific measurements and allows for novel insights into how digital programs benefit patients across the health system. Trial Registration N/A PMID:27080852
Detecting Copy Move Forgery In Digital Images
NASA Astrophysics Data System (ADS)
Gupta, Ashima; Saxena, Nisheeth; Vasistha, S. K.
2012-03-01
Several image manipulation software packages are now widely available, and the manipulation of digital images has become a serious problem. In many areas, such as medical imaging, digital forensics, journalism, and scientific publications, image forgery can be carried out very easily. Determining whether a digital image is original or doctored, and finding the marks of tampering, is a challenging task. Detection methods can be very useful in image forensics, where they can serve as evidence for the authenticity of a digital image. In this paper we propose a method to detect region-duplication forgery by dividing the image into overlapping blocks and then searching for duplicated regions within the image.
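A minimal sketch of the overlapping-block idea described above: identical blocks found at different positions are reported as candidate duplicated regions. Practical detectors compare robust block features (for example DCT coefficients) rather than raw pixels; the image here is synthetic.

```python
import numpy as np

def find_duplicate_blocks(gray, block=8):
    """Return pairs of top-left corners whose raw pixel blocks are identical."""
    seen, matches = {}, []
    h, w = gray.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = gray[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# toy grayscale image with a copied 8x8 patch (synthetic data)
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]            # simulate a copy-move forgery
print(find_duplicate_blocks(img)[:3])           # e.g. [((8, 8), (40, 40))]
```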
NASA Astrophysics Data System (ADS)
Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu
2016-10-01
Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) images acquired by the autostereoscopic system, which consists of multiple lenses and can provide information on the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method efficiently performs direct full 3D digital model construction through a tomography-like operation on every depth plane while excluding defocused information. With the in-focus information processed by the DEDI method, the 3D digital model of the target can be formed directly and precisely along the axial direction together with the depth information.
Spectral colors capture and reproduction based on digital camera
NASA Astrophysics Data System (ADS)
Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang
2018-01-01
The purpose of this work is to develop a method for the accurate reproduction of the spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera and the CIEXYZ color space. The study also provides a basis for further work on spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photograph of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location, one calculated with the grating equation and one measured by a spectrophotometer. The polynomial fitting method was used for camera characterization to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. Using the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors on digital devices, such as display and transmission.
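As an illustration of the polynomial fitting step, here is a minimal sketch of a least-squares second-order polynomial mapping from camera RGB to CIEXYZ; the training data are placeholders, not the measurements of the study.

```python
import numpy as np

def poly_terms(rgb):
    """Second-order polynomial expansion of RGB values (N x 3 -> N x 10)."""
    r, g, b = rgb.T
    ones = np.ones_like(r)
    return np.column_stack([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b, ones])

def fit_characterization(rgb_train, xyz_train):
    """Least-squares polynomial mapping from camera RGB to CIEXYZ."""
    coeffs, *_ = np.linalg.lstsq(poly_terms(rgb_train), xyz_train, rcond=None)
    return coeffs

def rgb_to_xyz(rgb, coeffs):
    return poly_terms(np.atleast_2d(rgb)) @ coeffs

# placeholder training data: camera RGB of colour patches and synthetic "measured" XYZ
rgb_train = np.random.default_rng(2).random((24, 3))
xyz_train = rgb_train @ np.array([[0.41, 0.21, 0.02],
                                  [0.36, 0.72, 0.12],
                                  [0.18, 0.07, 0.95]])
coeffs = fit_characterization(rgb_train, xyz_train)
print(np.round(rgb_to_xyz(np.array([0.5, 0.4, 0.3]), coeffs), 3))
```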
An instructional guide for leaf color analysis using digital imaging software
Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg
2005-01-01
Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...
ERIC Educational Resources Information Center
Gallagher, Kathleen; Freeman, Barry
2011-01-01
This article explores the possibilities and frustrations of using digital methods in a multi-sited ethnographic research project. The project, "Urban School Performances: The interplay, through live and digital drama, of local-global knowledge about student engagement", is a study of youth and teachers in drama classrooms in contexts of…
Handwritten digit recognition using HMM and PSO based on strokes
NASA Astrophysics Data System (ADS)
Yan, Liao; Jia, Zhenhong; Yang, Jie; Pang, Shaoning
2010-07-01
A new method for handwritten digit recognition based on a hidden Markov model (HMM) and particle swarm optimization (PSO) is proposed. The method defines 24 directional strokes, which compensates for the sensitivity of traditional methods to the choice of starting point and also reduces the ambiguity caused by hand shake. It makes use of the excellent global convergence of PSO, which improves the probability of finding the global optimum and avoids local minima. Experimental results demonstrate that, compared with traditional methods, the proposed method improves the recognition rate for handwritten digits.
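As an illustration of the optimization component, here is a minimal generic particle swarm optimizer minimizing a toy function; in the paper, PSO would instead search HMM parameters, so this is not the authors' implementation.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing `objective`."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)]                       # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p ** 2), dim=4)     # toy objective
print(np.round(best, 3), round(best_val, 6))
```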
Galaev, A V; Babaiants, L T; Sivolap, Iu M
2003-01-01
Comparative analysis of introgressive and parental forms of wheat was carried out to reveal the sites of the donor genome carrying new loci of resistance to fungal diseases. Using the ISSR method, 124 ISSR loci were detected in the genomes of 18 individual plants of the introgressive line 5/20-91; 17 of them were related to introgressed fragments of the Ae. cylindrica genome in T. aestivum. The ISSR method was shown to be effective for detecting variability caused by the introgression of alien genetic material into the T. aestivum genome.
Fast ancestral gene order reconstruction of genomes with unequal gene content.
Feijão, Pedro; Araujo, Eloi
2016-11-11
During evolution, genomes are modified by large-scale structural events, such as rearrangements, deletions or insertions of large blocks of DNA. Of particular interest, in order to better understand how this type of genomic evolution happens, is the reconstruction of ancestral genomes, given a phylogenetic tree with extant genomes at its leaves. One way of solving this problem is to assume a rearrangement model, such as Double Cut and Join (DCJ), and find a set of ancestral genomes that minimizes the number of events on the input tree. Since this problem is NP-hard for most rearrangement models, exact solutions are practical only for small instances, and heuristics have to be used for larger datasets. This type of approach can be called event-based. Another common approach is based on finding conserved structures between the input genomes, such as adjacencies between genes, possibly also assigning weights that indicate a measure of confidence or probability that a particular structure is present in each ancestral genome, and then finding a set of non-conflicting adjacencies that optimize some given function, usually trying to maximize total weight while minimizing character changes in the tree. We call this type of method homology-based. In previous work, we proposed an ancestral reconstruction method that combines homology- and event-based ideas, using the concept of intermediate genomes, which arise in DCJ rearrangement scenarios. This method showed a better rate of correctly reconstructed adjacencies than other methods, while also being faster, since the use of intermediate genomes greatly reduces the search space. Here, we generalize the intermediate genome concept to genomes with unequal gene content, extending our method to account for gene insertions and deletions of any length. In many of the simulated datasets, our proposed method had better results than MLGO and MGRA, two state-of-the-art algorithms for ancestral reconstruction with unequal gene content, while running much faster, making it more scalable to larger datasets. Studying ancestral reconstruction problems in a new light, using the concept of intermediate genomes, allows the design of very fast algorithms by greatly reducing the solution search space, while also giving very good results. The algorithms introduced in this paper were implemented in an open-source software package called RINGO (ancestral Reconstruction with INtermediate GenOmes), available at https://github.com/pedrofeijao/RINGO.
Ultrafast Comparison of Personal Genomes via Precomputed Genome Fingerprints
Glusman, Gustavo; Mauldin, Denise E.; Hood, Leroy E.; Robinson, Max
2017-01-01
We present an ultrafast method for comparing personal genomes. We transform the standard genome representation (lists of variants relative to a reference) into “genome fingerprints” via locality sensitive hashing. The resulting genome fingerprints can be meaningfully compared even when the input data were obtained using different sequencing technologies, processed using different pipelines, represented in different data formats and relative to different reference versions. Furthermore, genome fingerprints are robust to up to 30% missing data. Because of their reduced size, computation on the genome fingerprints is fast and requires little memory. For example, we could compute all-against-all pairwise comparisons among the 2504 genomes in the 1000 Genomes data set in 67 s at high quality (21 μs per comparison, on a single processor), and achieved a lower quality approximation in just 11 s. Efficient computation enables scaling up a variety of important genome analyses, including quantifying relatedness, recognizing duplicative sequenced genomes in a set, population reconstruction, and many others. The original genome representation cannot be reconstructed from its fingerprint, effectively decoupling genome comparison from genome interpretation; the method thus has significant implications for privacy-preserving genome analytics. PMID:29018478
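The fingerprinting scheme of the paper is its own construction; as an illustration of the general locality-sensitive-hashing idea it builds on, here is a sketch that summarizes variant sets with MinHash signatures whose slot-wise agreement estimates the Jaccard similarity of the genomes; the variant lists are hypothetical.

```python
import hashlib

def minhash_signature(variants, k=128):
    """MinHash signature of a set of variant strings (e.g. 'chr1:12345:A>G')."""
    sig = []
    for i in range(k):
        salt = str(i).encode()
        sig.append(min(int(hashlib.sha1(salt + v.encode()).hexdigest(), 16) for v in variants))
    return sig

def signature_similarity(sig_a, sig_b):
    """Fraction of agreeing hash slots, an estimate of the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# hypothetical variant lists of two related genomes
genome1 = {f"chr1:{pos}:A>G" for pos in range(0, 5000, 10)}
genome2 = set(list(genome1)[:400]) | {f"chr2:{pos}:C>T" for pos in range(0, 1000, 10)}
s1, s2 = minhash_signature(genome1), minhash_signature(genome2)
print(round(signature_similarity(s1, s2), 2))   # approximates the Jaccard similarity of the variant sets
```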
An interactive method for digitizing zone maps
NASA Technical Reports Server (NTRS)
Giddings, L. E.; Thompson, E. J.
1975-01-01
A method is presented for digitizing maps that consist of zones, such as contour or climatic zone maps. A color-coded map is prepared by any convenient process. The map is then read into memory of an Image 100 computer by means of its table scanner, using colored filters. Zones are separated and stored in themes, using standard classification procedures. Thematic data are written on magnetic tape and these data, appropriately coded, are combined to make a digitized image on tape. Step-by-step procedures are given for digitization of crop moisture index maps with this procedure. In addition, a complete example of the digitization of a climatic zone map is given.
[Comparison of digital and visual methods for Ki-67 assessment in invasive breast carcinomas].
Kushnarev, V A; Artemyeva, E S; Kudaybergenova, A G
2018-01-01
The aim was to compare two methods for quantitative assessment of the proliferative activity index (PAI): visual estimation by several investigators and digital image analysis (DIA). The use of the Ki-67 index in the daily clinical practice of a morbid anatomy department is associated with the problem of reproducibility of quantitative assessment of the Ki-67 PAI. With the development of digital imaging techniques in morphology, new methods for PAI evaluation using DIA have been proposed. The Ki-67 PAI data obtained by visual assessment and by digital image analysis were compared in 104 cases of grade 2-3 breast carcinoma. The histological sections were scanned using a Panoramic III scanner (3D Histech, Hungary) and digital images were obtained. DIA was carried out using the 3D Histech QuantCenter software (3D Histech, Hungary), by marking 3-10 zones. Evaluation of the sections was done independently by two investigators engaged in cancer pathology. The level of agreement between the visual and digital methods did not differ significantly (p>0.001). The authors identified a gray area in the range of 10-35% PAI, where the Ki-67 index showed a weak relationship between the analyzed groups (ICC, 0.47). Ki-67 indices below 10% and above 35% showed sufficient reproducibility within the same laboratory. The authors consider that the scanned digital form of a histological section, which can be evaluated using automated software analysis modules, is an independent and objective method to assess proliferative activity for Ki-67 index validation.
Prinz, I; Nubel, K; Gross, M
2002-09-01
Until now, the assumed benefits of digital hearing aids have been reflected only in subjective reports by patients with hearing aids and could not be documented adequately by routine diagnostic methods. Seventeen schoolchildren with moderately severe bilateral symmetrical sensorineural hearing loss were examined in a double-blinded crossover study. Differences in performance between a fully digital hearing aid (DigiFocus compact/Oticon) and an analog, digitally programmable two-channel hearing aid were evaluated. Of the 17 children, 13 chose the digital and 4 the analog hearing aid. In contrast to the clear subjective preference for the fully digital hearing aid, we could not obtain any significant results with routine diagnostic methods. Using the "virtual hearing aid," a subjective comparison and a speech recognition performance task yielded significant differences. The virtual hearing aid proved to be suitable for a direct comparison of different hearing aids and can be used for double-blind testing in a pediatric population.
Anonymizing patient genomic data for public sharing association studies.
Fernandez-Lozano, Carlos; Lopez-Campos, Guillermo; Seoane, Jose A; Lopez-Alonso, Victoria; Dorado, Julian; Martín-Sanchez, Fernando; Pazos, Alejandro
2013-01-01
The development of personalized medicine is tightly linked with the correct exploitation of molecular data, especially those associated with the genome sequence, and along with this use of genomic data there is an increasing demand to share these data for research purposes. The transition of clinical data to research relies on anonymization so that the patient cannot be identified; genomic data pose a particular challenge because they are inherently identifying. In this work we analyze current methods for genome anonymization and propose a one-way encryption method that may enable genomic data sharing while granting access only to certain regions of the genome for research purposes.
Methods of Genomic Competency Integration in Practice
Jenkins, Jean; Calzone, Kathleen A.; Caskey, Sarah; Culp, Stacey; Weiner, Marsha; Badzek, Laurie
2015-01-01
Purpose Genomics is increasingly relevant to health care, necessitating support for nurses to incorporate genomic competencies into practice. The primary aim of this project was to develop, implement, and evaluate a year-long genomic education intervention that trained, supported, and supervised institutional administrator and educator champion dyads to increase nursing capacity to integrate genomics through assessments of program satisfaction and institutional achieved outcomes. Design Longitudinal study of 23 Magnet Recognition Program® Hospitals (21 intervention, 2 controls) participating in a 1-year new competency integration effort aimed at increasing genomic nursing competency and overcoming barriers to genomics integration in practice. Methods Champion dyads underwent genomic training consisting of one in-person kick-off training meeting followed by monthly education webinars. Champion dyads designed institution-specific action plans detailing objectives, methods or strategies used to engage and educate nursing staff, timeline for implementation, and outcomes achieved. Action plans focused on a minimum of seven genomic priority areas: champion dyad personal development; practice assessment; policy content assessment; staff knowledge needs assessment; staff development; plans for integration; and anticipated obstacles and challenges. Action plans were updated quarterly, outlining progress made as well as inclusion of new methods or strategies. Progress was validated through virtual site visits with the champion dyads and chief nursing officers. Descriptive data were collected on all strategies or methods utilized, and timeline for achievement. Descriptive data were analyzed using content analysis. Findings The complexity of the competency content and the uniqueness of social systems and infrastructure resulted in a significant variation of champion dyad interventions. Conclusions Nursing champions can facilitate change in genomic nursing capacity through varied strategies but require substantial training in order to design and implement interventions. Clinical Relevance Genomics is critical to the practice of all nurses. There is a great opportunity and interest to address genomic knowledge deficits in the practicing nurse workforce as a strategy to improve patient outcomes. Exemplars of champion dyad interventions designed to increase nursing capacity focus on improving education, policy, and healthcare services. PMID:25808828
Grundlingh, A A; Grossman, E S; Shrivastava, S; Witcomb, M J
2013-10-01
This study compared digital and visual tooth colour assessment methods in a sample of 99 teeth consisting of incisors, canines and pre-molars. The teeth were equally divided between Control, Ozicure Oxygen Activator bleach and Opalescence Quick bleach and subjected to three treatments. Colour readings were recorded at nine intervals by two assessment methods, VITA Easyshade and the VITAPAN 3D MASTER TOOTH GUIDE, giving a total of 1782 colour readings. Descriptive and statistical analysis was undertaken using a GLM test for Analysis of Variance for a Fractional Design set at a significance of P < 0.05. Atomic force microscopy was used to examine treated enamel surfaces and establish surface roughness. Visual tooth colour assessment showed significance for the independent variables of treatment, number of treatments, tooth type and the combination of tooth type and treatment. Digital colour assessment indicated treatment and tooth type to be of significance in tooth colour change. Poor agreement was found between visual and digital colour assessment methods for the Control and Ozicure Oxygen Activator treatments. Surface roughness values increased two-fold for Opalescence Quick specimens over the two other treatments, implying that increased light scattering improved digital colour reading. Both digital and visual colour matching methods should be used in tooth bleaching studies to complement each other and to compensate for deficiencies.
Zahid, Sarwar; Peeler, Crandall; Khan, Naheed; Davis, Joy; Mahmood, Mahdi; Heckenlively, John; Jayasundera, Thiran
2015-01-01
Purpose To develop a reliable and efficient digital method to quantify planimetric Goldmann visual field (GVF) data to monitor disease course and treatment responses in retinal degenerative diseases. Methods A novel method to digitally quantify GVF using Adobe Photoshop CS3 was developed for comparison to traditional digital planimetry (Placom 45C digital planimeter; EngineerSupply, Lynchburg, Virginia, USA). GVFs from 20 eyes from 10 patients with Stargardt disease were quantified to assess the difference between the two methods (a total of 230 measurements per method). This quantification approach was also applied to 13 patients with X-linked retinitis pigmentosa (XLRP) with mutations in RPGR. Results Overall, measurements using Adobe Photoshop were more rapidly performed than those using conventional planimetry. Photoshop measurements also exhibited less inter- and intra-observer variability. GVF areas for the I4e isopter in patients with the same mutation in RPGR who were nearby in age had similar qualitative and quantitative areas. Conclusions Quantification of GVF using Adobe Photoshop is quicker, more reliable, and less-user dependent than conventional digital planimetry. It will be a useful tool for both retrospective and prospective studies of disease course as well as for monitoring treatment response in clinical trials for retinal degenerative diseases. PMID:24664690
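As an illustration of how a digitized isopter outline becomes an area, here is a minimal sketch using the shoelace formula on traced boundary coordinates; the coordinates are hypothetical and this is not the Photoshop workflow of the study.

```python
def polygon_area(points):
    """Shoelace formula for the area enclosed by a digitized outline
    given as (x, y) vertices in tracing order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# hypothetical isopter trace in pixel coordinates
trace = [(0, 0), (40, 5), (55, 30), (35, 60), (5, 45)]
area_px = polygon_area(trace)
print(area_px)   # multiply by (mm_per_pixel ** 2) to convert to a physical area
```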
Digital signal processor and processing method for GPS receivers
NASA Technical Reports Server (NTRS)
Thomas, Jr., Jess B. (Inventor)
1989-01-01
A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consists of an all-digital, minimum bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase, thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.
Ooi, Delicia Shu Qin; Tan, Verena Ming Hui; Ong, Siong Gim; Chan, Yiong Huak; Heng, Chew Kiat; Lee, Yung Seng
2017-01-01
The human salivary (AMY1) gene, encoding salivary α-amylase, has variable copy number variants (CNVs) in the human genome. We aimed to determine if real-time quantitative polymerase chain reaction (qPCR) and the more recently available Droplet Digital PCR (ddPCR) can provide a precise quantification of the AMY1 gene copy number in blood, buccal cells and saliva samples derived from the same individual. Seven participants were recruited and DNA was extracted from the blood, buccal cells and saliva samples provided by each participant. Taqman assay real-time qPCR and ddPCR were conducted to quantify AMY1 gene copy numbers. Statistical analysis was carried out to determine the difference in AMY1 gene copy number between the different biological specimens and different assay methods. We found significant within-individual difference (p<0.01) in AMY1 gene copy number between different biological samples as determined by qPCR. However, there was no significant within-individual difference in AMY1 gene copy number between different biological samples as determined by ddPCR. We also found that AMY1 gene copy number of blood samples were comparable between qPCR and ddPCR, while there is a significant difference (p<0.01) between AMY1 gene copy numbers measured by qPCR and ddPCR for both buccal swab and saliva samples. Despite buccal cells and saliva samples being possible sources of DNA, it is pertinent that ddPCR or a single biological sample, preferably blood sample, be used for determining highly polymorphic gene copy numbers like AMY1, due to the large within-individual variability between different biological samples if real time qPCR is employed.
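As an illustration of how droplet counts become copy numbers, here is a minimal sketch of the standard Poisson correction used in ddPCR, with the AMY1 copy number taken as twice the target-to-reference concentration ratio; the droplet counts are illustrative.

```python
import math

def copies_per_droplet(positive, total):
    """Poisson-corrected mean target copies per droplet: lambda = -ln(1 - p)."""
    p = positive / total
    return -math.log(1.0 - p)

def copy_number(target_pos, ref_pos, total, ref_copies=2):
    """Target gene copy number relative to a two-copy reference gene."""
    return ref_copies * copies_per_droplet(target_pos, total) / copies_per_droplet(ref_pos, total)

# illustrative droplet counts from one well
print(round(copy_number(target_pos=9000, ref_pos=3500, total=15000), 2))  # ~6.9 copies
```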
Efficient digitalization method for dental restorations using micro-CT data
NASA Astrophysics Data System (ADS)
Kim, Changhwan; Baek, Seung Hoon; Lee, Taewon; Go, Jonggun; Kim, Sun Young; Cho, Seungryong
2017-03-01
The objective of this study was to demonstrate the feasibility of using micro-CT scans of dental impressions for fabricating dental restorations and to compare the dimensional accuracy of dental models generated by various methods. The key idea of the proposed protocol is that a patient's dental impression can be accurately digitized by micro-CT scanning and that a digital cast model can be made directly from the micro-CT data. Because the air regions of the micro-CT scan of the dental impression correspond to the real teeth and surrounding structures, one can segment the air regions and fabricate a digital cast model in STL format from them. The proposed method was validated in a phantom study using a typodont with prepared teeth. Actual measurements and deviation map analysis were performed after acquiring digital cast models for each restoration method. Comparisons of the milled restorations were also performed by placing them on the prepared teeth of the typodont. The results demonstrated that efficient fabrication of precise dental restorations is achievable using the proposed method.
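A minimal sketch of the segment-the-air-and-export idea, assuming scikit-image for surface extraction and a hand-written ASCII STL writer; the synthetic volume and threshold are placeholders, not the study's scanning protocol.

```python
import numpy as np
from skimage import measure

def air_regions_to_stl(volume, threshold, path):
    """Extract the surface of low-intensity (air) voxels and write an ASCII STL file."""
    air = (volume < threshold).astype(np.uint8)
    verts, faces, normals, _ = measure.marching_cubes(air, level=0.5)
    with open(path, "w") as f:
        f.write("solid cast\n")
        for tri in faces:
            n = normals[tri].mean(axis=0)   # approximate facet normal from vertex normals
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in verts[tri]:
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid cast\n")

# synthetic "micro-CT" volume: a low-intensity sphere inside denser material
z, y, x = np.mgrid[:64, :64, :64]
volume = np.where((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2, 100, 900)
air_regions_to_stl(volume, threshold=500, path="cast_model.stl")
```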
Bevilacqua, Elisa; Jani, Jacques C; Letourneau, Alexandra; Duiella, Silvia F; Kleinfinger, Pascale; Lohmann, Laurence; Resta, Serena; Cos Sanchez, Teresa; Fils, Jean-François; Mirra, Marilyn; Benachi, Alexandra; Costa, Jean-Marc
2018-06-13
To evaluate the failure rate and performance of cell-free DNA (cfDNA) testing, mainly in terms of detection rates for trisomy 21, performed by 2 laboratories using different analytical methods. cfDNA testing was performed on 2,870 pregnancies with the HarmonyTM Prenatal Test using the targeted digital analysis of selected regions (DANSR) method, and on 2,635 pregnancies with the "Cerba test" using the genome-wide massively parallel sequencing (GW-MPS) method, with available outcomes. Propensity score analysis was used to match patients between the 2 groups. A comparison of the detection rates for trisomy 21 between the 2 laboratories was made. In all, 2,811 patients in the Harmony group and 2,530 patients in the Cerba group had no trisomy 21, 18, or 13. Postmatched comparisons of the patient characteristics indicated a higher no-result rate in the Harmony group (1.30%) than in the Cerba group (0.75%; p = 0.039). All 41 cases of trisomy 21 in the Harmony group and 93 cases in the Cerba group were detected. Both methods of cfDNA testing showed low no-result rates and a comparable performance in detecting trisomy 21; yet GW-MPS had a slightly lower no-result rate than the DANSR method. © 2018 S. Karger AG, Basel.
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only wider diffusion and online transmission, but also the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly texturized flat materials. The technical obsolescence of high-end scanners and the improvements achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the properties of the original item as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of the reproduction workflow applied to the digitization of valuable art pieces and glass plate photographic black-and-white negatives.
Finding the Genomic Basis of Local Adaptation: Pitfalls, Practical Solutions, and Future Directions.
Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Antolin, Michael F; Bradburd, Gideon; Lowry, David B; Poss, Mary L; Reed, Laura K; Storfer, Andrew; Whitlock, Michael C
2016-10-01
Uncovering the genetic and evolutionary basis of local adaptation is a major focus of evolutionary biology. The recent development of cost-effective methods for obtaining high-quality genome-scale data makes it possible to identify some of the loci responsible for adaptive differences among populations. Two basic approaches for identifying putatively locally adaptive loci have been developed and are broadly used: one that identifies loci with unusually high genetic differentiation among populations (differentiation outlier methods) and one that searches for correlations between local population allele frequencies and local environments (genetic-environment association methods). Here, we review the promises and challenges of these genome scan methods, including correcting for the confounding influence of a species' demographic history, biases caused by missing aspects of the genome, matching scales of environmental data with population structure, and other statistical considerations. In each case, we make suggestions for best practices for maximizing the accuracy and efficiency of genome scans to detect the underlying genetic basis of local adaptation. With attention to their current limitations, genome scan methods can be an important tool in finding the genetic basis of adaptive evolutionary change.
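As an illustration of the differentiation-outlier approach mentioned above, here is a minimal sketch using Hudson's per-locus FST estimator with a crude outlier threshold; the allele frequencies are hypothetical, and real genome scans apply more careful corrections for demography and population structure.

```python
def hudson_fst(p1, n1, p2, n2):
    """Per-locus Hudson FST estimator from allele frequencies p1, p2
    and sample sizes n1, n2 (number of sampled alleles)."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den if den > 0 else 0.0

# hypothetical allele frequencies at four loci in two populations of 50 diploids each
loci = [(0.12, 0.15), (0.40, 0.44), (0.05, 0.72), (0.50, 0.55)]
fst = [hudson_fst(p1, 100, p2, 100) for p1, p2 in loci]
outliers = [i for i, f in enumerate(fst) if f > 0.25]    # crude outlier threshold
print([round(f, 3) for f in fst], "candidate loci:", outliers)
```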
NASA Astrophysics Data System (ADS)
Yulkifli; Afandi, Zurian; Yohandri
2018-04-01
A system for measuring gravitational acceleration using the simple harmonic motion of a pendulum, digital technology and a photogate sensor has been developed. Digital technology is more practical and makes better use of the experiment time. The pendulum method calculates the acceleration of gravity using a solid ball connected by a string to a stand. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, a photogate sensor, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes them into the timing data of the pendulum oscillation. The calculated pendulum oscillation time is shown on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively. The system can therefore be used in physics experiments, especially for determining the acceleration due to gravity.
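A minimal sketch of the underlying calculation: for small-angle oscillation T = 2π√(L/g), so g = 4π²L/T²; the length and period values below are examples, not the paper's measurements.

```python
import math

def gravity_from_pendulum(length_m, period_s):
    """Small-angle pendulum: T = 2*pi*sqrt(L/g)  ->  g = 4*pi^2*L / T^2."""
    return 4.0 * math.pi ** 2 * length_m / period_s ** 2

# example: 0.50 m pendulum with a mean period of 1.42 s measured by the photogate
print(round(gravity_from_pendulum(0.50, 1.42), 2))   # ~9.79 m/s^2
```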
A tailing genome walking method suitable for genomes with high local GC content.
Liu, Taian; Fang, Yongxiang; Yao, Wenjuan; Guan, Qisai; Bai, Gang; Jing, Zhizhong
2013-10-15
The tailing genome walking strategies are simple and efficient. However, they can sometimes be limited by the low stringency of homo-oligomeric primers. Here we modified the conventional tailing step by adding polythymidine and polyguanine to the target single-stranded DNA (ssDNA). The tailed ssDNA was then amplified exponentially with a specific primer in the known region and a primer comprising 5' polycytosine and 3' polyadenosine. The successful application of this novel method to identifying integration sites mediated by φC31 integrase in the goat genome indicates that the method is well suited to genomes with high complexity and high local GC content. Copyright © 2013 Elsevier Inc. All rights reserved.
Simultaneous gene finding in multiple genomes.
König, Stefanie; Romoth, Lars W; Gerischer, Lizzy; Stanke, Mario
2016-11-15
As the tree of life is populated with sequenced genomes ever more densely, the new challenge is the accurate and consistent annotation of entire clades of genomes. We address this problem with a new approach to comparative gene finding that takes a multiple genome alignment of closely related species and simultaneously predicts the location and structure of protein-coding genes in all input genomes, thereby exploiting negative selection and sequence conservation. The model prefers potential gene structures in the different genomes that are in agreement with each other or, if not, where the exon gains and losses are plausible given the species tree. We formulate the multi-species gene finding problem as a binary labeling problem on a graph. The resulting optimization problem is NP-hard, but can be efficiently approximated using a subgradient-based dual decomposition approach. The proposed method was tested on whole-genome alignments of 12 vertebrate and 12 Drosophila species. The accuracy was evaluated for human, mouse and Drosophila melanogaster and compared to competing methods. Results suggest that our method is well suited for the annotation of (a large number of) genomes of closely related species within a clade, in particular when RNA-Seq data are available for many of the genomes. The transfer of existing annotations from one genome to another via the genome alignment is more accurate than previous approaches based on protein-spliced alignments when the genomes are at close to medium distances. The method is implemented in C++ as part of Augustus and available open source at http://bioinf.uni-greifswald.de/augustus/. Contact: stefaniekoenig@ymail.com or mario.stanke@uni-greifswald.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Phylogenic study of Lemnoideae (duckweeds) through complete chloroplast genomes for eight accessions
Ding, Yanqiang; Fang, Yang; Guo, Ling; Li, Zhidan; He, Kaize
2017-01-01
Background Phylogenetic relationships within the different genera of Lemnoideae, a group of small aquatic monocotyledonous plants, have not been well resolved using either morphological characters or traditional markers. Given that the rich genetic information in chloroplast genomes makes them particularly useful for phylogenetic studies, we used chloroplast genomes to clarify the phylogeny within Lemnoideae. Methods DNAs were sequenced with next-generation sequencing. The duckweed chloroplast genomes were either indirectly filtered from the total DNA data or directly obtained from chloroplast DNA data. To test the reliability of assembling a chloroplast genome based on filtration of total DNA, two methods were used to assemble the chloroplast genome of Landoltia punctata strain ZH0202. A phylogenetic tree was built on the basis of the whole chloroplast genome sequences using MrBayes v.3.2.6 and PhyML 3.0. Results Eight complete duckweed chloroplast genomes were assembled, with lengths ranging from 165,775 bp to 171,152 bp, each containing 80 protein-coding sequences, four rRNAs, 30 tRNAs and two pseudogenes. The identity of the L. punctata strain ZH0202 chloroplast genomes assembled by the two methods was 100%; their sequences and lengths were completely identical. The chloroplast genome comparison demonstrated that the differences in chloroplast genome size among the Lemnoideae result primarily from variation in non-coding regions, especially from repeat sequence variation. The phylogenetic analysis demonstrated that the genera of Lemnoideae derive from one another in the following order: Spirodela, Landoltia, Lemna, Wolffiella, and Wolffia. Discussion This study demonstrates the potential of whole chloroplast genome DNA as an effective option for phylogenetic studies of Lemnoideae. It also shows the possibility of using chloroplast DNA data to elucidate phylogenies that have not yet been well resolved by traditional methods, even in plants other than duckweeds. PMID:29302399
Genome-Wide Profiling of DNA Double-Strand Breaks by the BLESS and BLISS Methods.
Mirzazadeh, Reza; Kallas, Tomasz; Bienko, Magda; Crosetto, Nicola
2018-01-01
DNA double-strand breaks (DSBs) are major DNA lesions that are constantly formed during physiological processes such as DNA replication, transcription, and recombination, or as a result of exogenous agents such as ionizing radiation, radiomimetic drugs, and genome editing nucleases. Unrepaired DSBs threaten genomic stability by leading to the formation of potentially oncogenic rearrangements such as translocations. In past few years, several methods based on next-generation sequencing (NGS) have been developed to study the genome-wide distribution of DSBs or their conversion to translocation events. We developed Breaks Labeling, Enrichment on Streptavidin, and Sequencing (BLESS), which was the first method for direct labeling of DSBs in situ followed by their genome-wide mapping at nucleotide resolution (Crosetto et al., Nat Methods 10:361-365, 2013). Recently, we have further expanded the quantitative nature, applicability, and scalability of BLESS by developing Breaks Labeling In Situ and Sequencing (BLISS) (Yan et al., Nat Commun 8:15058, 2017). Here, we first present an overview of existing methods for genome-wide localization of DSBs, and then focus on the BLESS and BLISS methods, discussing different assay design options depending on the sample type and application.
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)
2010-01-01
An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
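The patent abstract does not disclose the exact transformation set, so the sketch below illustrates only the general idea under assumed models: an equidistant fisheye projection for the stored image and a pinhole model for the planar output view. The function name, parameters, and projection choices are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def fisheye_to_planar(fisheye, pan, tilt, out_fov, out_size, fisheye_fov=np.pi):
    """Reproject a selected portion of an equidistant fisheye image onto a
    planar (perspective) view.  Illustrative sketch only; the patented method
    may use a different camera model and transformation set."""
    h, w = fisheye.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    f_fish = min(cx, cy) / (fisheye_fov / 2.0)      # equidistant model: r = f * theta

    n = out_size
    f_persp = (n / 2.0) / np.tan(out_fov / 2.0)     # pinhole focal length of the output view
    u, v = np.meshgrid(np.arange(n) - n / 2.0, np.arange(n) - n / 2.0)
    rays = np.stack([u, v, np.full_like(u, f_persp)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate rays toward the selected viewing direction (pan about y, tilt about x).
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    rot_x = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    rot_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rays = rays @ (rot_y @ rot_x).T

    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))   # angle from the optical axis
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fish * theta                                    # radial position in the fisheye image
    src_x = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return fisheye[src_y, src_x]                          # nearest-neighbour resampling
```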
Signal digitizing system and method based on amplitude-to-time optical mapping
Chou, Jason; Bennett, Corey V; Hernandez, Vince
2015-01-13
A signal digitizing system and method based on analog-to-time optical mapping optically maps amplitude information of an analog signal of interest first into wavelength information, using an amplitude tunable filter (ATF) to impress spectral changes induced by the amplitude of the analog signal onto a carrier signal, i.e. a train of optical pulses, and next from wavelength information to temporal information, using a dispersive element, so that temporal information representing the amplitude information is encoded in the time domain in the carrier signal. Optical-to-electrical conversion of the optical pulses into voltage waveforms and subsequent digitizing of the voltage waveforms into a digital image enables the temporal information to be resolved and quantized in the time domain. The digital image may then be digitally signal-processed to reconstruct the analog signal from the temporal information with high fidelity.
The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures
NASA Astrophysics Data System (ADS)
Stephenson, W. Kirk
2009-08-01
A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes.
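The box-and-dot method is a visual, pencil-and-paper device, but its two rules translate directly into a few lines of code. The sketch below is a hypothetical helper, not from the article; trailing zeros without a decimal point are treated as not significant, matching the usual textbook convention the method encodes.

```python
def significant_figures(value: str) -> int:
    """Count significant figures in a numeric string using the same rules the
    box-and-dot method boxes visually: non-zero digits and zeros between them
    are significant; leading zeros are not; trailing zeros count only when a
    decimal point is present."""
    digits = value.lstrip("+-")
    has_point = "." in digits
    digits = digits.replace(".", "")
    digits = digits.lstrip("0")          # leading zeros are never significant
    if not digits:
        return 0
    if not has_point:
        digits = digits.rstrip("0")      # trailing zeros without a decimal point are not counted
    return len(digits)

# Examples: "0.00470" -> 3, "4700" -> 2, "4700." -> 4, "40.070" -> 5
```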
Zhou, Bin; Irwanto, Astrid; Guo, Yun-Miao; Bei, Jin-Xin; Wu, Qiao; Chen, Ge; Zhang, Tai-Ping; Lei, Jin-Jv; Feng, Qi-Sheng; Chen, Li-Zhen; Liu, Jianjun; Zhao, Yu-Pei
2012-08-01
Pancreatic ductal adenocarcinoma (PDAC) is one of the most malignant cancers, with a mortality rate of more than 94%, mainly due to widespread metastases. To identify somatically mutated genes related to the metastasis of PDAC, we analyzed matched tumor and normal tissue samples from a patient diagnosed with liver-metastatic PDAC using intensive exome capture sequencing (>170× coverage). Searching for the somatic mutations that drive the clonal expansion of metastasis, we identified 12 genes with higher allele frequencies (AFs) of functional mutations in the metastatic tumor, including KRAS and TP53, genes already known to be involved in metastasis. Of the 10 candidate genes, 6 (ADRB1, DCLK1, KCNH2, NOP14, SIGLEC1, and ZC3H7A), together with KRAS and TP53, clustered into a single network (p value = 1 × 10^-22) related to cancer development. Moreover, these candidate genes showed abnormal expression in PDAC tissues and functional impacts on the migration, proliferation, and colony-formation abilities of pancreatic cancer cell lines. Furthermore, through digital PCR analysis, we revealed potential genomic mechanisms for the KRAS and TP53 mutations in the metastatic tumor. Taken together, our study shows the potential of such personalized genomic profiling to provide new biological insight into the metastasis of PDAC.
Song, Hao; Yu, Zheng-Lin; Sun, Li-Na; Xue, Dong-Xiu; Zhang, Tao; Wang, Hai-Yan
2016-07-07
During the life cycle of shellfish, larval development, especially metamorphosis, has a vital influence on the dynamics, distribution, and recruitment of natural populations, as well as on seed breeding. Rapana venosa, a carnivorous gastropod, is an important commercial shellfish in China, and is an ecological invader in the United States, Argentina, and France. However, information about the mechanism of its early development is still limited, because research in this area has long suffered from a lack of genomic resources. In this study, 15 digital gene expression (DGE) libraries from five developmental stages of R. venosa were constructed and sequenced on the Illumina HiSeq 2500 platform. Bioinformatics analysis identified numerous differentially and specifically expressed genes, which revealed that genes associated with growth, the nervous system, the digestive system, the immune system, and apoptosis participate in important developmental processes. The functional analysis of differentially expressed genes was further implemented by Gene Ontology and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment. DGE profiling provided a general picture of the transcriptomic activities during the early development of R. venosa, which may provide interesting hints for further study. Our data represent the first comparative transcriptomic information available for the early development of R. venosa, which is a prerequisite for a better understanding of the physiological traits controlling development. Copyright © 2016 Song et al.
Roberts, Megan C; Clyne, Mindy; Kennedy, Amy E; Chambers, David A; Khoury, Muin J
2017-10-26
Purpose Implementation science offers methods to evaluate the translation of genomic medicine research into practice. The extent to which the National Institutes of Health (NIH) human genomics grant portfolio includes implementation science is unknown. This brief report's objective is to describe recently funded implementation science studies in genomic medicine in the NIH grant portfolio, and identify remaining gaps. Methods We identified investigator-initiated NIH research grants on implementation science in genomic medicine (funding initiated 2012-2016). A codebook was adapted from the literature, three authors coded grants, and descriptive statistics were calculated for each code. Results Forty-two grants fit the inclusion criteria (~1.75% of investigator-initiated genomics grants). The majority of included grants proposed qualitative and/or quantitative methods with cross-sectional study designs, and described clinical settings and primarily white, non-Hispanic study populations. Most grants were in oncology and examined genetic testing for risk assessment. Finally, grants lacked the use of implementation science frameworks, and most examined uptake of genomic medicine and/or assessed patient-centeredness. Conclusion We identified large gaps in implementation science studies in genomic medicine in the funded NIH portfolio over the past 5 years. To move the genomics field forward, investigator-initiated research grants should employ rigorous implementation science methods within diverse settings and populations. Genetics in Medicine advance online publication, 26 October 2017; doi:10.1038/gim.2017.180.
A genome-wide 3C-method for characterizing the three-dimensional architectures of genomes.
Duan, Zhijun; Andronescu, Mirela; Schutz, Kevin; Lee, Choli; Shendure, Jay; Fields, Stanley; Noble, William S; Anthony Blau, C
2012-11-01
Accumulating evidence demonstrates that the three-dimensional (3D) organization of chromosomes within the eukaryotic nucleus reflects and influences genomic activities, including transcription, DNA replication, recombination and DNA repair. In order to uncover structure-function relationships, it is necessary first to understand the principles underlying the folding and the 3D arrangement of chromosomes. Chromosome conformation capture (3C) provides a powerful tool for detecting interactions within and between chromosomes. A high throughput derivative of 3C, chromosome conformation capture on chip (4C), executes a genome-wide interrogation of interaction partners for a given locus. We recently developed a new method, a derivative of 3C and 4C, which, similar to Hi-C, is capable of comprehensively identifying long-range chromosome interactions throughout a genome in an unbiased fashion. Hence, our method can be applied to decipher the 3D architectures of genomes. Here, we provide a detailed protocol for this method. Published by Elsevier Inc.
Rizzardi, Anthony E; Zhang, Xiaotun; Vogel, Rachel Isaksson; Kolb, Suzanne; Geybels, Milan S; Leung, Yuet-Kin; Henriksen, Jonathan C; Ho, Shuk-Mei; Kwak, Julianna; Stanford, Janet L; Schmechel, Stephen C
2016-07-11
Digital image analysis offers advantages over traditional pathologist visual scoring of immunohistochemistry, although few studies examining the correlation and reproducibility of these methods have been performed in prostate cancer. We evaluated the correlation between digital image analysis (continuous variable data) and pathologist visual scoring (quasi-continuous variable data), reproducibility of each method, and association of digital image analysis methods with outcomes using prostate cancer tissue microarrays (TMAs) stained for estrogen receptor-β2 (ERβ2). Prostate cancer TMAs were digitized and evaluated by pathologist visual scoring versus digital image analysis for ERβ2 staining within tumor epithelium. Two independent analysis runs were performed to evaluate reproducibility. Image analysis data were evaluated for associations with recurrence-free survival and disease specific survival following radical prostatectomy. We observed weak/moderate Spearman correlation between digital image analysis and pathologist visual scores of tumor nuclei (Analysis Run A: 0.42, Analysis Run B: 0.41), and moderate/strong correlation between digital image analysis and pathologist visual scores of tumor cytoplasm (Analysis Run A: 0.70, Analysis Run B: 0.69). For the reproducibility analysis, there was high Spearman correlation between pathologist visual scores generated for individual TMA spots across Analysis Runs A and B (Nuclei: 0.84, Cytoplasm: 0.83), and very high correlation between digital image analysis for individual TMA spots across Analysis Runs A and B (Nuclei: 0.99, Cytoplasm: 0.99). Further, ERβ2 staining was significantly associated with increased risk of prostate cancer-specific mortality (PCSM) when quantified by cytoplasmic digital image analysis (HR 2.16, 95 % CI 1.02-4.57, p = 0.045), nuclear image analysis (HR 2.67, 95 % CI 1.20-5.96, p = 0.016), and total malignant epithelial area analysis (HR 5.10, 95 % CI 1.70-15.34, p = 0.004). After adjusting for clinicopathologic factors, only total malignant epithelial area ERβ2 staining was significantly associated with PCSM (HR 4.08, 95 % CI 1.37-12.15, p = 0.012). Digital methods of immunohistochemical quantification are more reproducible than pathologist visual scoring in prostate cancer, suggesting that digital methods are preferable and especially warranted for studies involving large sample sizes.
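As a minimal illustration of the method comparison reported above, the snippet below computes a Spearman rank correlation between hypothetical paired scores for the same TMA spots; the numbers are invented, and the study's own software and data are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired ERβ2 scores for the same TMA spots:
# pathologist visual scores (quasi-continuous) and digital image analysis output.
visual = np.array([120, 80, 200, 150, 90, 60, 240, 110])
digital = np.array([0.41, 0.28, 0.66, 0.52, 0.35, 0.30, 0.71, 0.33])

rho, p = spearmanr(visual, digital)   # rank-based agreement between the two methods
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```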
Integration of Digital Dental Casts in Cone-Beam Computed Tomography Scans
Rangel, Frits A.; Maal, Thomas J. J.; Bergé, Stefaan J.; Kuijpers-Jagtman, Anne Marie
2012-01-01
Cone-beam computed tomography (CBCT) is widely used in maxillofacial surgery. The CBCT image of the dental arches, however, is of insufficient quality to use in digital planning of orthognathic surgery. Several authors have described methods to integrate digital dental casts into CBCT scans, but all reported methods have drawbacks. The aim of this feasibility study is to present a new simplified method to integrate digital dental casts into CBCT scans. In a patient scheduled for orthognathic surgery, titanium markers were glued to the gingiva. Next, a CBCT scan and dental impressions were made. During the impression-taking procedure, the titanium markers were transferred to the impression. The impressions were scanned, and all CBCT datasets were exported in DICOM format. The two datasets were matched, and the dentition derived from the scanned impressions was transferred to the CBCT of the patient. After matching the two datasets, the average distance between the corresponding markers was 0.1 mm. This novel method allows for the integration of digital dental casts into CBCT scans, overcoming problems such as unwanted extra radiation exposure, distortion of soft tissues due to the use of bite jigs, and time-consuming digital data handling. PMID:23050159
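The abstract does not state which registration algorithm was used to match the two marker sets, so the sketch below shows one standard approach under the assumption of known point-to-point marker correspondences: a least-squares rigid transform via the Kabsch/Procrustes solution. The function names are illustrative, not the authors' software.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping the
    marker coordinates `src` (Nx3, e.g. titanium markers in the scanned
    impression) onto `dst` (Nx3, the same markers in the CBCT volume)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

def mean_marker_distance(src, dst, r, t):
    """Mean residual distance between corresponding markers after registration
    (the 0.1 mm figure in the abstract is this kind of agreement metric)."""
    return np.linalg.norm((src @ r.T + t) - dst, axis=1).mean()
```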
Securing Digital Audio using Complex Quadratic Map
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi
2018-03-01
In this digital era, exchanging data is common and easy, and data are therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is particularly vulnerable to attack is digital audio. A data-securing method is therefore needed that is both robust and fast. One class of methods that meets these criteria secures data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM passes all 15 NIST tests, which means the key stream generated using this CQM is proven to be random. In addition, samples of encrypted digital audio tested with a goodness-of-fit test are shown to be uniformly distributed, so audio secured with this method is not vulnerable to frequency-analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attacks. Finally, the processing speed for both encryption and decryption is, on average, about 450 times faster than the duration of the digital audio itself.
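A minimal sketch of the general scheme is given below: a keystream is generated by iterating the complex quadratic map, and the audio bytes are XORed with it. The seed, map parameter, bounding rule, and byte-extraction step are illustrative assumptions; the paper's specific parameter values (those passing the NIST tests) are not reproduced.

```python
import numpy as np

def cqm_keystream(n_bytes, z0=0.1 + 0.1j, c=-0.8 + 0.156j):
    """Pseudo-random byte stream from the complex quadratic map
    z_{k+1} = z_k**2 + c.  Parameters and the extraction rule are illustrative."""
    z = z0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        z = z * z + c
        if abs(z) > 2.0:                      # keep the orbit bounded
            z = z / abs(z) + c
        # take the fractional part of a scaled coordinate as one key byte
        out[i] = int((abs(z.real) * 1e6) % 256)
    return out

def encrypt_audio(samples: np.ndarray) -> np.ndarray:
    """XOR the raw audio bytes with the chaotic keystream.  Decryption is the
    same XOR applied again with the same key parameters, then reinterpreting
    the bytes with the original sample dtype."""
    raw = samples.tobytes()
    ks = cqm_keystream(len(raw))
    return np.frombuffer(raw, dtype=np.uint8) ^ ks
```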
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-01-01
Digital holography is a technique that includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data, and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in fields such as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) is proposed to increase the SNR of reconstructed digital holograms. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with a resolution of 512×512 elements were performed. Registration of shots with a Canon EOS 400D digital camera was simulated. It is shown that use of the averaging-over-frames method alone increases the SNR only up to 4 times, and further increase of the SNR is limited by spatial noise. Application of the LSNP compensation method in conjunction with averaging over frames allows a 10-fold SNR increase. This value was obtained for an LSNP measured with 20% error. With a more accurately measured LSNP, the SNR can be increased up to 20 times.
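A compact sketch of the combined procedure, under the assumption that the LSNP is available as a per-pixel array measured beforehand and that a simple additive noise model applies; the array shapes and the SNR helper are illustrative, not the authors' simulation code.

```python
import numpy as np

def denoise_hologram(frames, lsnp):
    """Frame averaging plus LSNP compensation: temporal averaging suppresses
    random (shot/readout) noise, and subtracting the measured per-pixel noise
    portrait removes the fixed spatial pattern that limits further SNR gains."""
    avg = np.mean(np.asarray(frames, dtype=np.float64), axis=0)  # average over registered shots
    return avg - lsnp                                            # remove fixed-pattern spatial noise

def snr_db(reference, estimate):
    """SNR of an estimate relative to a noise-free reference, in dB."""
    noise = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))
```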
Simpson, V; Hughes, M; Wilkinson, J; Herrick, A L; Dinsdale, G
2018-03-01
Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm² and 17.6 ± 19.3 mm², respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. © 2017, American College of Rheumatology.
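A sketch of the planimetry arithmetic only (the study used custom rater software): an ellipse ROI drawn in pixels is converted to mm² with the photograph's calibration factor. The example values are hypothetical.

```python
import numpy as np

def ellipse_area_mm2(major_axis_px, minor_axis_px, mm_per_px):
    """Area of an elliptical region of interest drawn over a digital ulcer,
    converted to mm^2 with the photograph's calibration factor."""
    a = 0.5 * major_axis_px * mm_per_px       # semi-major axis in mm
    b = 0.5 * minor_axis_px * mm_per_px       # semi-minor axis in mm
    return np.pi * a * b

# Example: a 6.0 x 4.0 mm ellipse, axes given in pixels at 0.05 mm/pixel
print(ellipse_area_mm2(120, 80, 0.05))        # ~18.8 mm^2
```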
NASA Technical Reports Server (NTRS)
Patterson, G.
1973-01-01
The data processing procedures and the computer programs were developed to predict structural responses using the Impulse Transfer Function (ITF) method. There are three major steps in the process: (1) analog-to-digital (A-D) conversion of the test data to produce Phase I digital tapes, (2) processing of the Phase I digital tapes to extract ITFs and store them in a permanent data bank, and (3) prediction of structural responses to a set of applied loads. The analog-to-digital conversion is performed by a standard package, which will be described later in terms of the contents of the resulting Phase I digital tape. Two separate computer programs have been developed to perform the digital processing.
Large scale genomic reorganization of topological domains at the HoxD locus.
Fabre, Pierre J; Leleu, Marion; Mormann, Benjamin H; Lopez-Delisle, Lucille; Noordermeer, Daan; Beccari, Leonardo; Duboule, Denis
2017-08-07
The transcriptional activation of HoxD genes during mammalian limb development involves dynamic interactions with two topologically associating domains (TADs) flanking the HoxD cluster. In particular, the activation of the most posterior HoxD genes in developing digits is controlled by regulatory elements located in the centromeric TAD (C-DOM) through long-range contacts. To assess the structure-function relationships underlying such interactions, we measured compaction levels and TAD discreteness using a combination of chromosome conformation capture (4C-seq) and DNA FISH. We assessed the robustness of the TAD architecture by using a series of genomic deletions and inversions that impact the integrity of this chromatin domain and that remodel long-range contacts. We report multi-partite associations between HoxD genes and up to three enhancers. We find that the loss of native chromatin topology leads to the remodeling of TAD structure following distinct parameters. Our results reveal that the recomposition of TAD architectures after large genomic re-arrangements is dependent on a boundary-selection mechanism in which CTCF mediates the gating of long-range contacts in combination with genomic distance and sequence specificity. Accordingly, the building of a recomposed TAD at this locus depends on distinct functional and constitutive parameters.
Sasheva, Pavlina; Grossniklaus, Ueli
2017-01-01
In recent years, it has become increasingly clear that environmental influences can affect the epigenomic landscape and that some epigenetic variants can have heritable, phenotypic effects. While there are a variety of methods to perform genome-wide analyses of DNA methylation in model organisms, this is still a challenging task for non-model organisms without a reference genome. Differentially methylated region-representational difference analysis (DMR-RDA) is a sensitive and powerful PCR-based technique that isolates DNA fragments that are differentially methylated between two otherwise identical genomes. The technique does not require special equipment and is independent of prior knowledge about the genome. It is applicable even to large, highly complex genomes, making it the method of choice for the analysis of non-model plant systems.
Sedlar, Karel; Kolek, Jan; Provaznik, Ivo; Patakova, Petra
2017-02-20
The complete genome sequence of the non-type strain Clostridium pasteurianum NRRL B-598 was introduced last year; it is an oxygen-tolerant, spore-forming, mesophilic, heterofermentative bacterium with high hydrogen production and acetone-butanol fermentation ability. Basic genome statistics have shown its similarity to C. beijerinckii rather than to the C. pasteurianum species. Here, we present a comparative analysis of the strain with several other complete clostridial genome sequences. Besides a 16S rRNA gene sequence comparison, digital DNA-DNA hybridization (dDDH) and phylogenomic analysis confirmed the inaccuracy of the taxonomic status of strain Clostridium pasteurianum NRRL B-598. We therefore suggest its reclassification as Clostridium beijerinckii NRRL B-598. This is a specific strain and is not identical to other C. beijerinckii strains. This misclassification explains its unexpected behavior, different from that of other C. pasteurianum strains; it also permits a better understanding of the bacterium for future genetic manipulation that might increase its biofuel production potential. Copyright © 2017 Elsevier B.V. All rights reserved.
GStream: Improving SNP and CNV Coverage on Genome-Wide Association Studies
Alonso, Arnald; Marsal, Sara; Tortosa, Raül; Canela-Xandri, Oriol; Julià, Antonio
2013-01-01
We present GStream, a method that combines genome-wide SNP and CNV genotyping in the Illumina microarray platform with unprecedented accuracy. This new method outperforms previous well-established SNP genotyping software. More importantly, the CNV calling algorithm of GStream dramatically improves the results obtained by previous state-of-the-art methods and yields an accuracy that is close to that obtained by purely CNV-oriented technologies like Comparative Genomic Hybridization (CGH). We demonstrate the superior performance of GStream using microarray data generated from HapMap samples. Using the reference CNV calls generated by the 1000 Genomes Project (1KGP) and well-known studies on whole genome CNV characterization based either on CGH or genotyping microarray technologies, we show that GStream can increase the number of reliably detected variants up to 25% compared to previously developed methods. Furthermore, the increased genome coverage provided by GStream allows the discovery of CNVs in close linkage disequilibrium with SNPs, previously associated with disease risk in published Genome-Wide Association Studies (GWAS). These results could provide important insights into the biological mechanism underlying the detected disease risk association. With GStream, large-scale GWAS will not only benefit from the combined genotyping of SNPs and CNVs at an unprecedented accuracy, but will also take advantage of the computational efficiency of the method. PMID:23844243
Nguyen, Quan H; Tellam, Ross L; Naval-Sanchez, Marina; Porto-Neto, Laercio R; Barendse, William; Reverter, Antonio; Hayes, Benjamin; Kijas, James; Dalrymple, Brian P
2018-03-01
Genome sequences for hundreds of mammalian species are available, but an understanding of their genomic regulatory regions, which control gene expression, is only beginning. A comprehensive prediction of potential active regulatory regions is necessary to functionally study the roles of the majority of genomic variants in evolution, domestication, and animal production. We developed a computational method to predict regulatory DNA sequences (promoters, enhancers, and transcription factor binding sites) in production animals (cows and pigs) and extended its broad applicability to other mammals. The method utilizes human regulatory features identified from thousands of tissues, cell lines, and experimental assays to find homologous regions that are conserved in sequences and genome organization and are enriched for regulatory elements in the genome sequences of other mammalian species. Importantly, we developed a filtering strategy, including a machine learning classification method, to utilize a very small number of species-specific experimental datasets available to select for the likely active regulatory regions. The method finds the optimal combination of sensitivity and accuracy to unbiasedly predict regulatory regions in mammalian species. Furthermore, we demonstrated the utility of the predicted regulatory datasets in cattle for prioritizing variants associated with multiple production and climate change adaptation traits and identifying potential genome editing targets.
Strategies and tools for whole genome alignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couronne, Olivier; Poliakov, Alexander; Bray, Nicolas
2002-11-25
The availability of the assembled mouse genome makes possible, for the first time, an alignment and comparison of two large vertebrate genomes. We have investigated different strategies of alignment for the subsequent analysis of conservation of genomes that are effective for different quality assemblies. These strategies were applied to the comparison of the working draft of the human genome with the Mouse Genome Sequencing Consortium assembly, as well as other intermediate mouse assemblies. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90 percent of known coding exons in the human genome. We have obtained such coverage while preserving specificity. With a view towards the end user, we have developed a suite of tools and websites for automatically aligning, and subsequently browsing and working with, whole genome comparisons. We describe the use of these tools to identify conserved non-coding regions between the human and mouse genomes, some of which have not been identified by other methods.
Digital storytelling: an innovative tool for practice, education, and research.
Lal, Shalini; Donnelly, Catherine; Shin, Jennifer
2015-01-01
Digital storytelling is a method of using storytelling, group work, and modern technology to facilitate the creation of 2-3 minute multi-media video clips to convey personal or community stories. Digital storytelling is being used within the health care field; however, there has been limited documentation of its application within occupational therapy. This paper introduces digital storytelling and proposes how it can be applied in occupational therapy clinical practice, education, and research. The ethical and methodological challenges in relation to using the method are also discussed.
Digital data storage systems, computers, and data verification methods
Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.
2005-12-27
Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
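A minimal sketch of the verification idea in the claim: hash the selected portion of the dynamic database at an initial moment, hash it again at a subsequent moment, and compare the two hashes. The record encoding, the choice of SHA-256, and the variable names are assumptions for illustration.

```python
import hashlib

def hash_portion(records) -> str:
    """Hash of a selected portion of a database, computed over a canonical
    encoding of its records (encoding scheme is an illustrative assumption)."""
    h = hashlib.sha256()
    for rec in records:
        h.update(repr(rec).encode("utf-8"))
    return h.hexdigest()

# Usage sketch with hypothetical snapshots of the same portion
db_rows_at_t0 = [("id-1", "alpha"), ("id-2", "beta")]   # portion read at the initial moment
db_rows_at_t1 = [("id-1", "alpha"), ("id-2", "beta")]   # same portion re-read later

first = hash_portion(db_rows_at_t0)
second = hash_portion(db_rows_at_t1)
unchanged = (first == second)   # verification: data not altered between the two moments
```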
Research on coding and decoding method for digital levels.
Tu, Li-fen; Zhong, Si-dong
2011-01-20
A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in the digital image signal, the method overcomes the trade-off between field of view and image resolution in digital level measurement, making geodetic leveling easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm over a measuring range of 2 m to 100 m, which meets practical needs.
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system and inertial measurement unit sensors.
Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi
2013-04-10
Sequencing of microbial genomes is important because of the antibiotic-related and pathogenic activities that microbes carry. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenicity- or antibiotic-related genes are carried in genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for ongoing sequencing genomes. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembling tool, a functional annotation pipeline, and a high-performance GI-predicting module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). The draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which provides functional annotation and highly probable GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences and functional analysis, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.
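The abstract does not detail the GI-GPS feature set, so the sketch below only illustrates the general idea of SVM-based genomic profile scanning: compute compositional features over sliding windows of a draft contig and classify each window with a trained SVM. The window size, tetranucleotide features, and training step are assumptions, not the GI-POP implementation.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

BASES = "ACGT"
TETRAMERS = ["".join(p) for p in product(BASES, repeat=4)]

def window_profile(seq: str) -> np.ndarray:
    """Compositional profile of one genomic window: GC content plus
    tetranucleotide frequencies (an assumed feature set)."""
    counts = np.array([seq.count(k) for k in TETRAMERS], dtype=float)
    freqs = counts / max(counts.sum(), 1.0)
    gc = (seq.count("G") + seq.count("C")) / max(len(seq), 1)
    return np.concatenate([[gc], freqs])

def train_classifier(X_train, y_train):
    """Fit an SVM on labelled example windows (1 = genomic island, 0 = backbone)."""
    return SVC(kernel="rbf").fit(X_train, y_train)

def scan_contig(contig, clf, window=5000, step=1000):
    """Slide a window along a draft contig and return regions the trained SVM
    flags as genomic-island-like."""
    hits = []
    for start in range(0, max(len(contig) - window, 0) + 1, step):
        w = contig[start:start + window]
        if clf.predict([window_profile(w)])[0] == 1:
            hits.append((start, start + window))
    return hits
```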
A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.
Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi
2010-04-01
The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers and other front-end electronics cause a signal baseline drift in a DC-coupled system, which degrades the energy resolution and positioning accuracy. Event pileup is common in a high count-rate system, and baseline drift creates errors in the pileup correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pileup correction. Many methods have been reported for BLR, from classic analog methods to digital filter solutions. However, a single-channel analog BLR can only work below a 500 kcps count rate, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to retrieve the original pulse with zero baseline drift. The method can self-track the baseline without involving a micro-controller. The circuit consists of two digital counter/timers, one comparator, one register and one subtraction unit. Simulation shows a single channel works at a 30 Mcps count rate under pileup conditions. A total of 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
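A software sketch of the statistical idea only (the actual implementation is FPGA logic built from counters, a comparator, a register and a subtractor): estimate the baseline from free-running ADC samples that are not part of a gamma pulse and subtract it from the digitized waveform. The threshold-based pulse exclusion below is an assumed stand-in for the hardware gating.

```python
import numpy as np

def estimate_baseline(samples, pulse_threshold):
    """Statistics-based baseline estimate: keep only samples close to the
    median (i.e. not part of a gamma pulse) and average them."""
    quiet = samples[np.abs(samples - np.median(samples)) < pulse_threshold]
    return quiet.mean() if quiet.size else samples.mean()

def restore_baseline(waveform, baseline):
    """Subtract the running baseline so each pulse sits on a zero-drift floor
    before pile-up correction."""
    return waveform - baseline
```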
McMahon, Tanis C.; Blais, Burton W.; Wong, Alex; Carrillo, Catherine D.
2017-01-01
Foodborne illness attributed to enterohemorrhagic E. coli (EHEC), a highly pathogenic subset of Shiga toxin-producing E. coli (STEC), is increasingly recognized as a significant public health issue. Current microbiological methods for identification of EHEC in foods often use PCR-based approaches to screen enrichment broth cultures for characteristic gene markers [i.e., Shiga toxin (stx) and intimin (eae)]. However, false positives arise when complex food matrices, such as beef, contain mixtures of eae-negative STEC and eae-positive E. coli, but no EHEC with both markers in a single cell. To reduce false-positive detection of EHEC in food enrichment samples, a Multiplexed, Single Intact Cell droplet digital PCR (MuSIC ddPCR) assay capable of detecting the co-occurrence of the stx and eae genes in a single bacterial cell was developed. This method requires: (1) dispersal of intact bacteria into droplets; (2) release of genomic DNA (gDNA) by heat lysis; and (3) amplification and detection of genetic targets (stx and eae) using standard TaqMan chemistries with ddPCR. Performance of the method was tested with panels of EHEC and non-target E. coli. By determining the linkage (i.e., the proportion of droplets in which stx and eae targets were both amplified), samples containing EHEC (typically greater than 20% linkage) could be distinguished from samples containing mixtures of eae-negative STEC and eae-positive E. coli (0–2% linkage). The use of intact cells was necessary as this linkage was not observed with gDNA extracts. EHEC could be accurately identified in enrichment broth cultures containing excess amounts of background E. coli and in enrichment cultures derived from ground beef/pork and leafy-green produce samples. To our knowledge, this is the first report of dual-target detection in single bacterial cells using ddPCR. The application of MuSIC ddPCR to enrichment-culture screening would reduce false-positives, thereby improving the cost, speed, and accuracy of current methods for EHEC detection in foods. PMID:28303131
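A minimal sketch of the linkage calculation described above, using hypothetical droplet counts. The exact denominator used by the authors is an assumption here; the thresholds quoted in the abstract (greater than 20% for EHEC, 0-2% for mixtures) are used only as interpretation guides.

```python
def linkage_percent(n_double_pos, n_stx_only, n_eae_only):
    """Linkage = proportion of target-positive droplets in which both stx and
    eae amplified (denominator choice is an assumption)."""
    total_positive = n_double_pos + n_stx_only + n_eae_only
    return 100.0 * n_double_pos / total_positive if total_positive else 0.0

# Example droplet classification counts (hypothetical)
print(linkage_percent(n_double_pos=180, n_stx_only=300, n_eae_only=250))  # ~24.7% -> EHEC-like
```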
Measurement methods to build up the digital optical twin
NASA Astrophysics Data System (ADS)
Prochnau, Marcel; Holzbrink, Michael; Wang, Wenxin; Holters, Martin; Stollenwerk, Jochen; Loosen, Peter
2018-02-01
The realization of the Digital Optical Twin (DOT), which is in short the digital representation of the physical state of an optical system, is particularly useful in the context of an automated assembly process for optical systems. During the assembly process, the physical state of the optical system is continuously measured and compared with the digital model. In case of deviations between the physical state and the digital model, the latter is adapted to match the physical state. To reach this goal, the first step is to identify and evaluate measurement and characterization technologies with respect to their suitability for generating a precise digital twin of an existing optical system. This paper gives an overview of possible characterization methods and shows first results of the evaluated and compared methods (e.g. spot radius, MTF, Zernike polynomials) used to create a DOT. The focus initially lies on the uniqueness of the optimization results as well as on the computational time required for the optimization to reach the characterized system state. Possible sources of error are the measurement accuracy (to characterize the system), the execution time of the measurement, the time needed to map the digital to the physical world (optimization step), as well as the interface possibilities for integrating the measurement tool into an assembly cell. Moreover, it is discussed whether the measurement methods used are suitable for a 'seamless' integration into an assembly cell.
Formal methods and digital systems validation for airborne systems
NASA Technical Reports Server (NTRS)
Rushby, John
1993-01-01
This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi, Anant; Lee, George
2016-10-01
With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned and represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images. Additionally, there has been recent substantial interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification. We also briefly review some of the state of the art in fusion of radiology and pathology images and in combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology. Copyright © 2016 Elsevier B.V. All rights reserved.
Feasibility and Accuracy of Digitizing Edentulous Maxillectomy Defects: A Comparative Study.
Elbashti, Mahmoud E; Hattori, Mariko; Patzelt, Sebastian Bm; Schulze, Dirk; Sumita, Yuka I; Taniguchi, Hisashi
The aim of this study was to evaluate the feasibility and accuracy of using an intraoral scanner to digitize edentulous maxillectomy defects. A total of 20 maxillectomy models with two defect types were digitized using cone beam computed tomography. Conventional and digital impressions were made using silicone impression material and a laboratory optical scanner as well as a chairside intraoral scanner. The 3D datasets were analyzed using 3D evaluation software. Two-way analysis of variance revealed no interaction between defect types and impression methods, and the accuracy of the impression methods was significantly different (P = .0374). Digitizing edentulous maxillectomy defect models using a chairside intraoral scanner appears to be feasible and accurate.
Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B; Sturm, Benjamin W
2014-11-11
A scintillator radiation detector system according to one embodiment includes a scintillator; and a processing device for processing pulse traces corresponding to light pulses from the scintillator, wherein pulse digitization is used to improve energy resolution of the system. A scintillator radiation detector system according to another embodiment includes a processing device for fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times and performing a direct integration of fit parameters. A method according to yet another embodiment includes processing pulse traces corresponding to light pulses from a scintillator, wherein pulse digitization is used to improve energy resolution of the system. A method in a further embodiment includes fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times; and performing a direct integration of fit parameters. Additional systems and methods are also presented.
2010-01-01
Background An important focus of genomic science is the discovery and characterization of all functional elements within genomes. In silico methods are used in genome studies to discover putative regulatory genomic elements (called words or motifs). Although a number of methods have been developed for motif discovery, most of them lack the scalability needed to analyze large genomic data sets. Methods This manuscript presents WordSeeker, an enumerative motif discovery toolkit that utilizes multi-core and distributed computational platforms to enable scalable analysis of genomic data. A controller task coordinates activities of worker nodes, each of which (1) enumerates a subset of the DNA word space and (2) scores words with a distributed Markov chain model. Results A comprehensive suite of performance tests was conducted to demonstrate the performance, speedup and efficiency of WordSeeker. The scalability of the toolkit enabled the analysis of the entire genome of Arabidopsis thaliana; the results of the analysis were integrated into The Arabidopsis Gene Regulatory Information Server (AGRIS). A public version of WordSeeker was deployed on the Glenn cluster at the Ohio Supercomputer Center. Conclusion WordSeeker effectively utilizes concurrent computing platforms to enable the identification of putative functional elements in genomic data sets. This capability facilitates the analysis of the large quantity of sequenced genomic data. PMID:21210985
NASA Astrophysics Data System (ADS)
Kurnia, H.; Noerhadi, N. A. I.
2017-08-01
Three-dimensional digital study models were introduced following advances in digital technology. This study was carried out to assess the reliability of digital study models produced with a newly assembled laser scanning device. The aim of this study was to compare the digital study models with conventional models. Twelve sets of dental impressions were taken from patients with mild-to-moderate crowding. The impressions were taken twice, one with alginate and the other with polyvinylsiloxane. The alginate impressions were made into conventional models, and the polyvinylsiloxane impressions were scanned to produce digital models. The mesiodistal tooth width and Little's irregularity index (LII) were measured manually with digital calipers on the conventional models and digitally on the digital study models. Bolton analysis was performed on each set of study models. Each method was carried out twice to check for intra-observer variability. The reproducibility (comparison of the methods) was assessed using independent-sample t-tests. The mesiodistal tooth widths of the conventional and digital models did not significantly differ (p > 0.05). Independent-sample t-tests did not identify statistically significant differences for the Bolton analysis or LII (p = 0.603 for Bolton and p = 0.894 for LII). The measurements of the digital study models are as accurate as those of the conventional models.
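For reference, the Bolton analysis mentioned above reduces to two ratios of summed mesiodistal widths. The sketch below assumes the widths are supplied first-molar-to-first-molar with the six anterior teeth (canine to canine) listed first in each arch; it illustrates the arithmetic only, not the authors' measurement software.

```python
def bolton_ratios(mandibular_widths, maxillary_widths):
    """Bolton tooth-size ratios from mesiodistal widths in mm.
    Overall ratio uses 12 teeth per arch, anterior ratio the first 6
    (assumed ordering: anterior teeth first).  Commonly cited reference
    means are about 91.3% overall and 77.2% anterior."""
    overall = 100.0 * sum(mandibular_widths[:12]) / sum(maxillary_widths[:12])
    anterior = 100.0 * sum(mandibular_widths[:6]) / sum(maxillary_widths[:6])
    return overall, anterior
```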
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.
Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly affects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appears to cause either false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values.
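Although the workflow above concerns assay design, the final copy-number readout in ddPCR rests on a standard Poisson correction of the droplet counts; a sketch of that arithmetic is shown below. The droplet volume used is a typical value and an assumption, not a constant from the paper.

```python
import math

def copies_per_microliter(positive, total, droplet_volume_nl=0.85):
    """Absolute target concentration from droplet counts via the standard
    ddPCR Poisson correction: lambda = -ln(1 - p), with p the fraction of
    positive droplets."""
    p = positive / total
    lam = -math.log(1.0 - p)                 # mean target copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # copies per microliter of reaction

# Example: 2,500 positive droplets out of 15,000 accepted droplets
print(round(copies_per_microliter(2500, 15000)))   # ~215 copies/µl
```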
Digital Libraries--Methods and Applications
ERIC Educational Resources Information Center
Huang, Kuo Hung, Ed.
2011-01-01
Digital library is commonly seen as a type of information retrieval system which stores and accesses digital content remotely via computer networks. However, the vision of digital libraries is not limited to technology or management, but user experience. This book is an attempt to share the practical experiences of solutions to the operation of…
Dewey, Colin N
2012-01-01
Whole-genome alignment (WGA) is the prediction of evolutionary relationships at the nucleotide level between two or more genomes. It combines aspects of both colinear sequence alignment and gene orthology prediction, and is typically more challenging to address than either of these tasks due to the size and complexity of whole genomes. Despite the difficulty of this problem, numerous methods have been developed for its solution because WGAs are valuable for genome-wide analyses, such as phylogenetic inference, genome annotation, and function prediction. In this chapter, we discuss the meaning and significance of WGA and present an overview of the methods that address it. We also examine the problem of evaluating whole-genome aligners and offer a set of methodological challenges that need to be tackled in order to make the most effective use of our rapidly growing databases of whole genomes.
Transcriptome Assembly, Gene Annotation and Tissue Gene Expression Atlas of the Rainbow Trout
Salem, Mohamed; Paneru, Bam; Al-Tobasei, Rafet; Abdouni, Fatima; Thorgaard, Gary H.; Rexroad, Caird E.; Yao, Jianbo
2015-01-01
Efforts to obtain a comprehensive genome sequence for rainbow trout are ongoing and will be complemented by transcriptome information that will enhance genome assembly and annotation. Previously, transcriptome reference sequences were reported using data from different sources. Although the previous work added a great wealth of sequences, a complete and well-annotated transcriptome is still needed. In addition, gene expression in different tissues was not completely addressed in the previous studies. In this study, non-normalized cDNA libraries were sequenced from 13 different tissues of a single doubled haploid rainbow trout from the same source used for the rainbow trout genome sequence. A total of ~1.167 billion paired-end reads were de novo assembled using the Trinity RNA-Seq assembler yielding 474,524 contigs > 500 base-pairs. Of them, 287,593 had homologies to the NCBI non-redundant protein database. The longest contig of each cluster was selected as a reference, yielding 44,990 representative contigs. A total of 4,146 contigs (9.2%), including 710 full-length sequences, did not match any mRNA sequences in the current rainbow trout genome reference. Mapping reads to the reference genome identified an additional 11,843 transcripts not annotated in the genome. A digital gene expression atlas revealed 7,678 housekeeping and 4,021 tissue-specific genes. Expression of about 16,000–32,000 genes (35–71% of the identified genes) accounted for basic and specialized functions of each tissue. White muscle and stomach had the least complex transcriptomes, with high percentages of their total mRNA contributed by a small number of genes. Brain, testis and intestine, in contrast, had complex transcriptomes, with a large number of genes involved in their expression patterns. This study provides comprehensive de novo transcriptome information that is suitable for functional and comparative genomics studies in rainbow trout, including annotation of the genome. PMID:25793877
Meier-Kolthoff, Jan P; Hahnke, Richard L; Petersen, Jörn; Scheuner, Carmen; Michael, Victoria; Fiebig, Anne; Rohde, Christine; Rohde, Manfred; Fartmann, Berthold; Goodwin, Lynne A; Chertkov, Olga; Reddy, Tbk; Pati, Amrita; Ivanova, Natalia N; Markowitz, Victor; Kyrpides, Nikos C; Woyke, Tanja; Göker, Markus; Klenk, Hans-Peter
2014-01-01
Although Escherichia coli is the most widely studied bacterial model organism and often considered to be the model bacterium per se, its type strain had until now been neglected in microbial genomics. As a part of the Genomic Encyclopedia of Bacteria and Archaea (GEBA) project, we here describe the features of E. coli DSM 30083(T) together with its genome sequence and annotation as well as novel aspects of its phenotype. The 5,038,133 bp genome sequence includes 4,762 protein-coding genes and 175 RNA genes as well as a single plasmid. Affiliation of a set of 250 genome-sequenced E. coli strains, Shigella and outgroup strains to the type strain of E. coli was investigated using digital DNA:DNA-hybridization (dDDH) similarities and differences in genomic G+C content. As in the majority of previous studies, results show Shigella spp. embedded within E. coli and in most cases forming a single subgroup of it. Phylogenomic trees also recover the proposed E. coli phylotypes as monophyla with minor exceptions and place DSM 30083(T) in phylotype B2 with E. coli S88 as its closest neighbor. The widely used lab strain K-12 is not only genomically but also physiologically strongly different from the type strain. The phylotypes do not, however, show a uniform level of character divergence as measured by dDDH; an alternative arrangement is therefore proposed and discussed in the context of bacterial subspecies. Analyses of the genome sequences of a large number of E. coli strains and of strains from > 100 other bacterial genera indicate a value of 79-80% dDDH as the most promising threshold for delineating subspecies, which in turn suggests the presence of five subspecies within E. coli.
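To make the subspecies-threshold idea concrete, the sketch below groups genomes by single-linkage clustering of pairwise dDDH values at the ~79% cutoff mentioned above. The pairwise dDDH values are invented for illustration, and the actual dDDH computation is not reproduced here.

```python
# Illustrative grouping of genomes into subspecies-like clusters by single-linkage
# on pairwise dDDH values, using the ~79% threshold suggested in the abstract.

def cluster_by_ddh(genomes, ddh, threshold=79.0):
    parent = {g: g for g in genomes}            # simple union-find

    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g

    for (a, b), value in ddh.items():
        if value >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for g in genomes:
        clusters.setdefault(find(g), []).append(g)
    return list(clusters.values())

genomes = ["DSM30083T", "S88", "K-12"]
ddh = {("DSM30083T", "S88"): 84.2,              # values below are hypothetical
       ("DSM30083T", "K-12"): 76.5,
       ("S88", "K-12"): 77.1}
print(cluster_by_ddh(genomes, ddh))             # [['DSM30083T', 'S88'], ['K-12']]
```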
Efficient engineering of a bacteriophage genome using the type I-E CRISPR-Cas system.
Kiro, Ruth; Shitrit, Dror; Qimron, Udi
2014-01-01
The clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated (Cas) system has recently been used to engineer genomes of various organisms, but surprisingly, not those of bacteriophages (phages). Here we present a method to genetically engineer the Escherichia coli phage T7 using the type I-E CRISPR-Cas system. The T7 phage genome is edited by homologous recombination with a DNA sequence flanked by sequences homologous to the desired location. Non-edited genomes are targeted by the CRISPR-Cas system, thus enabling isolation of the desired recombinant phages. This method broadens CRISPR-Cas-based editing to phages and uses a CRISPR-Cas type other than type II. The method may be adjusted to genetically engineer any bacteriophage genome.
The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures
ERIC Educational Resources Information Center
Stephenson, W. Kirk
2009-01-01
A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)
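As a coded analogue of the counting rules (not the visual box-and-dot procedure itself), a short sketch that counts significant figures in plain decimal strings could look like this; treating an all-zero input as having no significant figures is an assumption.

```python
# Significant figures from first to last non-zero digit, plus trailing zeros
# when a decimal point is present. Plain decimal strings only (no exponents).

def count_sig_figs(number_text: str) -> int:
    s = number_text.lstrip("+-")
    digits = s.replace(".", "")
    first = next((i for i, c in enumerate(digits) if c != "0"), None)
    if first is None:
        return 0                      # "0", "0.00", ... treated as having no sig figs
    if "." in s:
        return len(digits) - first    # trailing zeros after a decimal point count
    last = max(i for i, c in enumerate(digits) if c != "0")
    return last - first + 1           # trailing zeros without a decimal point do not

for text in ["0.00420", "4200", "4200.", "30.07"]:
    print(text, count_sig_figs(text))   # 3, 2, 4, 4
```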
Method and Apparatus for Improving the Resolution of Digitally Sampled Analog Data
NASA Technical Reports Server (NTRS)
Liaghati, Amir L. (Inventor)
2017-01-01
A system and method are described for converting an analog signal into a digital signal. The gain and offset of an ADC are dynamically adjusted so that the N bits of input data are assigned to a narrower channel instead of the entire input range of the ADC. This provides greater resolution in the range of interest without generating longer digital data strings.
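A hedged sketch of the idea, with invented voltages and bit depth: pre-scaling the signal so that a narrow region of interest fills the ADC's input range shrinks the volts-per-LSB step size by the same factor.

```python
# Illustrative gain/offset front end ahead of an ideal N-bit ADC. All names and
# numbers are assumptions, not the patented apparatus.

def adc_counts(v, v_min, v_max, n_bits):
    """Ideal N-bit ADC over [v_min, v_max]."""
    levels = 2 ** n_bits
    code = int((v - v_min) / (v_max - v_min) * levels)
    return max(0, min(levels - 1, code))

def front_end(v, roi_lo, roi_hi, v_min, v_max):
    """Apply gain/offset so that [roi_lo, roi_hi] fills [v_min, v_max]."""
    gain = (v_max - v_min) / (roi_hi - roi_lo)
    return (v - roi_lo) * gain + v_min

N = 12
V_MIN, V_MAX = 0.0, 5.0          # ADC input range (volts)
ROI_LO, ROI_HI = 2.0, 2.5        # narrow band of interest

print((V_MAX - V_MIN) / 2 ** N)                 # ~1.22 mV per step, direct
print((ROI_HI - ROI_LO) / 2 ** N)               # ~0.12 mV per step, with front end

v = 2.262
print(adc_counts(v, V_MIN, V_MAX, N))                          # direct conversion
print(adc_counts(front_end(v, ROI_LO, ROI_HI, V_MIN, V_MAX),   # zoomed conversion
                 V_MIN, V_MAX, N))
```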
2012-01-01
Background: Efficient, robust, and accurate genotype imputation algorithms make large-scale application of genomic selection cost effective. An algorithm that imputes alleles or allele probabilities for all animals in the pedigree and for all genotyped single nucleotide polymorphisms (SNP) provides a framework to combine all pedigree, genomic, and phenotypic information into a single-stage genomic evaluation. Methods: An algorithm was developed for imputation of genotypes in pedigreed populations that allows imputation for completely ungenotyped animals and for low-density genotyped animals, accommodates a wide variety of pedigree structures for genotyped animals, imputes unmapped SNP, and works for large datasets. The method involves simple phasing rules, long-range phasing and haplotype library imputation and segregation analysis. Results: Imputation accuracy was high and computational cost was feasible for datasets with pedigrees of up to 25 000 animals. The resulting single-stage genomic evaluation increased the accuracy of estimated genomic breeding values compared to a scenario in which phenotypes on relatives that were not genotyped were ignored. Conclusions: The developed imputation algorithm and software and the resulting single-stage genomic evaluation method provide powerful new ways to exploit imputation and to obtain more accurate genetic evaluations. PMID:22462519
Comparison of different methods for isolation of bacterial DNA from retail oyster tissues
USDA-ARS?s Scientific Manuscript database
Oysters are filter-feeders that bio-accumulate bacteria in water while feeding. To evaluate the bacterial genomic DNA extracted from retail oyster tissues, including the gills and digestive glands, four isolation methods were used. Genomic DNA extraction was performed using the Allmag™ Blood Genomic...
Argimón, Silvia; Konganti, Kranti; Chen, Hao; Alekseyenko, Alexander V.; Brown, Stuart; Caufield, Page W.
2014-01-01
Comparative genomics is a popular method for the identification of microbial virulence determinants, especially since the sequencing of a large number of whole bacterial genomes from pathogenic and non-pathogenic strains has become relatively inexpensive. The bioinformatics pipelines for comparative genomics usually include gene prediction and annotation and can require significant computer power. To circumvent this, we developed a rapid method for genome-scale in silico subtractive hybridization, based on blastn and independent of feature identification and annotation. Whole genome comparisons by in silico genome subtraction were performed to identify genetic loci specific to Streptococcus mutans strains associated with severe early childhood caries (S-ECC), compared to strains isolated from caries-free (CF) children. The genome similarity of the 20 S. mutans strains included in this study, calculated by Simrank k-mer sharing, ranged from 79.5 to 90.9%, confirming this is a genetically heterogeneous group of strains. We identified strain-specific genetic elements in 19 strains, with sizes ranging from 200 bp to 39 kb. These elements contained protein-coding regions with functions mostly associated with mobile DNA. We did not, however, identify any genetic loci consistently associated with dental caries, i.e., shared by all the S-ECC strains and absent in the CF strains. Conversely, we did not identify any genetic loci specific to the healthy group. Comparison of previously published genomes from pathogenic and carriage strains of Neisseria meningitidis with our in silico genome subtraction yielded the same set of genes specific to the pathogenic strains, thus validating our method. Our results suggest that S. mutans strains derived from caries-active or caries-free dentitions cannot be differentiated based on the presence or absence of specific genetic elements. Our in silico genome subtraction method is available as the Microbial Genome Comparison (MGC) tool, with a user-friendly JAVA graphical interface. PMID:24291226
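For readers unfamiliar with k-mer sharing scores such as the Simrank values quoted above, the following simplified sketch conveys the idea; it is not the exact Simrank computation, and the toy sequences are invented.

```python
# Simplified k-mer sharing similarity between two sequences: percentage of
# k-mers of the smaller set that also occur in the other set.

def kmer_set(seq, k=7):
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_sharing(seq_a, seq_b, k=7):
    a, b = kmer_set(seq_a, k), kmer_set(seq_b, k)
    return 100.0 * len(a & b) / min(len(a), len(b))

genome_1 = "ATGCGTACGTTAGCATGCGTACGTTAGCATGCGT"
genome_2 = "ATGCGTACGTTAGCATGCCTACGTTAGCATGCGT"   # one substitution
print(round(kmer_sharing(genome_1, genome_2), 1))
```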
Toward the automated generation of genome-scale metabolic networks in the SEED.
DeJongh, Matthew; Formsma, Kevin; Boillot, Paul; Gould, John; Rycenga, Matthew; Best, Aaron
2007-04-26
Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative genome annotation and analysis. Our method sets the stage for the automated generation of substantially complete metabolic networks for over 400 complete genome sequences currently in the SEED. With each genome that is processed using our tools, the database of common components grows to cover more of the diversity of metabolic pathways. This increases the likelihood that components of reaction networks for subsequently processed genomes can be retrieved from the database, rather than assembled and verified manually.
Zmienko, Agnieszka; Samelak-Czajka, Anna; Kozlowski, Piotr; Szymanska, Maja; Figlerowicz, Marek
2016-11-08
Intraspecies copy number variations (CNVs), defined as unbalanced structural variations of specific genomic loci, ≥1 kb in size, are present in the genomes of animals and plants. A growing number of examples indicate that CNVs may have functional significance and contribute to phenotypic diversity. In the model plant Arabidopsis thaliana at least several hundred protein-coding genes might display CNV; however, locus-specific genotyping studies in this plant have not been conducted. We analyzed the natural CNVs in the region overlapping MSH2 gene that encodes the DNA mismatch repair protein, and AT3G18530 and AT3G18535 genes that encode poorly characterized proteins. By applying multiplex ligation-dependent probe amplification and droplet digital PCR we genotyped those genes in 189 A. thaliana accessions. We found that AT3G18530 and AT3G18535 were duplicated (2-14 times) in 20 and deleted in 101 accessions. MSH2 was duplicated in 12 accessions (up to 12-14 copies) but never deleted. In all but one case, the MSH2 duplications were associated with those of AT3G18530 and AT3G18535. Considering the structure of the CNVs, we distinguished 5 genotypes for this region, determined their frequency and geographical distribution. We defined the CNV breakpoints in 35 accessions with AT3G18530 and AT3G18535 deletions and tandem duplications and showed that they were reciprocal events, resulting from non-allelic homologous recombination between 99 %-identical sequences flanking these genes. The widespread geographical distribution of the deletions supported by the SNP and linkage disequilibrium analyses of the genomic sequence confirmed the recurrent nature of this CNV. We characterized in detail for the first time the complex multiallelic CNV in Arabidopsis genome. The region encoding MSH2, AT3G18530 and AT3G18535 genes shows enormous variation of copy numbers among natural ecotypes, being a remarkable example of high Arabidopsis genome plasticity. We provided the molecular insight into the mechanism underlying the recurrent nature of AT3G18530-AT3G18535 duplications/deletions. We also performed the first direct comparison of the two leading experimental methods, suitable for assessing the DNA copy number status. Our comprehensive case study provides foundation information for further analyses of CNV evolution in Arabidopsis and other plants, and their possible use in plant breeding.
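Droplet digital PCR copy-number calls like those described above rest on a Poisson correction of the positive-droplet fraction; a minimal sketch with invented droplet counts and an assumed two-copy reference locus is shown below.

```python
import math

# Standard Poisson correction for digital PCR: if a fraction p of droplets is
# positive, the mean target copies per droplet is -ln(1 - p). Copy number is
# then reported relative to a reference assumed to be present at two copies.

def copies_per_droplet(positive, total):
    p = positive / total
    return -math.log(1.0 - p)

def copy_number(target_pos, ref_pos, total, reference_copies=2):
    lam_target = copies_per_droplet(target_pos, total)
    lam_ref = copies_per_droplet(ref_pos, total)
    return reference_copies * lam_target / lam_ref

total_droplets = 15000          # illustrative counts, not data from the study
print(round(copy_number(target_pos=9000, ref_pos=3500, total=total_droplets), 2))
```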
Optimized gene editing technology for Drosophila melanogaster using germ line-specific Cas9.
Ren, Xingjie; Sun, Jin; Housden, Benjamin E; Hu, Yanhui; Roesel, Charles; Lin, Shuailiang; Liu, Lu-Ping; Yang, Zhihao; Mao, Decai; Sun, Lingzhu; Wu, Qujie; Ji, Jun-Yuan; Xi, Jianzhong; Mohr, Stephanie E; Xu, Jiang; Perrimon, Norbert; Ni, Jian-Quan
2013-11-19
The ability to engineer genomes in a specific, systematic, and cost-effective way is critical for functional genomic studies. Recent advances using the CRISPR-associated single-guide RNA system (Cas9/sgRNA) illustrate the potential of this simple system for genome engineering in a number of organisms. Here we report an effective and inexpensive method for genome DNA editing in Drosophila melanogaster whereby plasmid DNAs encoding short sgRNAs under the control of the U6b promoter are injected into transgenic flies in which Cas9 is specifically expressed in the germ line via the nanos promoter. We evaluate the off-targets associated with the method and establish a Web-based resource, along with a searchable, genome-wide database of predicted sgRNAs appropriate for genome engineering in flies. Finally, we discuss the advantages of our method in comparison with other recently published approaches.
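As a small illustration of the kind of scan behind genome-wide sgRNA resources, the sketch below enumerates candidate 20-nt protospacers adjacent to an NGG PAM on the forward strand. Off-target scoring, the reverse strand, and the U6b/nanos machinery described above are omitted, and the example sequence is invented.

```python
import re

# Enumerate 20-nt protospacers immediately 5' of an NGG PAM (forward strand only).
def candidate_protospacers(seq):
    seq = seq.upper()
    sites = []
    for match in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", seq):   # overlapping hits
        sites.append((match.start(1), match.group(1)))
    return sites

dna = "TTACGGATCGATTACGCGATCCGATCGTTAGCCGGTACGATCGATCGGGT"
for pos, protospacer in candidate_protospacers(dna):
    print(pos, protospacer)
```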
Recovering complete and draft population genomes from metagenome datasets
Sangwan, Naseer; Xia, Fangfang; Gilbert, Jack A.
2016-03-08
Assembly of metagenomic sequence data into microbial genomes is of fundamental value to improving our understanding of microbial ecology and metabolism by elucidating the functional potential of hard-to-culture microorganisms. Here, we provide a synthesis of available methods to bin metagenomic contigs into species-level groups and highlight how genetic diversity, sequencing depth, and coverage influence binning success. Despite the computational cost on application to deeply sequenced complex metagenomes (e.g., soil), covarying patterns of contig coverage across multiple datasets significantly improves the binning process. We also discuss and compare current genome validation methods and reveal how these methods tackle the problem of chimeric genome bins, i.e., sequences from multiple species. Finally, we explore how population genome assembly can be used to uncover biogeographic trends and to characterize the effect of in situ functional constraints on genome-wide evolution.
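A toy version of the coverage-covariation signal discussed above: contigs whose per-sample coverage profiles correlate strongly across datasets are placed in the same bin. Real binners also use sequence composition; the correlation threshold and coverage values here are invented.

```python
from statistics import mean

# Group contigs whose coverage profiles across samples are highly correlated.
def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bin_contigs(coverage, r_min=0.95):
    bins = []                                # each bin: list of contig names
    for contig, profile in coverage.items():
        for b in bins:
            if pearson(profile, coverage[b[0]]) >= r_min:
                b.append(contig)
                break
        else:
            bins.append([contig])
    return bins

coverage = {                                 # per-contig coverage across 4 samples
    "c1": [10.0, 52.0, 7.0, 31.0],
    "c2": [11.2, 55.1, 6.5, 30.2],           # covaries with c1 -> same genome bin
    "c3": [40.0, 3.0, 22.0, 18.0],
}
print(bin_contigs(coverage))                 # [['c1', 'c2'], ['c3']]
```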
Parallel Continuous Flow: A Parallel Suffix Tree Construction Tool for Whole Genomes
Farreras, Montse
2014-01-01
The construction of suffix trees for very long sequences is essential for many applications, and it plays a central role in the bioinformatic domain. With the advent of modern sequencing technologies, biological sequence databases have grown dramatically. The methodologies required to analyze these data have also become more complex every day, requiring fast queries to multiple genomes. In this article, we present parallel continuous flow (PCF), a parallel suffix tree construction method that is suitable for very long genomes. We tested our method for the suffix tree construction of the entire human genome, about 3 GB. We showed that PCF can scale gracefully as the size of the input genome grows. Our method can work with an efficiency of 90% with 36 processors and 55% with 172 processors. We can index the human genome in 7 minutes using 172 processes. PMID:24597675
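A quick arithmetic check of the reported scaling, assuming the 55% efficiency figure applies to the 172-process, 7-minute run: since parallel efficiency is speedup divided by processor count, the implied single-process indexing time can be back-calculated.

```python
# Back-of-the-envelope check, assuming efficiency E = T1 / (p * Tp).
def implied_serial_minutes(parallel_minutes, processes, efficiency):
    return parallel_minutes * processes * efficiency   # T1 = E * p * Tp

print(implied_serial_minutes(7, 172, 0.55))   # ~662 minutes, i.e. roughly 11 hours serial
```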
Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies
Zhang, Yu; Liu, Jun S.
2011-01-01
Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scope. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adapted to estimate the false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
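The following is a deliberately simplified stand-in for the idea of adjusting p-values under correlated tests, not the authors' Poisson de-clumping estimator: an assumed "effective number of independent tests" replaces the raw SNP count in a Sidak/Poisson-style correction.

```python
import math

# Simplified genome-wide adjustment: probability that at least one of m_eff
# effectively independent tests reaches p_raw by chance. m_eff is assumed here.
def genomewide_adjusted_p(p_raw, m_eff):
    return 1.0 - math.exp(-m_eff * p_raw)

m_snps = 2_500_000          # genotyped SNPs (illustrative)
m_eff = 1_000_000           # assumed effective number of independent tests after LD
for p in (1e-8, 5e-8, 1e-6):
    print(p, round(genomewide_adjusted_p(p, m_eff), 4))
```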
SIDR: simultaneous isolation and parallel sequencing of genomic DNA and total RNA from single cells.
Han, Kyung Yeon; Kim, Kyu-Tae; Joung, Je-Gun; Son, Dae-Soon; Kim, Yeon Jeong; Jo, Areum; Jeon, Hyo-Jeong; Moon, Hui-Sung; Yoo, Chang Eun; Chung, Woosung; Eum, Hye Hyeon; Kim, Sangmin; Kim, Hong Kwan; Lee, Jeong Eon; Ahn, Myung-Ju; Lee, Hae-Ock; Park, Donghyun; Park, Woong-Yang
2018-01-01
Simultaneous sequencing of the genome and transcriptome at the single-cell level is a powerful tool for characterizing genomic and transcriptomic variation and revealing correlative relationships. However, it remains technically challenging to analyze both the genome and transcriptome in the same cell. Here, we report a novel method for simultaneous isolation of genomic DNA and total RNA (SIDR) from single cells, achieving high recovery rates with minimal cross-contamination, as is crucial for accurate description and integration of the single-cell genome and transcriptome. For reliable and efficient separation of genomic DNA and total RNA from single cells, the method uses hypotonic lysis to preserve nuclear lamina integrity and subsequently captures the cell lysate using antibody-conjugated magnetic microbeads. Evaluating the performance of this method using real-time PCR demonstrated that it efficiently recovered genomic DNA and total RNA. Thorough data quality assessments showed that DNA and RNA simultaneously fractionated by the SIDR method were suitable for genome and transcriptome sequencing analysis at the single-cell level. The integration of single-cell genome and transcriptome sequencing by SIDR (SIDR-seq) showed that genetic alterations, such as copy-number and single-nucleotide variations, were more accurately captured by single-cell SIDR-seq compared with conventional single-cell RNA-seq, although copy-number variations positively correlated with the corresponding gene expression levels. These results suggest that SIDR-seq is potentially a powerful tool to reveal genetic heterogeneity and phenotypic information inferred from gene expression patterns at the single-cell level. © 2018 Han et al.; Published by Cold Spring Harbor Laboratory Press.
Puppa, Giacomo; Risio, Mauro; Sheahan, Kieran; Vieth, Michael; Zlobec, Inti; Lugli, Alessandro; Pecori, Sara; Wang, Lai Mun; Langner, Cord; Mitomi, Hiroyuki; Nakamura, Takatoshi; Watanabe, Masahiko; Ueno, Hideki; Chasle, Jacques; Senore, Carlo; Conley, Stephen A; Herlin, Paulette; Lauwers, Gregory Y
2011-01-01
In histopathology, the quantitative assessment of various morphologic features is based on methods originally conceived for specific areas observed through the microscope used. Failure to reproduce the same reference field of view using a different microscope will change the score assessed. Visualization of a digital slide on a screen through a dedicated viewer allows selection of the magnification. However, the field of view is rectangular, unlike the circular field of optical microscopy. In addition, the size of the selected area is not evident, and must be calculated. A digital slide morphometric system was conceived to reproduce the various methods published for assessing tumor budding in colorectal cancer. Eighteen international experts in colorectal cancer were invited to participate in a web-based study by assessing tumor budding with five different methods in 100 digital slides. The specific areas to be tested by each method were marked by colored circles. The areas were grouped in a target-like pattern and then saved as an .xml file. When a digital slide was opened, the .xml file was imported in order to perform the measurements. Since the morphometric tool is composed of layers that can be freely moved on top of the digital slide, the technique was named digital slide dynamic morphometry. Twelve investigators completed the task, the majority of them performing the multiple evaluations of each of the cases in less than 12 minutes. Digital slide dynamic morphometry has various potential applications and might be a useful tool for the assessment of histologic parameters originally conceived for optical microscopy that need to be quantified.
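The area bookkeeping alluded to above (a circular optical field versus a rectangular digital-slide selection) can be sketched as follows; the field number, objective magnification, and pixel spacing are typical example values, not the study's configuration.

```python
import math

# Circular optical field area vs. rectangular digital-slide region area.
def optical_field_area_mm2(field_number_mm, objective_mag):
    diameter = field_number_mm / objective_mag
    return math.pi * (diameter / 2.0) ** 2

def digital_region_area_mm2(width_px, height_px, microns_per_pixel):
    return (width_px * microns_per_pixel / 1000.0) * (height_px * microns_per_pixel / 1000.0)

print(round(optical_field_area_mm2(field_number_mm=22, objective_mag=20), 3))   # ~0.95 mm^2
print(round(digital_region_area_mm2(2000, 1200, microns_per_pixel=0.5), 3))     # 0.6 mm^2
```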
The future of the application of the Bi-Digital O-Ring Test in Sports Psychology.
Ozerkan, Kemal Nuri
2005-01-01
The Bi-Digital O-Ring Test, originally developed by Dr. Omura, utilizes changes in the degree of strength of voluntary movements of muscles of the fingers under a definite muscle tonus, making Bi-Digital O-Rings, as an indicator of pathology in the body. Research in Sports Psychology can use the classical measurement methods and the Bi-Digital O-Ring Test method comparatively and thus produce new findings regarding the reliability and certainty of the Bi-Digital O-Ring Test. It seems probable that by using the non-invasive Bi-Digital O-Ring Test, it is possible to measure enzymes, hormones and neuro-transmitters instantaneously and assess a sports person's actual psychological and physiological performance, and thereby help them reach their peak performance levels during both exercise and competitions.
Thorisdottir, Rannveig Linda; Sundgren, Johanna; Sheikh, Rafi; Blohmé, Jonas; Hammar, Björn; Kjellström, Sten; Malmsjö, Malin
2018-05-28
To evaluate the digital KM screen computerized ocular motility test and to compare it with conventional nondigital techniques using the Hess and Lees screens. Patients with known ocular deviations and a visual acuity of at least 20/100 underwent testing using the digital KM screen and the Hess and Lees screen tests. The examination duration, the subjectively perceived difficulty, and the patient's method of choice were compared for the three tests. The accuracy of test results was compared using Bland-Altman plots between testing methods. A total of 19 patients were included. Examination with the digital KM screen test was less time-consuming than tests with the Hess and Lees screens (P < 0.001 and P = 0.003, resp., compared with the digital KM screen). Patients found the test with the digital KM screen easier to perform than the Lees screen test (P = 0.009) but of similar difficulty to the Hess screen test (P = 0.203). The majority of the patients (83%) preferred the digital KM screen test to both of the other screen methods (P = 0.008). Bland-Altman plots showed that the results obtained with all three tests were similar. The digital KM screen is accurate and time saving and provides similar results to Lees and Hess screen testing. It also has the advantage of a digital data analysis and registration. Copyright © 2018 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
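The Bland-Altman comparison used above reduces to a bias (the mean of paired differences) and 95% limits of agreement (bias ± 1.96 SD); a minimal sketch with invented paired measurements is shown below.

```python
from statistics import mean, stdev

# Bland-Altman bias and 95% limits of agreement for two measurement methods.
def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    loa = 1.96 * stdev(diffs)
    return bias, (bias - loa, bias + loa)

km_screen = [4.0, 6.5, 2.0, 8.0, 5.5]      # hypothetical deviations (degrees)
hess_screen = [4.5, 6.0, 2.5, 7.5, 6.0]
bias, (lower, upper) = bland_altman(km_screen, hess_screen)
print(round(bias, 2), round(lower, 2), round(upper, 2))
```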
In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions.
Ender, Andreas; Attin, Thomas; Mehl, Albert
2016-03-01
Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; digitized scannable vinylsiloxanether, VSES-D; and irreversible hydrocolloid, ALG) and 7 digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; Lava COS, LAV; Lava True Definition Scanner, T-Def; 3Shape Trios, TRI; and 3Shape Trios Color, TRC) techniques. Impressions were made 3 times each in 5 participants (N=15). The impressions were then compared within and between the test groups. The cast surfaces were measured point-to-point using the signed nearest neighbor method. Precision was calculated from the (90%-10%)/2 percentile value. The precision ranged from 12.3 μm (VSE) to 167.2 μm (ALG), with the highest precision in the VSE and VSES groups. The deviation pattern varied distinctly according to the impression method. Conventional impressions showed the highest accuracy across the complete dental arch in all groups, except for the ALG group. Conventional and digital impression methods differ significantly in complete-arch accuracy. Digital impression systems had higher local deviations within the complete-arch cast; however, they achieved equal or higher precision than some conventional impression materials. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
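The precision figure defined above, (90th percentile - 10th percentile)/2 of the signed nearest-neighbour deviations, can be sketched as follows; the simulated deviation field is an assumption for demonstration.

```python
import numpy as np

# Precision as half the spread between the 90th and 10th percentiles of the
# signed point-to-point deviations between superimposed casts.
def precision_um(signed_deviations):
    p90, p10 = np.percentile(signed_deviations, [90, 10])
    return (p90 - p10) / 2.0

rng = np.random.default_rng(0)
deviations = rng.normal(loc=0.0, scale=15.0, size=50_000)   # simulated microns
print(round(precision_um(deviations), 1))    # ~19 um for a 15-um-SD error field
```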
2013-01-01
Background: Understanding the biological mechanisms used by microorganisms for plant biomass degradation is of considerable biotechnological interest. Despite the growing number of sequenced (meta)genomes of plant biomass-degrading microbes, there is currently no technique for the systematic determination of the genomic components of this process from these data. Results: We describe a computational method for the discovery of the protein domains and CAZy families involved in microbial plant biomass degradation. Our method furthermore accurately predicts the capability to degrade plant biomass for microbial species from their genome sequences. Application to a large, manually curated data set of microbial degraders and non-degraders identified gene families of enzymes known by physiological and biochemical tests to be implicated in cellulose degradation, such as GH5 and GH6. Additionally, genes of enzymes that degrade other plant polysaccharides, such as hemicellulose, pectins and oligosaccharides, were found, as well as gene families which have not previously been related to the process. For draft genomes reconstructed from a cow rumen metagenome our method predicted Bacteroidetes-affiliated species and a relative of a known plant biomass degrader to be plant biomass degraders. This was supported by the presence of genes encoding enzymatically active glycoside hydrolases in these genomes. Conclusions: Our results show the potential of the method for generating novel insights into microbial plant biomass degradation from (meta-)genome data, where there is an increasing production of genome assemblages for uncultured microbes. PMID:23414703
Quick, Josh; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J
2018-01-01
Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples without isolation remains challenging for viruses such as Zika, where metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence complete genomes comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimised library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved starting with clinical samples in 1-2 days following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. PMID:28538739
Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping
Yang, Jiquan; Feng, Chunmei; Yang, Jianfei; Zhu, Liya; Guo, Aiqing
2016-01-01
An integrated fabrication framework is presented for building heterogeneous objects (HEO) using digital microdroplet injection technology and rapid prototyping. A design and manufacturing method for heterogeneous-material parts, covering both structure and material, was used to change the traditional process. The net node method was used for digital modeling, which can configure multiple materials in time. The relationship between material, color, and jetting nozzle was established. The main contributions are to combine structure, material, and visualization in one process and to provide a digital model for manufacture. From the given model, it is concluded that the method is effective for HEO. Using microdroplet rapid prototyping and the model given in the paper, HEO can essentially be obtained. The model could be used in 3D biomanufacturing. PMID:26981110
Seminar on Understanding Digital Control and Analysis in Vibration Test Systems
NASA Technical Reports Server (NTRS)
1975-01-01
The advantages of the digital methods over the analog vibration methods are demonstrated. The following topics are covered: (1) methods of computer-controlled random vibration and reverberation acoustic testing, (2) methods of computer-controlled sinewave vibration testing, and (3) methods of computer-controlled shock testing. General algorithms are described in the form of block diagrams and flow diagrams.
Identification of copy number variants in whole-genome data using Reference Coverage Profiles
Glusman, Gustavo; Severson, Alissa; Dhankani, Varsha; Robinson, Max; Farrah, Terry; Mauldin, Denise E.; Stittrich, Anna B.; Ament, Seth A.; Roach, Jared C.; Brunkow, Mary E.; Bodian, Dale L.; Vockley, Joseph G.; Shmulevich, Ilya; Niederhuber, John E.; Hood, Leroy
2015-01-01
The identification of DNA copy numbers from short-read sequencing data remains a challenge for both technical and algorithmic reasons. The raw data for these analyses are measured in tens to hundreds of gigabytes per genome; transmitting, storing, and analyzing such large files is cumbersome, particularly for methods that analyze several samples simultaneously. We developed a very efficient representation of depth of coverage (150–1000× compression) that enables such analyses. Current methods for analyzing variants in whole-genome sequencing (WGS) data frequently miss copy number variants (CNVs), particularly hemizygous deletions in the 1–100 kb range. To fill this gap, we developed a method to identify CNVs in individual genomes, based on comparison to joint profiles pre-computed from a large set of genomes. We analyzed depth of coverage in over 6000 high quality (>40×) genomes. The depth of coverage has strong sequence-specific fluctuations only partially explained by global parameters like %GC. To account for these fluctuations, we constructed multi-genome profiles representing the observed or inferred diploid depth of coverage at each position along the genome. These Reference Coverage Profiles (RCPs) take into account the diverse technologies and pipeline versions used. Normalization of the scaled coverage to the RCP followed by hidden Markov model (HMM) segmentation enables efficient detection of CNVs and large deletions in individual genomes. Use of pre-computed multi-genome coverage profiles improves our ability to analyze each individual genome. We make available RCPs and tools for performing these analyses on personal genomes. We expect the increased sensitivity and specificity for individual genome analysis to be critical for achieving clinical-grade genome interpretation. PMID:25741365
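Two of the ideas above can be sketched compactly: run-length encoding the per-base depth of coverage (the source of the large compression factors) and normalising an individual's scaled coverage against a reference coverage profile so copy-number changes stand out. The depths, the flat reference profile, and the ratio interpretation are illustrative, and the HMM segmentation step is omitted.

```python
# Toy coverage compression and RCP normalisation.
def run_length_encode(depths):
    encoded, prev, count = [], None, 0
    for d in depths:
        if d == prev:
            count += 1
        else:
            if prev is not None:
                encoded.append((prev, count))
            prev, count = d, 1
    encoded.append((prev, count))
    return encoded

def normalised_ratio(sample_depth, rcp_depth, sample_mean, rcp_mean=1.0):
    # ~1.0 for two copies, ~0.5 for a hemizygous deletion, ~1.5 for a duplication
    return (sample_depth / sample_mean) / (rcp_depth / rcp_mean)

depths = [40, 40, 40, 41, 41, 20, 20, 20, 40, 40]
print(run_length_encode(depths))                         # [(40, 3), (41, 2), (20, 3), (40, 2)]
print([round(normalised_ratio(d, 1.0, sample_mean=40.0), 2) for d in depths])
```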
Non-invasive prenatal testing using cell-free fetal DNA in maternal circulation.
Liao, Gary J W; Gronowski, Ann M; Zhao, Zhen
2014-01-20
The identification of cell-free fetal DNA (cffDNA) in maternal circulation has made non-invasive prenatal testing (NIPT) possible. Maternal plasma cell-free DNA is a mixture of maternal and fetal DNA, of which fetal DNA represents a minor fraction. Therefore, methods with high sensitivity and precision are required to detect and differentiate fetal DNA from the large background of maternal DNA. In recent years, technical advances in the molecular analysis of fetal DNA (e.g., digital PCR and massively parallel sequencing (MPS)) have enabled the successful implementation of noninvasive testing into clinical practice, such as fetal sex assessment, RhD genotyping, and fetal chromosomal aneuploidy detection. With the ability to decipher the entire fetal genome from maternal plasma DNA, we foresee that an increased number of non-invasive prenatal tests will be available for detecting many single-gene disorders in the near future. This review briefly summarizes the technical aspects of NIPT and its application in clinical practice.
Digital signal processing at Bell Labs-Foundations for speech and acoustics research
NASA Astrophysics Data System (ADS)
Rabiner, Lawrence R.
2004-05-01
Digital signal processing (DSP) is a fundamental tool for much of the research that has been carried out at Bell Labs in the areas of speech and acoustics research. The fundamental bases for DSP include the sampling theorem of Nyquist, the method for digitization of analog signals by Shannon et al., methods of spectral analysis by Tukey, the cepstrum by Bogert et al., and the FFT by Tukey (and Cooley of IBM). Essentially all of these early foundations of DSP came out of the Bell Labs Research Lab in the 1930s, 1940s, 1950s, and 1960s. This fundamental research was motivated by key applications (mainly in the areas of speech, sonar, and acoustics) that led to novel design methods for digital filters (Kaiser, Golden, Rabiner, Schafer), spectrum analysis methods (Rabiner, Schafer, Allen, Crochiere), fast convolution methods based on the FFT (Helms, Bergland), and advanced digital systems used to implement telephony channel banks (Jackson, McDonald, Freeny, Tewksbury). This talk summarizes the key contributions to DSP made at Bell Labs, and illustrates how DSP was utilized in the areas of speech and acoustics research. It also shows the vast, worldwide impact of this DSP research on modern consumer electronics.
Quick, Joshua; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah C; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno R; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J
2017-06-01
Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples (i.e., without isolation and culture) remains challenging for viruses such as Zika, for which metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence-complete genomes, comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimized library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an Internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved in 1-2 d by starting with clinical samples and following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. The protocol can be used to sequence other viral genomes using the online Primal Scheme primer designer software. It is suitable for sequencing either RNA or DNA viruses in the field during outbreaks or as an inexpensive, convenient method for use in the lab.
The ecoresponsive genome of Daphnia pulex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colbourne, John K.; Pfrender, Michael E.; Gilbert, Donald
2011-02-04
This document provides supporting material related to the sequencing of the ecoresponsive genome of Daphnia pulex. This material includes information on materials and methods and supporting text, as well as supplemental figures, tables, and references. The coverage of materials and methods addresses genome sequence, assembly, and mapping to chromosomes, gene inventory, attributes of a compact genome, the origin and preservation of Daphnia pulex genes, implications of Daphnia's genome structure, evolutionary diversification of duplicated genes, functional significance of expanded gene families, and ecoresponsive genes. Supporting text covers chromosome studies, gene homology among Daphnia genomes, micro-RNA and transposable elements and the 46 Daphnia pulex opsins. 36 figures, 50 tables, 183 references.
Nguyen, Anh-Dung; Boling, Michelle C; Slye, Carrie A; Hartley, Emily M; Parisi, Gina L
2013-01-01
Accurate, efficient, and reliable measurement methods are essential to prospectively identify risk factors for knee injuries in large cohorts. To determine tester reliability using digital photographs for the measurement of static lower extremity alignment (LEA) and whether values quantified with an electromagnetic motion-tracking system are in agreement with those quantified with clinical methods and digital photographs. Descriptive laboratory study. Laboratory. Thirty-three individuals participated and included 17 (10 women, 7 men; age = 21.7 ± 2.7 years, height = 163.4 ± 6.4 cm, mass = 59.7 ± 7.8 kg, body mass index = 23.7 ± 2.6 kg/m2) in study 1, in which we examined the reliability between clinical measures and digital photographs in 1 trained and 1 novice investigator, and 16 (11 women, 5 men; age = 22.3 ± 1.6 years, height = 170.3 ± 6.9 cm, mass = 72.9 ± 16.4 kg, body mass index = 25.2 ± 5.4 kg/m2) in study 2, in which we examined the agreement among clinical measures, digital photographs, and an electromagnetic tracking system. We evaluated measures of pelvic angle, quadriceps angle, tibiofemoral angle, genu recurvatum, femur length, and tibia length. Clinical measures were assessed using clinically accepted methods. Frontal- and sagittal-plane digital images were captured and imported into a computer software program. Anatomic landmarks were digitized using an electromagnetic tracking system to calculate static LEA. Intraclass correlation coefficients and standard errors of measurement were calculated to examine tester reliability. We calculated 95% limits of agreement and used Bland-Altman plots to examine agreement among clinical measures, digital photographs, and an electromagnetic tracking system. Using digital photographs, fair to excellent intratester (intraclass correlation coefficient range = 0.70-0.99) and intertester (intraclass correlation coefficient range = 0.75-0.97) reliability were observed for static knee alignment and limb-length measures. An acceptable level of agreement was observed between clinical measures and digital pictures for limb-length measures. When comparing clinical measures and digital photographs with the electromagnetic tracking system, an acceptable level of agreement was observed in measures of static knee angles and limb-length measures. The use of digital photographs and an electromagnetic tracking system appears to be an efficient and reliable method to assess static knee alignment and limb-length measurements.
Hitomi, Yuki; Tokunaga, Katsushi
2017-01-01
Human genome variation may cause differences in traits and disease risks. Disease-causal/susceptible genes and variants for both common and rare diseases can be detected by comprehensive whole-genome analyses, such as whole-genome sequencing (WGS), using next-generation sequencing (NGS) technology and genome-wide association studies (GWAS). Here, in addition to the application of an NGS as a whole-genome analysis method, we summarize approaches for the identification of functional disease-causal/susceptible variants from abundant genetic variants in the human genome and methods for evaluating their functional effects in human diseases, using an NGS and in silico and in vitro functional analyses. We also discuss the clinical applications of the functional disease causal/susceptible variants to personalized medicine.
Treangen, Todd J; Ondov, Brian D; Koren, Sergey; Phillippy, Adam M
2014-01-01
Whole-genome sequences are now available for many microbial species and clades; however, existing whole-genome alignment methods are limited in their ability to compare multiple sequences simultaneously. Here we present the Harvest suite of core-genome alignment and visualization tools for the rapid and simultaneous analysis of thousands of intraspecific microbial strains. Harvest includes Parsnp, a fast core-genome multi-aligner, and Gingr, a dynamic visual platform. Together they provide interactive core-genome alignments, variant calls, recombination detection, and phylogenetic trees. Using simulated and real data we demonstrate that our approach exhibits unrivaled speed while maintaining the accuracy of existing methods. The Harvest suite is open-source and freely available from: http://github.com/marbl/harvest
Chen, Y-J; Chen, S-K; Huang, H-W; Yao, C-C; Chang, H-F
2004-09-01
To compare the cephalometric landmark identification on softcopy and hardcopy of direct digital cephalography acquired by a storage-phosphor (SP) imaging system. Ten digital cephalograms and their conventional counterparts, hardcopies on transparent blue film, were obtained by an SP imaging system and a dye sublimation printer. Twelve orthodontic residents identified 19 cephalometric landmarks on monitor-displayed SP digital images with a computer-aided method and on their hardcopies with the conventional method. The x- and y-coordinates for each landmark, indicating the horizontal and vertical positions, were analysed to assess the reliability of landmark identification and evaluate the concordance of the landmark locations in softcopy and hardcopy of SP digital cephalometric radiography. For each of the 19 landmarks, the location differences as well as the horizontal and vertical components were statistically significant between SP digital cephalometric radiography and its hardcopy. Smaller interobserver errors on SP digital images than those on their hardcopies were noted for all the landmarks, except point Go in the vertical direction. The scatter-plots demonstrate the characteristic distribution of the interobserver error in both horizontal and vertical directions. Generally, the dispersion of interobserver error on SP digital cephalometric radiography is less than that on its hardcopy with the conventional method. SP digital cephalometric radiography could yield a level of performance in landmark identification better than or comparable to that of its hardcopy, except for point Go in the vertical direction.
Guizard, Sébastien; Piégu, Benoît; Arensburger, Peter; Guillou, Florian; Bigot, Yves
2016-08-19
The program RepeatMasker and the database Repbase-ISB are part of the most widely used strategy for annotating repeats in animal genomes. They have been used to show that avian genomes have a lower repeat content (8-12%) than the sequenced genomes of many vertebrate species (30-55%). However, the efficiency of such library-based strategies is dependent on the quality and completeness of the sequences in the database that is used. An alternative to these library-based methods is to identify repeats de novo. These alternative methods have existed for at least a decade and may be more powerful than the library-based methods. We have used an annotation strategy involving several complementary de novo tools to determine the repeat content of the model genome galGal4 (1.04 Gbp), including identifying simple sequence repeats (SSRs), tandem repeats and transposable elements (TEs). We annotated over one Gbp of the galGal4 genome and showed that it is composed of approximately 19% SSR and TE repeats. Furthermore, we estimate that the actual genome of the red jungle fowl contains about 31-35% repeats. We find that library-based methods tend to overestimate TE diversity. These results have a major impact on the current understanding of repeat distributions throughout chromosomes in the red jungle fowl. Our results are a proof of concept of the reliability of using de novo tools to annotate repeats in large animal genomes. They have also revealed issues that will need to be resolved in order to develop gold-standard methodologies for annotating repeats in eukaryote genomes.
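As a minimal example of the de novo flavour of repeat detection, the sketch below scans for simple sequence repeats as runs of a short motif exceeding a length threshold. The thresholds are invented, and real pipelines use far more refined rules, including collapsing runs whose motif is itself periodic.

```python
import re

# Find runs of a 1-6 bp motif whose total length passes a minimum threshold.
def find_ssrs(seq, min_len=12, max_motif=3):
    seq = seq.upper()
    hits = []
    for motif_len in range(1, max_motif + 1):
        extra = max(1, min_len // motif_len - 1)              # copies beyond the first
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, extra))
        for m in pattern.finditer(seq):
            if len(m.group(0)) >= min_len:
                hits.append((m.start(), m.group(1), len(m.group(0))))
    return hits

print(find_ssrs("TTTACACACACACACGGATGATGATGATGCC"))
# [(3, 'AC', 12), (16, 'GAT', 12)]
```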
A Computer-Aided Diagnosis System for Breast Cancer Combining Digital Mammography and Genomics
2006-05-01
Huang, "Breast cancer diagnosis using self-organizing map for sonography." Ultrasound Med. Biol. 26, 405 (2000). 20 K. Horsch, M.L. Giger, L.A. Venta ...L.A. Venta , "Performance of computer-aided diagnosis in the interpretation of lesions on breast sonography." Acad Radiol 11, 272 (2004). 22 W. Chen...418. 27. Horsch K, Giger ML, Vyborny CJ, Venta LA. Performance of computer-aided diagnosis in the interpretation of lesions on breast sonography
Digital Documentation: Using Computers to Create Multimedia Reports.
ERIC Educational Resources Information Center
Speitel, Tom; And Others
1996-01-01
Describes methods for creating integrated multimedia documents using recent advances in print, audio, and video digitization that bring added usefulness to computers as data acquisition, processing, and presentation tools. Discusses advantages of digital documentation. (JRH)
Intermittent/transient fault phenomena in digital systems
NASA Technical Reports Server (NTRS)
Masson, G. M.
1977-01-01
An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.
Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B
2006-08-01
Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
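A highly simplified sketch of the borrowing-strength idea, not the authors' estimator: treat the linkage statistics reported by other scans at a locus as a normal prior and shrink the new study's statistic toward it in a standard normal-normal update. All numbers are hypothetical.

```python
from statistics import mean, variance

# Normal-normal shrinkage of a new study's linkage statistic toward a prior
# built from other scans' statistics at the same genomic position.
def eb_update(own_stat, own_var, other_stats):
    prior_mean = mean(other_stats)
    prior_var = variance(other_stats) if len(other_stats) > 1 else own_var
    w = prior_var / (prior_var + own_var)          # weight on the new study's own estimate
    post_mean = w * own_stat + (1 - w) * prior_mean
    post_var = w * own_var
    return post_mean, post_var

# hypothetical LOD-like statistics at one position, from this scan and three others
print(eb_update(own_stat=2.8, own_var=0.9, other_stats=[1.9, 2.4, 3.1]))
```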
Transitioning towards the Digital Native: Examining Digital Technologies, Video Games, and Learning
ERIC Educational Resources Information Center
Salomon, John
2010-01-01
Although digital technologies have become commonplace among people who grew up around them, little is known about the effect that such technology will have on learners or its impact on traditional methods of educational delivery. This dissertation examines how certain technologies affect digital natives and seeks to understand specific…
ERIC Educational Resources Information Center
Bouck, Emily C.; Weng, Pei-Lin; Satsangi, Rajiv
2016-01-01
Introduction: Digital textbooks are increasingly marketed and used, yet little research examines this medium. Within the limited research, even less investigates the role of digital textbooks in mathematics--a challenging content area for many students, but especially for students with visual impairments. Methods: Through a qualitative analysis,…
Digital Advances in Contemporary Audio Production.
ERIC Educational Resources Information Center
Shields, Steven O.
Noting that a revolution in sonic high fidelity occurred during the 1980s as digital-based audio production methods began to replace traditional analog modes, this paper offers both an overview of digital audio theory and descriptions of some of the related digital production technologies that have begun to emerge from the mating of the computer…
Regeneration and repair of human digits and limbs: fact and fiction
Cheng, Tsun‐Chih
2015-01-01
A variety of digit and limb repair and reconstruction methods have been used in different clinical settings, but regeneration remains an item on every plastic surgeon's "wish list." Although surgical salvage techniques are continually being improved, unreplantable digits and limbs are still abundant. We comprehensively review the structural and functional salvage methods in clinical practice, from the peeling injuries of small distal fingertips to multisegmented amputated limbs, and the developmental and tissue engineering approaches for regenerating human digits and limbs in the laboratory. Although surgical techniques have forged ahead, there are still situations in which digits and limbs are unreplantable. Advances in the field are delineated, and the regeneration processes of salamander limbs, lizard tails, and mouse digits and each component of tissue engineering approaches for digit- and limb-building are discussed. Although the current technology is promising, there are many challenges in human digit and limb regeneration. We hope this review inspires research on the critical gap between clinical and basic science, and leads to more sophisticated digit and limb loss rescue and regeneration innovations. PMID:27499873
Advances in yeast genome engineering.
David, Florian; Siewers, Verena
2015-02-01
Genome engineering based on homologous recombination has been applied to yeast for many years. However, the growing importance of yeast as a cell factory in metabolic engineering and chassis in synthetic biology demands methods for fast and efficient introduction of multiple targeted changes such as gene knockouts and introduction of multistep metabolic pathways. In this review, we summarize recent improvements of existing genome engineering methods, the development of novel techniques, for example for advanced genome redesign and evolution, and the importance of endonucleases as genome engineering tools. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
Jorjani, Hadi; Zavolan, Mihaela
2014-04-01
Accurate identification of transcription start sites (TSSs) is an essential step in the analysis of transcription regulatory networks. In higher eukaryotes, the capped analysis of gene expression technology enabled comprehensive annotation of TSSs in genomes such as those of mice and humans. In bacteria, an equivalent approach, termed differential RNA sequencing (dRNA-seq), has recently been proposed, but the application of this approach to a large number of genomes is hindered by the paucity of computational analysis methods. With few exceptions, when the method has been used, annotation of TSSs has been largely done manually. In this work, we present a computational method called 'TSSer' that enables the automatic inference of TSSs from dRNA-seq data. The method rests on a probabilistic framework for identifying genomic positions that are both preferentially enriched in the dRNA-seq data and preferentially captured relative to neighboring genomic regions. Evaluating our approach for TSS calling on several publicly available datasets, we find that TSSer achieves high consistency with the curated lists of annotated TSSs, but identifies many additional TSSs. Therefore, TSSer can accelerate genome-wide identification of TSSs in bacterial genomes and can aid in further characterization of bacterial transcription regulatory networks. TSSer is freely available under GPL license at http://www.clipz.unibas.ch/TSSer/index.php
DOE Office of Scientific and Technical Information (OSTI.GOV)
Punnoose, Ratish J.; Armstrong, Robert C.; Wong, Matthew H.
Formal methods have come into wide use because of their effectiveness in verifying "safety and security" requirements of digital systems, a set of requirements for which testing is mostly ineffective. Formal methods are routinely used in the design and verification of high-consequence digital systems in industry. This report outlines our work in assessing the capabilities of commercial and open source formal tools and the ways in which they can be leveraged in digital design workflows.
Restoration of hot pixels in digital imagers using lossless approximation techniques
NASA Astrophysics Data System (ADS)
Hadar, O.; Shleifer, A.; Cohen, E.; Dotan, Y.
2015-09-01
During the last twenty years, digital imagers have spread into industrial and everyday devices, such as satellites, security cameras, cell phones, and laptops. "Hot pixels" are the main defects in remote digital cameras. In this paper we demonstrate an improvement over existing restoration methods that use (solely or as an auxiliary tool) some average of the surrounding pixels, such as the method of the Chapman-Koren studies [1, 2]. The proposed method uses the CALIC algorithm and adapts it to make full use of the surrounding pixels.
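For context, the sketch below shows the simple surrounding-pixel baseline that such methods start from: flagged hot pixels are replaced by a robust summary of their valid neighbors. It is not the CALIC-based predictor proposed in the paper; the mask convention and the use of a median are assumptions made for illustration.

```python
import numpy as np

def restore_hot_pixels(image, hot_mask):
    """Replace flagged hot pixels with the median of their valid 8-neighbors.

    This is the simple surrounding-pixel baseline that the paper improves
    upon, not the CALIC-based predictor itself.
    """
    img = image.astype(float).copy()
    h, w = img.shape
    for r, c in zip(*np.nonzero(hot_mask)):
        r0, r1 = max(0, r - 1), min(h, r + 2)
        c0, c1 = max(0, c - 1), min(w, c + 2)
        patch = img[r0:r1, c0:c1]
        mask = hot_mask[r0:r1, c0:c1]
        good = patch[~mask]          # exclude other defective pixels
        if good.size:
            img[r, c] = np.median(good)
    return img

# Example: one saturated hot pixel in a small synthetic frame
frame = np.full((5, 5), 100.0)
frame[2, 2] = 4095.0
mask = frame > 1000
print(restore_hot_pixels(frame, mask)[2, 2])   # -> 100.0
```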
Zahid, Sarwar; Peeler, Crandall; Khan, Naheed; Davis, Joy; Mahmood, Mahdi; Heckenlively, John R; Jayasundera, Thiran
2014-01-01
To develop a reliable and efficient digital method to quantify planimetric Goldmann visual field (GVF) data to monitor disease course and treatment responses in retinal degenerative diseases. A novel method to digitally quantify GVFs using Adobe Photoshop CS3 was developed for comparison to traditional digital planimetry (Placom 45C digital planimeter; Engineer Supply, Lynchburg, Virginia, USA). GVFs from 20 eyes from 10 patients with Stargardt disease were quantified to assess the difference between the two methods (a total of 230 measurements per method). This quantification approach was also applied to 13 patients with X-linked retinitis pigmentosa (XLRP) with mutations in RPGR. Overall, measurements using Adobe Photoshop were more rapidly performed than those using conventional planimetry. Photoshop measurements also exhibited less inter- and intraobserver variability. GVF areas for the I4e isopter in patients with the same mutation in RPGR who were similar in age were qualitatively and quantitatively similar. Quantification of GVFs using Adobe Photoshop is quicker, more reliable, and less user dependent than conventional digital planimetry. It will be a useful tool for both retrospective and prospective studies of disease course as well as for monitoring treatment response in clinical trials for retinal degenerative diseases.
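Once an isopter has been digitized to a set of boundary points, its area reduces to a polygon-area computation, which is what both planimetry and image-based tools approximate. The sketch below uses the shoelace formula; the point format and units are assumptions, and this is not the Photoshop workflow described in the paper.

```python
def isopter_area(points):
    """Area enclosed by a digitized isopter contour (shoelace formula).

    `points` is a list of (x, y) vertices traced around the isopter,
    in consistent units (any pixel-to-degree scaling applied beforehand).
    """
    n = len(points)
    area2 = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
    return abs(area2) / 2.0

# Example: a square isopter 20 units on a side -> area 400
print(isopter_area([(0, 0), (20, 0), (20, 20), (0, 20)]))
```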
Digital processing of radiographic images from PACS to publishing.
Christian, M E; Davidson, H C; Wiggins, R H; Berges, G; Cannon, G; Jackson, G; Chapman, B; Harnsberger, H R
2001-03-01
Several studies have addressed the implications of filmless radiologic imaging on telemedicine, diagnostic ability, and electronic teaching files. However, many publishers still require authors to submit hard-copy images for publication of articles and textbooks. This study compares the quality of digital images directly exported from picture archive and communications systems (PACS) with that of images digitized from radiographic film. The authors evaluated the quality of publication-grade glossy photographs produced from digital radiographic images using 3 different methods: (1) film images digitized using a desktop scanner and then printed, (2) digital images obtained directly from PACS then printed, and (3) digital images obtained from PACS and processed to improve sharpness prior to printing. Twenty images were printed using each of the 3 different methods and rated for quality by 7 radiologists. The results were analyzed for statistically significant differences among the image sets. Subjective evaluations of the filmless images found them to be of equal or better quality than the digitized images. Direct electronic transfer of PACS images reduces the number of steps involved in creating publication-quality images as well as providing the means to produce high-quality radiographic images in a digital environment.
Assessing the Robustness of Complete Bacterial Genome Segmentations
NASA Astrophysics Data System (ADS)
Devillers, Hugo; Chiapello, Hélène; Schbath, Sophie; El Karoui, Meriem
Comparison of closely related bacterial genomes has revealed the presence of highly conserved sequences forming a "backbone" that is interrupted by numerous, less conserved, DNA fragments. Segmentation of bacterial genomes into backbone and variable regions is particularly useful to investigate bacterial genome evolution. Several software tools have been designed to compare complete bacterial chromosomes and a few online databases store pre-computed genome comparisons. However, very few statistical methods are available to evaluate the reliability of these software tools and to compare the results obtained with them. To fill this gap, we have developed two local scores to measure the robustness of bacterial genome segmentations. Our method uses a simulation procedure based on random perturbations of the compared genomes. The scores presented in this paper are simple to implement, and our results show that they make it easy to discriminate between robust and non-robust bacterial genome segmentations when using aligners such as MAUVE and MGA.
Secure Genomic Computation through Site-Wise Encryption
Zhao, Yongan; Wang, XiaoFeng; Tang, Haixu
2015-01-01
Commercial clouds provide on-demand IT services for big-data analysis, which have become an attractive option for users who have no access to comparable infrastructure. However, utilizing these services for human genome analysis is highly risky, as human genomic data contains identifiable information of human individuals and their disease susceptibility. Therefore, currently, no computation on personal human genomic data is conducted on public clouds. To address this issue, here we present a site-wise encryption approach to encrypt whole human genome sequences, which can be subject to secure searching of genomic signatures on public clouds. We implemented this method within the Hadoop framework, and tested it on the case of searching disease markers retrieved from the ClinVar database against patients’ genomic sequences. The secure search runs only one order of magnitude slower than the simple search without encryption, indicating our method is ready to be used for secure genomic computation on public clouds. PMID:26306278
Uses of antimicrobial genes from microbial genome
Sorek, Rotem; Rubin, Edward M.
2013-08-20
We describe a method for mining microbial genomes to discover antimicrobial genes and proteins having a broad spectrum of activity. Also described are antimicrobial genes and their expression products from various microbial genomes that were found using this method. The products of such genes can be used as antimicrobial agents or as tools for molecular biology.
Genome-scale engineering of Saccharomyces cerevisiae with single-nucleotide precision.
Bao, Zehua; HamediRad, Mohammad; Xue, Pu; Xiao, Han; Tasan, Ipek; Chao, Ran; Liang, Jing; Zhao, Huimin
2018-07-01
We developed a CRISPR-Cas9- and homology-directed-repair-assisted genome-scale engineering method named CHAnGE that can rapidly output tens of thousands of specific genetic variants in yeast. More than 98% of target sequences were efficiently edited with an average frequency of 82%. We validate the single-nucleotide resolution genome-editing capability of this technology by creating a genome-wide gene disruption collection and apply our method to improve tolerance to growth inhibitors.
Inverse PCR-based method for isolating novel SINEs from genome.
Han, Yawei; Chen, Liping; Guan, Lihong; He, Shunping
2014-04-01
Short interspersed elements (SINEs) are moderately repetitive DNA sequences in eukaryotic genomes. Although eukaryotic genomes contain numerous SINE copies, it is very difficult and laborious to isolate and identify them with previously reported methods. In this study, inverse PCR was successfully applied to isolate SINEs from the genome of Opsariichthys bidens, an East Asian cyprinid. A group of SINEs derived from a tRNA(Ala) molecule was identified and named Opsar after Opsariichthys. Opsar exhibited typical SINE characteristics, containing a tRNA(Ala)-derived region at the 5' end, a tRNA-unrelated region, and an AT-rich region at the 3' end. The tRNA-derived region of Opsar shared 76% sequence similarity with the tRNA(Ala) gene. This result indicated that Opsar could be derived from an inactive copy or pseudogene of tRNA(Ala). The reliability of the method was tested by obtaining C-SINE, Ct-SINE, and M-SINEs from the Ctenopharyngodon idellus, Megalobrama amblycephala, and Cyprinus carpio genomes. This method is simpler than those previously reported, omitting many steps such as probe preparation, genomic library construction, and hybridization.
GUIDE-Seq enables genome-wide profiling of off-target cleavage by CRISPR-Cas nucleases
Nguyen, Nhu T.; Liebers, Matthew; Topkar, Ved V.; Thapar, Vishal; Wyvekens, Nicolas; Khayter, Cyd; Iafrate, A. John; Le, Long P.; Aryee, Martin J.; Joung, J. Keith
2014-01-01
CRISPR RNA-guided nucleases (RGNs) are widely used genome-editing reagents, but methods to delineate their genome-wide off-target cleavage activities have been lacking. Here we describe an approach for global detection of DNA double-stranded breaks (DSBs) introduced by RGNs and potentially other nucleases. This method, called Genome-wide Unbiased Identification of DSBs Enabled by Sequencing (GUIDE-Seq), relies on capture of double-stranded oligodeoxynucleotides into these breaks. Application of GUIDE-Seq to thirteen RGNs in two human cell lines revealed wide variability in RGN off-target activities and unappreciated characteristics of off-target sequences. The majority of identified sites were not detected by existing computational methods or ChIP-Seq. GUIDE-Seq also identified RGN-independent genomic breakpoint ‘hotspots’. Finally, GUIDE-Seq revealed that truncated guide RNAs exhibit substantially reduced RGN-induced off-target DSBs. Our experiments define the most rigorous framework for genome-wide identification of RGN off-target effects to date and provide a method for evaluating the safety of these nucleases prior to clinical use. PMID:25513782
Interference elimination in digital controllers of automation systems of oil and gas complex
NASA Astrophysics Data System (ADS)
Solomentsev, K. Yu; Fugarov, D. D.; Purchina, O. A.; Poluyan, A. Y.; Nesterchuk, V. V.; Petrenkova, S. B.
2018-05-01
This article considers problems that arise in the development of digital controllers for automatic control systems. In the presence of interference, and also at high sampling frequencies, digital differentiation gives a large error because the derivative is calculated as the difference of two close values. A differentiation method is proposed to reduce this error by averaging the difference quotient over a series of values. A block diagram for implementing this differentiation method in controller construction is presented.
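A minimal sketch of the averaging idea is shown below: instead of a single adjacent-sample difference quotient, several consecutive difference quotients are averaged, which attenuates high-frequency interference. The window length and NumPy-based formulation are illustrative assumptions, not the authors' block-diagram implementation.

```python
import numpy as np

def averaged_derivative(samples, dt, window=8):
    """Estimate the derivative of a noisy sampled signal by averaging
    difference quotients over `window` consecutive sample pairs,
    rather than using a single pair of adjacent samples.
    """
    samples = np.asarray(samples, dtype=float)
    diffs = np.diff(samples) / dt            # raw difference quotients
    kernel = np.ones(window) / window
    return np.convolve(diffs, kernel, mode="valid")

# Example: a noisy ramp with slope 2.0 sampled at 1 kHz
t = np.arange(0, 1, 1e-3)
x = 2.0 * t + 0.01 * np.random.randn(t.size)
print(averaged_derivative(x, 1e-3).mean())   # close to 2.0
```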
A Practical Approach to Tumor Heterogeneity in Clinical Research and Diagnostics.
Stanta, Giorgio; Bonin, Serena
2018-01-01
This Pathobiology issue tries to better define the complex phenomenon of intratumor heterogeneity (ITH), mostly from a practical point of view. This topic has been chosen because ITH is a central issue in tumor development and has to be investigated directly in patient tissue and immediately applied in the treatment of the presenting patient. Different types of ITH should be considered: clonal genetic and epigenetic evolution; morphological heterogeneity and tumor sampling; heterogeneity resulting from microenvironmental autocrine and paracrine interaction; and stochastic plasticity related to different functional cell efficiencies. For a higher level of reproducibility in clinical research and diagnostics, it is necessary to establish standardized analytical methods, including microdissection. In situ techniques can be pivotal to explore the tumor microenvironment and can be improved with associated digital analysis. Liquid biopsies for plasma DNA analysis are at present the best method to study recurrent tumors with treatment adaptation, and widespread clinical use could be beneficial. The different types of tumor genomic instabilities could have pragmatic applications to rank ITH for clinical applications: treatment approaches differ in patients with a high nucleotide mutation rate and patients with high copy number alterations. © 2017 S. Karger AG, Basel.
Aquatic Plant Genomics: Advances, Applications, and Prospects
Li, Gaojie; Yang, Jingjing
2017-01-01
Genomics is a discipline in genetics that studies the genome composition of organisms and the precise structure of genes and their expression and regulation. Genomics research has resolved many problems where other biological methods have failed. Here, we summarize advances in aquatic plant genomics with a focus on molecular markers, the genes related to photosynthesis and stress tolerance, comparative study of genomes and genome/transcriptome sequencing technology. PMID:28900619
Primer in Genetics and Genomics, Article 2-Advancing Nursing Research With Genomic Approaches.
Lee, Hyunhwa; Gill, Jessica; Barr, Taura; Yun, Sijung; Kim, Hyungsuk
2017-03-01
Nurses investigate reasons for variable patient symptoms and responses to treatments to inform how best to improve outcomes. Genomics has the potential to guide nursing research exploring contributions to individual variability. This article is meant to serve as an introduction to the novel methods available through genomics for addressing this critical issue and includes a review of methodological considerations for selected genomic approaches. This review presents essential concepts in genetics and genomics that will allow readers to identify upcoming trends in genomics nursing research and improve research practice. It introduces general principles of genomic research and provides an overview of the research process. It also highlights selected nursing studies that serve as clinical examples of the use of genomic technologies. Finally, the authors provide suggestions about how to apply genomic technology in nursing research along with directions for future research. Using genomic approaches in nursing research can advance the understanding of the complex pathophysiology of disease susceptibility and different patient responses to interventions. Nurses should be incorporating genomics into education, clinical practice, and research as the influence of genomics in health-care research and practice continues to grow. Nurses are also well placed to translate genomic discoveries into improved methods for patient assessment and intervention.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek
1991-01-01
Described here are the development and implementation of an on-line, near-real-time controller performance evaluation (CPE) capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.
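The core of such a CPE step is estimating frequency-response (transfer) functions from excitation and response records. The sketch below shows a standard spectral (H1) estimate for a single input/output pair using Welch auto- and cross-spectra; the SISO simplification, segment length, and test signal are assumptions, and the paper's near-real-time MIMO pipeline is not reproduced here.

```python
import numpy as np
from scipy import signal

def estimate_frf(u, y, fs, nperseg=1024):
    """Estimate a single-input/single-output frequency response function
    H(f) = Puy(f) / Puu(f) from excitation u and response y.

    For a MIMO controller this would be repeated for each input/output pair
    (or done with a matrix spectral estimate); this is only the SISO core.
    """
    f, puu = signal.welch(u, fs=fs, nperseg=nperseg)   # input auto-spectrum
    _, puy = signal.csd(u, y, fs=fs, nperseg=nperseg)  # input/output cross-spectrum
    return f, puy / puu

# Example: identify a first-order low-pass from simulated data
fs = 1000.0
u = np.random.randn(20000)
b, a = signal.butter(1, 50.0, fs=fs)          # "plant" to identify
y = signal.lfilter(b, a, u)
f, h = estimate_frf(u, y, fs)
print(abs(h[np.argmin(abs(f - 50.0))]))       # ~0.707 near the cutoff
```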
Evaluating digital libraries in the health sector. Part 2: measuring impacts and outcomes.
Cullen, Rowena
2004-03-01
This is the second part of a two-part paper which explores methods that can be used to evaluate digital libraries in the health sector. Part 1 focuses on approaches to evaluation that have been proposed for mainstream digital information services. This paper investigates evaluative models developed for some innovative digital library projects, and some major national and international electronic health information projects. The value of ethnographic methods to provide qualitative data to explore outcomes, adding to quantitative approaches based on inputs and outputs is discussed. The paper concludes that new 'post-positivist' models of evaluation are needed to cover all the dimensions of the digital library in the health sector, and some ways of doing this are outlined.
Optical design of cipher block chaining (CBC) encryption mode by using digital holography
NASA Astrophysics Data System (ADS)
Gil, Sang Keun; Jeon, Seok Hee; Jung, Jong Rae; Kim, Nam
2016-03-01
We propose an optical design of cipher block chaining (CBC) encryption using a digital holographic technique, which has higher security than the conventional electronic method because the cipher text is an analog-type randomized 2-D array. In this paper, an optical design of the CBC encryption mode is implemented by a 2-step quadrature phase-shifting digital holographic encryption technique using orthogonal polarization. A block of plain text is encrypted with the encryption key by applying 2-step phase-shifting digital holography, and it is changed into cipher text blocks which are digital holograms. These ciphered digital holograms carrying the encrypted information are Fourier transform holograms and are recorded on CCDs with 256 gray-level quantized intensities. Decryption is computed from these encrypted digital holograms of the cipher text, the same encryption key, and the previous cipher text block. Results of computer simulations are presented to verify that the proposed method is feasible for a highly secure CBC encryption system.
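For readers unfamiliar with the chaining structure being implemented optically, the sketch below shows conventional electronic CBC: each plaintext block is XORed with the previous ciphertext block before encryption, so repeated plaintext blocks yield different ciphertext. The toy XOR "cipher" stands in for the 2-step phase-shifting holographic encryption step and is purely illustrative.

```python
from os import urandom

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(blocks, iv, block_cipher):
    """Cipher block chaining: each plaintext block is combined with the
    previous ciphertext block (the IV for the first block) before the
    block cipher is applied, so identical plaintext blocks encrypt differently.
    """
    prev, out = iv, []
    for block in blocks:
        c = block_cipher(xor_bytes(block, prev))
        out.append(c)
        prev = c
    return out

# Toy block "cipher" (XOR with a fixed key) just to show the chaining
# structure; the paper replaces this step with 2-step phase-shifting
# digital holographic encryption.
key = urandom(16)
toy_cipher = lambda b: xor_bytes(b, key)

blocks = [b"0123456789abcdef", b"0123456789abcdef"]   # identical plaintexts
c1, c2 = cbc_encrypt(blocks, urandom(16), toy_cipher)
print(c1 != c2)    # True: chaining randomizes repeated blocks
```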
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
Barisoni, Laura; Gimpel, Charlotte; Kain, Renate; Laurinavicius, Arvydas; Bueno, Gloria; Zeng, Caihong; Liu, Zhihong; Schaefer, Franz; Kretzler, Matthias; Holzman, Lawrence B; Hewitt, Stephen M
2017-04-01
The introduction of digital pathology to nephrology provides a platform for the development of new methodologies and protocols for visual, morphometric and computer-aided assessment of renal biopsies. Application of digital imaging to pathology made substantial progress over the past decade; it is now in use for education, clinical trials and translational research. Digital pathology evolved as a valuable tool to generate comprehensive structural information in digital form, a key prerequisite for achieving precision pathology for computational biology. The application of this new technology on an international scale is driving novel methods for collaborations, providing unique opportunities but also challenges. Standardization of methods needs to be rigorously evaluated and applied at each step, from specimen processing to scanning, uploading into digital repositories, morphologic, morphometric and computer-aided assessment, data collection and analysis. In this review, we discuss the status and opportunities created by the application of digital imaging to precision nephropathology, and present a vision for the near future.
Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species
2013-01-01
Background: The process of generating raw genome sequence data continues to become cheaper, faster, and more accurate. However, assembly of such data into high-quality, finished genome sequences remains challenging. Many genome assembly tools are available, but they differ greatly in terms of their performance (speed, scalability, hardware requirements, acceptance of newer read technologies) and in their final output (composition of assembled sequence). More importantly, it remains largely unclear how to best assess the quality of assembled genome sequences. The Assemblathon competitions are intended to assess current state-of-the-art methods in genome assembly. Results: In Assemblathon 2, we provided a variety of sequence data to be assembled for three vertebrate species (a bird, a fish, and a snake). This resulted in a total of 43 submitted assemblies from 21 participating teams. We evaluated these assemblies using a combination of optical map data, Fosmid sequences, and several statistical methods. From over 100 different metrics, we chose ten key measures by which to assess the overall quality of the assemblies. Conclusions: Many current genome assemblers produced useful assemblies, containing a significant representation of their genes and overall genome structure. However, the high degree of variability between the entries suggests that there is still much room for improvement in the field of genome assembly and that approaches which work well in assembling the genome of one species may not necessarily work well for another. PMID:23870653
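Among the size-based measures commonly used in such evaluations, N50 is the most familiar; a minimal sketch is given below. The exact metric set chosen by Assemblathon 2 is not reproduced here, and the contig-length input format is an assumption.

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L contain at least
    half of the total assembled bases. One of the standard size metrics
    used when comparing assemblies.
    """
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([100, 80, 60, 40, 20]))   # -> 80 (100 + 80 >= 150)
```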
OrthoMCL: Identification of Ortholog Groups for Eukaryotic Genomes
Li, Li; Stoeckert, Christian J.; Roos, David S.
2003-01-01
The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of “recent” paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome. PMID:12952885
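The clustering step named in the abstract, the Markov Cluster (MCL) algorithm, alternates matrix expansion and inflation on a column-stochastic similarity matrix. The sketch below is a bare-bones illustration on a toy adjacency matrix; the self-loop handling, inflation value, and cluster-extraction heuristic are simplifying assumptions, not the OrthoMCL pipeline (which also includes reciprocal-best-hit and normalization steps).

```python
import numpy as np

def mcl(adjacency, inflation=2.0, iterations=50):
    """Minimal Markov Cluster (MCL) iteration on a similarity/adjacency
    matrix: alternate expansion (matrix squaring) and inflation
    (element-wise power followed by column normalization).
    """
    m = adjacency.astype(float) + np.eye(len(adjacency))   # add self-loops
    m /= m.sum(axis=0)                                     # column-stochastic
    for _ in range(iterations):
        m = m @ m                                          # expansion
        m = m ** inflation                                 # inflation
        m /= m.sum(axis=0)
    # rows that retain non-zero entries act as cluster "attractors"
    clusters = [set(map(int, np.nonzero(row > 1e-6)[0]))
                for row in m if row.max() > 1e-6]
    return [c for i, c in enumerate(clusters) if c not in clusters[:i]]

# Two obvious cliques: {0, 1, 2} and {3, 4}
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 0, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 1, 0]])
print(mcl(adj))   # -> [{0, 1, 2}, {3, 4}]
```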
An enhanced high-speed multi-digit BCD adder using quantum-dot cellular automata
NASA Astrophysics Data System (ADS)
Ajitha, D.; Ramanaiah, K. V.; Sumalatha, V.
2017-02-01
The development of high-performance, low-power digital circuits is enabled by a suitable emerging nanodevice called quantum-dot cellular automata (QCA). Even though many efficient arithmetic circuits have been designed using QCA, implementing high-speed circuits in an optimized manner remains a challenge. Among these circuits, one of the essential structures is a parallel multi-digit decimal adder unit with significant speed, which is very attractive for future environments. To achieve high speed, a new correction-logic formulation method is proposed for single- and multi-digit BCD adders. The proposed enhanced single-digit BCD adder (ESDBA) is 26% faster than the carry flow adder (CFA)-based BCD adder. Multi-digit operations are also performed using the proposed ESDBA, which is cascaded innovatively. The enhanced multi-digit BCD adder (EMDBA) performs two 4-digit and two 8-digit BCD additions 50% faster than the CFA-based BCD adder with a nominal area overhead. The EMDBA performs two 4-digit BCD additions 24% faster with a 23% decrease in area; similarly, for 8-digit operation the EMDBA achieves a 36% increase in speed with 21% less area compared to the existing carry look-ahead (CLA)-based BCD adder design. The proposed multi-digit adder produces a significantly lower delay of (N - 1) + 3.5 clock cycles, compared with the N x (one-digit BCD adder delay) required by the conventional BCD adder method. To our knowledge, this is the first proposal for multi-digit BCD addition using QCA.
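The correction logic being optimized is the classic BCD rule: when a 4-bit digit sum exceeds 9, add 6 and propagate a decimal carry. The sketch below shows that rule in software purely to make the arithmetic concrete; it says nothing about the proposed QCA circuit structure.

```python
def bcd_add(a_digits, b_digits):
    """Add two BCD numbers given as lists of decimal digits
    (least-significant digit first). Whenever a 4-bit digit sum exceeds 9,
    the classic +6 correction is applied and a carry is propagated --
    the step whose logic the paper reformulates for speed in QCA.
    """
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        if s > 9:                 # binary sum is not a valid BCD digit
            s += 6                # +6 correction wraps it into 0..9 ...
            carry = 1             # ... and generates a decimal carry
            s &= 0b1111           # keep only the low 4 bits
        else:
            carry = 0
        result.append(s)
    if carry:
        result.append(1)
    return result

# 58 + 47 = 105, digits least-significant first
print(bcd_add([8, 5], [7, 4]))    # -> [5, 0, 1]
```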
Watanabe, Masaru; Kawaguchi, Tomoya; Isa, Shun-Ichi; Ando, Masahiko; Tamiya, Akihiro; Kubo, Akihito; Saka, Hideo; Takeo, Sadanori; Adachi, Hirofumi; Tagawa, Tsutomu; Kawashima, Osamu; Yamashita, Motohiro; Kataoka, Kazuhiko; Ichinose, Yukito; Takeuchi, Yukiyasu; Watanabe, Katsuya; Matsumura, Akihide; Koh, Yasuhiro
2017-07-01
Epidermal growth factor receptor (EGFR) mutations have been used as the strongest predictor of effectiveness of treatment with EGFR tyrosine kinase inhibitors (TKIs). The three most common EGFR mutations (L858R, exon 19 deletion, and T790M) are known to be major selection markers for EGFR-TKI therapy. Here, we developed a multiplex picodroplet digital PCR (ddPCR) assay to detect 3 common EGFR mutations in 1 reaction. Serial-dilution experiments with genomic DNA harboring EGFR mutations revealed linear performance, with analytical sensitivity ~0.01% for each mutation. All 33 EGFR-activating mutations detected in formalin-fixed paraffin-embedded (FFPE) tissue samples by the conventional method were also detected by this multiplex assay. Owing to the higher sensitivity, an additional mutation (T790M; including an ultra-low-level mutation, <0.1%) was detected in the same reaction. Regression analysis of the duplex assay and multiplex assay showed a correlation coefficient (R²) of 0.9986 for L858R, 0.9844 for an exon 19 deletion, and 0.9959 for T790M. Using ddPCR, we designed a multiplex ultrasensitive genotyping platform for 3 common EGFR mutations. Results of this proof-of-principle study on clinical samples indicate clinical utility of multiplex ddPCR for screening for multiple EGFR mutations concurrently with an ultra-rare pretreatment mutation (T790M). Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
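Digital PCR quantification itself rests on Poisson statistics over droplets: the fraction of positive droplets gives the mean number of target copies per droplet, and hence an absolute concentration without a standard curve. The sketch below shows that calculation; the droplet volume and counts are illustrative assumptions, not values from this study.

```python
import math

def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
    """Absolute target concentration (copies/uL) from droplet counts.

    Digital PCR assumes a Poisson distribution of templates across droplets,
    so the mean copies per droplet is lambda = -ln(1 - p), where p is the
    fraction of positive droplets. The droplet volume (~0.85 nL here) is an
    assumption that depends on the instrument.
    """
    p = positive / total
    lam = -math.log(1.0 - p)                 # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # copies per microliter

# Example: 4,500 positive droplets out of 15,000 accepted droplets
print(round(ddpcr_concentration(4500, 15000), 1))
```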
The future of genomics in polar and alpine cyanobacteria
Anesio, Alexandre M; Sánchez-Baracaldo, Patricia
2018-01-01
Abstract In recent years, genomic analyses have arisen as an exciting way of investigating the functional capacity and environmental adaptations of numerous micro-organisms of global relevance, including cyanobacteria. In the extreme cold of Arctic, Antarctic and alpine environments, cyanobacteria are of fundamental ecological importance as primary producers and ecosystem engineers. While their role in biogeochemical cycles is well appreciated, little is known about the genomic makeup of polar and alpine cyanobacteria. In this article, we present ways that genomic techniques might be used to further our understanding of cyanobacteria in cold environments in terms of their evolution and ecology. Existing examples from other environments (e.g. marine/hot springs) are used to discuss how methods developed there might be used to investigate specific questions in the cryosphere. Phylogenomics, comparative genomics and population genomics are identified as methods for understanding the evolution and biogeography of polar and alpine cyanobacteria. Transcriptomics will allow us to investigate gene expression under extreme environmental conditions, and metagenomics can be used to complement traditional amplicon-based methods of community profiling. Finally, new techniques such as single cell genomics and metagenome assembled genomes will also help to expand our understanding of polar and alpine cyanobacteria that cannot readily be cultured. PMID:29506259
Using comparative genome analysis to identify problems in annotated microbial genomes.
Poptsova, Maria S; Gogarten, J Peter
2010-07-01
Genome annotation is a tedious task that is mostly done by automated methods; however, the accuracy of these approaches has been questioned since the beginning of the sequencing era. Genome annotation is a multilevel process, and errors can emerge at different stages: during sequencing, as a result of gene-calling procedures, and in the process of assigning gene functions. Missed or wrongly annotated genes differentially impact different types of analyses. Here we discuss and demonstrate how the methods of comparative genome analysis can refine annotations by locating missing orthologues. We also discuss possible reasons for errors and show that the second-generation annotation systems, which combine multiple gene-calling programs with similarity-based methods, perform much better than the first annotation tools. Since old errors may propagate to the newly sequenced genomes, we emphasize that the problem of continuously updating popular public databases is an urgent and unresolved one. Due to the progress in genome-sequencing technologies, automated annotation techniques will remain the main approach in the future. Researchers need to be aware of the existing errors in the annotation of even well-studied genomes, such as Escherichia coli, and consider additional quality control for their results.
Mirham, Lorna; Naugler, Christopher; Hayes, Malcolm; Ismiil, Nadia; Belisle, Annie; Sade, Shachar; Streutker, Catherine; MacMillan, Christina; Rasty, Golnar; Popovic, Snezana; Joseph, Mariamma; Gabril, Manal; Barnes, Penny; Hegele, Richard G.; Carter, Beverley; Yousef, George M.
2016-01-01
Background: It is anticipated that many licensing examination centres for pathology will begin fully digitizing the certification examinations. The objective of our study was to test the feasibility of a fully digital examination and to assess the needs, concerns and expectations of pathology residents in moving from a glass slide-based examination to a fully digital examination. Methods: We conducted a mixed methods study that compared, after randomization, the performance of senior residents (postgraduate years 4 and 5) in 7 accredited anatomical pathology training programs across Canada on a pathology examination using either glass slides or digital whole-slide scanned images of the slides. The pilot examination was followed by a post-test survey. In addition, pathology residents from all levels of training were invited to participate in an online survey. Results: A total of 100 residents participated in the pilot examination; 49 were given glass slides instead of digital images. We found no significant difference in examination results between the 2 groups of residents (estimated marginal mean 8.23/12, 95% confidence interval [CI] 7.72-8.87, for glass slides; 7.84/12, 95% CI 7.28-8.41, for digital slides). In the post-test survey, most of the respondents expressed concerns with the digital examination, including slowly functioning software, blurring and poor detail of images, particularly nuclear features. All of the respondents of the general survey (n = 179) agreed that additional training was required if the examination were to become fully digital. Interpretation: Although the performance of residents completing pathology examinations with glass slides was comparable to that of residents using digital images, our study showed that residents were not comfortable with the digital technology, especially given their current level of exposure to it. Additional training may be needed before implementing a fully digital examination, with consideration for a gradual transition. PMID:27280119
Porter, Glenn; Ebeyan, Robert; Crumlish, Charles; Renshaw, Adrian
2015-03-01
The photographic preservation of fingermark impression evidence found on ammunition cases remains problematic due to the cylindrical shape of the deposition substrate preventing complete capture of the impression in a single image. A novel method was developed for the photographic recovery of fingermarks from curved surfaces using digital imaging. The process involves the digital construction of a complete impression image made from several different images captured from multiple camera perspectives. Fingermark impressions deposited onto 9-mm and 0.22-caliber brass cartridge cases and a plastic 12-gauge shotgun shell were tested using various image parameters, including digital stitching method, number of images per 360° rotation of shell, image cropping, and overlap. The results suggest that this method may be successfully used to recover fingermark impression evidence from the surfaces of ammunition cases or other similar cylindrical surfaces. © 2014 American Academy of Forensic Sciences.
Reliability of digital reactor protection system based on extenics.
Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng
2016-01-01
After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has become a widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation carry uncertainties and cannot reflect the reliability status of the RPS dynamically or support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the safety-critical digital RPS, in which the relationship between the reliability and the response time of the RPS is constructed. The reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method as an example. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of digital RPS systems.
A new way of analyzing occlusion 3 dimensionally.
Hayasaki, Haruaki; Martins, Renato Parsekian; Gandini, Luiz Gonzaga; Saitoh, Issei; Nonaka, Kazuaki
2005-07-01
This article introduces a new method for 3-dimensional dental cast analysis, by using a mechanical 3-dimensional digitizer, MicroScribe 3DX (Immersion, San Jose, Calif), and TIGARO software (not yet released, but available from the author at hayasaki@dent.kyushu-u.ac.jp ). By digitizing points on the model, multiple measurements can be made, including tooth dimensions; arch length, width, and perimeter; curve of Spee; overjet and overbite; and anteroposterior discrepancy. The bias of the system can be evaluated by comparing the distance between 2 points as determined by the new system and as measured with digital calipers. Fifteen pairs of models were measured digitally and manually, and the bias was evaluated by comparing the variances of both methods and checking for the type of error obtained by each method. No systematic errors were found. The results showed that the method is accurate, and it can be applied to both clinical practice and research.
2009-01-01
Background: Gastric cancer is the third most common malignancy affecting the general population worldwide. Aberrant activation of KRAS is a key factor in the development of many types of tumor; however, oncogenic mutations of KRAS are infrequent in gastric cancer. We have developed a novel quantitative method of analysis of DNA copy number, termed digital genome scanning (DGS), which is based on the enumeration of short restriction fragments, and does not involve PCR or hybridization. In the current study, we used DGS to survey copy-number alterations in gastric cancer cells. Methods: DGS of gastric cancer cell lines was performed using the sequences of 5000 to 15000 restriction fragments. We screened 20 gastric cancer cell lines and 86 primary gastric tumors for KRAS amplification by quantitative PCR, and investigated KRAS amplification at the DNA, mRNA and protein levels by mutational analysis, real-time PCR, immunoblot analysis, GTP-RAS pull-down assay and immunohistochemical analysis. The effect of KRAS knock-down on the activation of p44/42 MAP kinase and AKT and on cell growth was examined by immunoblot and colorimetric assay, respectively. Results: DGS analysis of the HSC45 gastric cancer cell line revealed the amplification of a 500-kb region on chromosome 12p12.1, which contains the KRAS gene locus. Amplification of the KRAS locus was detected in 15% (3/20) of gastric cancer cell lines (8–18-fold amplification) and 4.7% (4/86) of primary gastric tumors (8–50-fold amplification). KRAS mutations were identified in two of the three cell lines in which KRAS was amplified, but were not detected in any of the primary tumors. Overexpression of KRAS protein correlated directly with increased KRAS copy number. The level of GTP-bound KRAS was elevated following serum stimulation in cells with amplified wild-type KRAS, but not in cells with amplified mutant KRAS. Knock-down of KRAS in gastric cancer cells that carried amplified wild-type KRAS resulted in the inhibition of cell growth and suppression of p44/42 MAP kinase and AKT activity. Conclusion: Our study highlights the utility of DGS for identification of copy-number alterations. Using DGS, we identified KRAS as a gene that is amplified in human gastric cancer. We demonstrated that gene amplification likely forms the molecular basis of overactivation of KRAS in gastric cancer. Additional studies using a larger cohort of gastric cancer specimens are required to determine the diagnostic and therapeutic implications of KRAS amplification and overexpression. PMID:19545448
Lipson, Steven M; Gair, Marina
2011-01-01
The laboratory component of a microbiology course consists of exercises which mandate a level of proficiency and manual dexterity equal to and often beyond that recognized among other biology courses. Bacterial growth, maintenance, identification (e.g., Gram stain, biochemical tests, genomics), as well as the continuous need to maintain laboratory safety and sterile technique, are only a few skills/responsibilities critical to the discipline of microbiology. Performance of the Gram stain remains one of the most basic and pivotal skills that must be mastered in the microbiology laboratory. However, a number of students continually have difficulty executing the Gram stain and preparative procedures associated with the test. In order to address this issue, we incorporated real-time digital recording as a supplemental teaching aid in the microbiology laboratory. Our use of the digital movie camera in the teaching setting served to enhance interest, motivate students, and in general, improve student performance.
Understanding Digital Note-Taking Practice for Visualization.
Willett, Wesley; Goffin, Pascal; Isenberg, Petra
2015-05-13
We present results and design implications from a study of digital note-taking practice to examine how visualization can support revisitation, reflection, and collaboration around notes. As digital notebooks become common forms of external memory, keeping track of volumes of content is increasingly difficult. Information visualization tools can help give note-takers an overview of their content and allow them to explore diverse sets of notes, find and organize related content, and compare their notes with their collaborators. To ground the design of such tools, we conducted a detailed mixed-methods study of digital note-taking practice. We identify a variety of different editing, organization, and sharing methods used by digital note-takers, many of which result in notes becoming "lost in the pile". These findings form the basis for our design considerations that examine how visualization can support the revisitation, organization, and sharing of digital notes.
Spatial-Heterodyne Interferometry for Reflection and Transmission (SHIRT) Measurements
Hanson, Gregory R [Clinton, TN]; Bingham, Philip R [Knoxville, TN]; Tobin, Ken W [Harriman, TN]
2006-02-14
Systems and methods are described for spatial-heterodyne interferometry for reflection and transmission (SHIRT) measurements. A method includes digitally recording a first spatially-heterodyned hologram using a first reference beam and a first object beam; digitally recording a second spatially-heterodyned hologram using a second reference beam and a second object beam; Fourier analyzing the digitally recorded first spatially-heterodyned hologram to define a first analyzed image; Fourier analyzing the digitally recorded second spatially-heterodyned hologram to define a second analyzed image; digitally filtering the first analyzed image to define a first result; and digitally filtering the second analyzed image to define a second result; performing a first inverse Fourier transform on the first result, and performing a second inverse Fourier transform on the second result. The first object beam is transmitted through an object that is at least partially translucent, and the second object beam is reflected from the object.
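The Fourier-analyze/filter/inverse-transform sequence in the claims corresponds to standard sideband demodulation of a spatially heterodyned hologram. The sketch below illustrates that sequence on a synthetic fringe pattern; the carrier location, filter shape, and radius are assumptions chosen for the example and are not taken from the patent.

```python
import numpy as np

def reconstruct_sideband(hologram, carrier, radius):
    """Fourier-analyze a spatially heterodyned hologram, keep the sideband
    centered on the spatial-heterodyne carrier frequency, and inverse
    transform to recover a complex (amplitude + phase) image.

    `carrier` is the (row, col) carrier position in FFT-shifted coordinates
    and `radius` the filter radius in frequency bins; both are
    instrument-dependent assumptions.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    fy, fx = np.ogrid[:ny, :nx]
    mask = (fy - carrier[0])**2 + (fx - carrier[1])**2 <= radius**2
    filtered = np.where(mask, spectrum, 0.0)          # keep one sideband only
    return np.fft.ifft2(np.fft.ifftshift(filtered))   # complex field

# Synthetic fringe pattern with a carrier at 1/8 cycle per pixel
ny = nx = 256
y, x = np.mgrid[:ny, :nx]
holo = 1.0 + 0.5 * np.cos(2 * np.pi * x / 8.0)
field = reconstruct_sideband(holo, carrier=(ny // 2, nx // 2 + nx // 8), radius=10)
print(abs(field).mean())    # ~0.25, the amplitude of the extracted sideband
```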
NASA Astrophysics Data System (ADS)
Li, Deren; Du, Zhiqiang; Zhu, Yixuan; Wang, Tingsong
2009-09-01
Considerable damage has been done to cultural heritage sites around the world, ranging from natural erosion to deliberate destruction. With the development of information sciences, frontier technologies are actively being introduced to help protect cultural heritage sites. The new concept of a Digital Cultural Heritage has been presented for culture protection and is gradually becoming an efficient method to solve or mitigate various difficult problems. This paper puts forward a digitalization method for cultural heritage sites which rationally integrates and utilizes multiform surveying measurements. These techniques have been successfully implemented in two projects, namely the Digital Mogao Grottos and the Chi Lin Nunnery reconstruction. Our results prove that the concept of and the techniques utilized in Digital Cultural Heritage can not only contribute to research, preservation, management, interpretation, and representation of cultural heritages but can also help resolve the conflicts between tourism and protection.
Methyl-CpG island-associated genome signature tags
Dunn, John J
2014-05-20
Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.
Quasispecies Analyses of the HIV-1 Near-full-length Genome With Illumina MiSeq
Ode, Hirotaka; Matsuda, Masakazu; Matsuoka, Kazuhiro; Hachiya, Atsuko; Hattori, Junko; Kito, Yumiko; Yokomaku, Yoshiyuki; Iwatani, Yasumasa; Sugiura, Wataru
2015-01-01
Human immunodeficiency virus type-1 (HIV-1) exhibits high between-host genetic diversity and within-host heterogeneity, recognized as quasispecies. Because HIV-1 quasispecies fluctuate in terms of multiple factors, such as antiretroviral exposure and host immunity, analyzing the HIV-1 genome is critical for selecting effective antiretroviral therapy and understanding within-host viral coevolution mechanisms. Here, to obtain HIV-1 genome sequence information that includes minority variants, we sought to develop a method for evaluating quasispecies throughout the HIV-1 near-full-length genome using the Illumina MiSeq benchtop deep sequencer. To ensure the reliability of minority mutation detection, we applied an analysis method of sequence read mapping onto a consensus sequence derived from de novo assembly followed by iterative mapping and subsequent unique error correction. Deep sequencing analyses of an HIV-1 clone showed that the analysis method reduced erroneous base prevalence below 1% in each sequence position and discarded only < 1% of all collected nucleotides, maximizing the usage of the collected genome sequences. Further, we designed primer sets to amplify the HIV-1 near-full-length genome from clinical plasma samples. Deep sequencing of 92 samples in combination with the primer sets and our analysis method provided sufficient coverage to identify >1%-frequency sequences throughout the genome. When we evaluated sequences of pol genes from 18 treatment-naïve patients' samples, the deep sequencing results were in agreement with Sanger sequencing and identified numerous additional minority mutations. The results suggest that our deep sequencing method would be suitable for identifying within-host viral population dynamics throughout the genome. PMID:26617593
Coudray-Meunier, Coralie; Fraisse, Audrey; Martin-Latil, Sandra; Guillier, Laurent; Delannoy, Sabine; Fach, Patrick; Perelle, Sylvie
2015-05-18
Sensitive and quantitative detection of foodborne enteric viruses is classically achieved by quantitative RT-PCR (RT-qPCR). Recently, digital PCR (dPCR) was described as a novel approach to genome quantification without need for a standard curve. The performance of microfluidic digital RT-PCR (RT-dPCR) was compared to RT-qPCR for detecting the main viruses responsible for foodborne outbreaks (human Noroviruses (NoV) and Hepatitis A virus (HAV)) in spiked lettuce and bottled water. Two process controls (Mengovirus and Murine Norovirus) were used and external amplification controls (EAC) were added to examine inhibition of RT-qPCR and RT-dPCR. For detecting viral RNA and cDNA, the sensitivity of the RT-dPCR assays was either comparable to that of RT-qPCR (RNA of HAV, NoV GI, Mengovirus) or slightly (around 1 log10) decreased (NoV GII and MNV-1 RNA and of HAV, NoV GI, NoV GII cDNA). The number of genomic copies determined by dPCR was always from 0.4 to 1.7 log10 lower than the expected numbers of copies calculated by using the standard qPCR curve. Viral recoveries calculated by RT-dPCR were found to be significantly higher than by RT-qPCR for NoV GI, HAV and Mengovirus in water, and for NoV GII and HAV in lettuce samples. The RT-dPCR assay proved to be more tolerant to inhibitory substances present in lettuce samples. This absolute quantitation approach may be useful to standardize quantification of enteric viruses in bottled water and lettuce samples and may be extended to quantifying other human pathogens in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Grybauskas, Simonas; Balciuniene, Irena; Vetra, Janis
2007-01-01
The emerging market of digital cephalographs and computerized cephalometry is overwhelming the need to examine the advantages and drawbacks of manual cephalometry; meanwhile, small offices continue to benefit from the economic efficiency and ease of use of analogue cephalograms. The use of modern cephalometric software requires import of digital cephalograms or digital capture of analogue data: scanning and digital photography. The validity of digital photographs of analogue headfilms rather than original headfilms in clinical practice has not been well established. Digital photography could be a fast and inexpensive method of digital capture of analogue cephalograms for use in digital cephalometry. The objective of this study was to determine the validity and reproducibility of measurements obtained from digital photographs of analogue headfilms in lateral cephalometry. Analogue cephalometric radiographs were performed on 15 human dry skulls. Each of them was traced on acetate paper and photographed three times independently. Acetate tracings and digital photographs were digitized and analyzed in cephalometric software. A linear regression model, paired t-test intergroup analysis, and the coefficient of repeatability were used to assess validity and reproducibility for 63 angular, linear and derivative measurements. 54 out of 63 measurements were determined to have clinically acceptable reproducibility in the acetate tracing group, as did 46 out of 63 in the digital photography group. The worst reproducibility was determined for measurements dependent on landmarks of incisors and poorly defined outlines, the majority of them being angular measurements. Validity was acceptable for all measurements, and although statistically significant differences between methods existed for as many as 15 parameters, they appeared to be clinically insignificant, being smaller than 1 unit of measurement. Validity was acceptable for 59 of 63 measurements obtained from digital photographs, substantiating the use of digital photography for headfilm capture and computer-aided cephalometric analysis.
Mechanical properties of the human hand digits: Age-related differences
Park, Jaebum; Pazin, Nemanja; Friedman, Jason; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
Background: Mechanical properties of human digits may have significant implications for hand function. We quantified several mechanical characteristics of individual digits in young and older adults. Methods: Digit tip friction was measured at several normal force values using a method of induced relative motion between the digit tip and the object surface. A modified quick-release paradigm was used to estimate digit apparent stiffness, damping, and inertial parameters. The subjects grasped a vertical handle instrumented with force/moment sensors using a prismatic grasp with four digits; the handle was fixed to the table. Unexpectedly, one of the sensors yielded, leading to a quick displacement of the corresponding digit. A second-order, linear model was used to fit the force/displacement data. Findings: Friction of the digit pads was significantly lower in older adults. The apparent stiffness coefficient values were higher while the damping coefficients were lower in older adults, leading to a lower damping ratio. The damping ratio was above unity for most data in young adults and below unity for older adults. Quick release of a digit led to force changes in other digits of the hand, likely due to inertial hand properties. These phenomena of "mechanical enslaving" were smaller in older adults, although no significant difference was found in the inertial parameter in the two groups. Interpretations: The decreased friction and damping ratio present challenges for the control of everyday prehensile tasks. They may lead to excessive digit forces and low stability of the grasped object. PMID:24355703
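The quick-release analysis amounts to fitting F = m·x'' + b·x' + k·x to the recorded force and displacement. The sketch below shows a least-squares version of such a fit, with the damping ratio derived from the estimated coefficients; numerical differentiation of the displacement record and the synthetic test values are assumptions, not the authors' exact identification procedure.

```python
import numpy as np

def fit_second_order(t, displacement, force):
    """Least-squares fit of F(t) = m*x''(t) + b*x'(t) + k*x(t) to
    quick-release data, returning apparent inertia m, damping b, and
    stiffness k. Derivatives are taken numerically, which assumes a
    reasonably clean, well-sampled displacement record.
    """
    x = np.asarray(displacement, dtype=float)
    v = np.gradient(x, t)
    a = np.gradient(v, t)
    design = np.column_stack([a, v, x])
    (m, b, k), *_ = np.linalg.lstsq(design, np.asarray(force, float), rcond=None)
    zeta = b / (2.0 * np.sqrt(k * m))      # damping ratio discussed in the paper
    return m, b, k, zeta

# Synthetic check with known parameters m=0.02 kg, b=1.0 N*s/m, k=200 N/m
t = np.linspace(0, 0.2, 2000)
x = 0.01 * np.exp(-5 * t) * np.cos(60 * t)
v = np.gradient(x, t); a = np.gradient(v, t)
f = 0.02 * a + 1.0 * v + 200.0 * x
print(fit_second_order(t, x, f))           # ~ (0.02, 1.0, 200.0, 0.25)
```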
Phylo_dCor: distance correlation as a novel metric for phylogenetic profiling.
Sferra, Gabriella; Fratini, Federica; Ponzi, Marta; Pizzi, Elisabetta
2017-09-05
The development of powerful methods to predict functional and/or physical protein-protein interactions from genome sequences is one of the main tasks of the post-genomic era. Phylogenetic profiling allows protein-protein interactions to be predicted at the whole-genome level in both prokaryotes and eukaryotes, and is therefore considered one of the most promising approaches. Here, we propose an improvement of phylogenetic profiling that enables the handling of large genomic datasets and the inference of global protein-protein interactions. The method uses the distance correlation as a new measure of phylogenetic profile similarity. We constructed and assessed robust reference sets and developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation that makes it applicable to large genomic data. Using Saccharomyces cerevisiae and Escherichia coli genome datasets, we showed that Phylo-dCor outperforms previously described phylogenetic profiling methods based on mutual information and Pearson's correlation as measures of profile similarity. Two R scripts that can be run on a wide range of machines are available upon request.
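For readers unfamiliar with the metric, the sketch below computes the distance correlation between two phylogenetic profiles in plain NumPy. This is only an illustration of the standard distance-correlation formula; the published tool, Phylo-dCor, is a parallelized R implementation, and the profile values here are hypothetical.

```python
import numpy as np

def distance_correlation(x, y):
    """Distance correlation between two equally long profiles (vectors)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    if dvar_x == 0 or dvar_y == 0:
        return 0.0
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

# Two hypothetical profiles of one protein pair across eight genomes
# (e.g. normalized conservation scores).
p1 = [0.9, 0.8, 0.1, 0.0, 0.7, 0.9, 0.2, 0.1]
p2 = [1.0, 0.7, 0.0, 0.1, 0.8, 0.8, 0.1, 0.0]
print(distance_correlation(p1, p2))
```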
Enzymatically Generated CRISPR Libraries for Genome Labeling and Screening
Lane, Andrew B.; Strzelecka, Magdalena; Ettinger, Andreas; Grenfell, Andrew W.; Wittmann, Torsten; Heald, Rebecca
2015-01-01
Summary: CRISPR-based technologies have emerged as powerful tools to alter genomes and mark chromosomal loci, but an inexpensive method for generating large numbers of RNA guides for whole-genome screening and labeling is lacking. Using a method that permits library construction from any source of DNA, we generated guide libraries that label repetitive loci or a single chromosomal locus in Xenopus egg extracts and show that a complex library can target the E. coli genome at high frequency. PMID:26212133
Phytozome Comparative Plant Genomics Portal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodstein, David; Batra, Sajeev; Carlson, Joseph
2014-09-09
The Dept. of Energy Joint Genome Institute is a genomics user facility supporting DOE mission science in the areas of Bioenergy, Carbon Cycling, and Biogeochemistry. The Plant Program at the JGI applies genomic, analytical, computational, and informatics platforms and methods to: (1) understand and accelerate the improvement (domestication) of bioenergy crops; (2) characterize and moderate plant response to climate change; (3) use comparative genomics to identify constrained elements and infer gene function; (4) build high-quality genomic resource platforms of JGI Plant Flagship genomes for functional and experimental work; and (5) expand functional genomic resources for Plant Flagship genomes.
Single-Cell Whole-Genome Amplification and Sequencing: Methodology and Applications.
Huang, Lei; Ma, Fei; Chapman, Alec; Lu, Sijia; Xie, Xiaoliang Sunney
2015-01-01
We present a survey of single-cell whole-genome amplification (WGA) methods, including degenerate oligonucleotide-primed polymerase chain reaction (DOP-PCR), multiple displacement amplification (MDA), and multiple annealing and looping-based amplification cycles (MALBAC). The key parameters to characterize the performance of these methods are defined, including genome coverage, uniformity, reproducibility, unmappable rates, chimera rates, allele dropout rates, false positive rates for calling single-nucleotide variations, and ability to call copy-number variations. Using these parameters, we compare five commercial WGA kits by performing deep sequencing of multiple single cells. We also discuss several major applications of single-cell genomics, including studies of whole-genome de novo mutation rates, the early evolution of cancer genomes, circulating tumor cells (CTCs), meiotic recombination of germ cells, preimplantation genetic diagnosis (PGD), and preimplantation genomic screening (PGS) for in vitro-fertilized embryos.
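As a simple illustration of two of the metrics listed above, the following sketch computes genome coverage and an allele dropout rate from hypothetical per-locus summaries; the input structures are stand-ins, not the authors' pipeline output.

```python
def genome_coverage(covered_bases, genome_length):
    """Fraction of the reference genome covered by at least one read."""
    return covered_bases / genome_length

def allele_dropout_rate(single_cell_genotypes, bulk_heterozygous_sites):
    """Fraction of known heterozygous sites called homozygous in the single cell."""
    dropped = sum(1 for site in bulk_heterozygous_sites
                  if single_cell_genotypes.get(site) in ("0/0", "1/1"))
    return dropped / len(bulk_heterozygous_sites)

# Toy example: two of four bulk-heterozygous sites drop one allele after WGA.
het_sites = ["chr1:1000", "chr1:2000", "chr2:500", "chr3:750"]
cell_calls = {"chr1:1000": "0/1", "chr1:2000": "1/1", "chr2:500": "0/1", "chr3:750": "0/0"}
print(genome_coverage(2.7e9, 3.1e9), allele_dropout_rate(cell_calls, het_sites))
```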
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2017-08-01
For digit-color synaesthetes, digits elicit vivid experiences of color that are highly consistent for each individual. The conscious experience of synaesthesia is typically unidirectional: digits evoke colors but not vice versa. There is an ongoing debate about whether synaesthetes have a memory advantage over non-synaesthetes. One key question in this debate is whether synaesthetes have a general superiority or whether any benefit is specific to a certain type of material. Here, we focus on immediate serial recall and ask digit-color synaesthetes and controls to memorize digit and color sequences. We developed a sensitive staircase method that manipulates presentation duration to measure participants' serial recall of both overlearned and novel sequences. Our results show that synaesthetes can activate digit information to enhance serial memory for color sequences. When color sequences corresponded to ascending or descending digit sequences, synaesthetes encoded these sequences at a faster rate than their non-synaesthete counterparts and faster than non-structured color sequences. However, encoding color sequences was approximately 200 ms slower than encoding digit sequences directly, independent of group and condition, which shows that the translation process is time consuming. These results suggest that memory advantages in synaesthesia require a modified dual-coding account, in which secondary (synaesthetically linked) information is useful only if it is more memorable than the primary information to be recalled. Our study further shows that duration thresholds are a sensitive method for measuring subtle differences in serial recall performance.
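The abstract does not specify the staircase rules, so the sketch below shows only a generic 1-up/1-down staircase over presentation duration with an assumed simulated observer; the step size, bounds, and threshold estimate are illustrative, not the study's procedure.

```python
import math
import random

def run_staircase(run_trial, start_ms=500, step_ms=50, floor_ms=50, n_trials=60):
    """run_trial(duration_ms) -> True if the sequence was recalled correctly."""
    duration, history = start_ms, []
    for _ in range(n_trials):
        correct = run_trial(duration)
        history.append(duration)
        # Shorter (harder) after a correct recall, longer (easier) after an error.
        duration = max(floor_ms, duration - step_ms) if correct else duration + step_ms
    tail = history[len(history) // 2:]   # crude threshold: mean duration over the last half
    return sum(tail) / len(tail)

def toy_observer(duration_ms, true_threshold=300.0, slope=0.02):
    """Simulated participant: recall probability rises with presentation duration."""
    p = 1.0 / (1.0 + math.exp(-slope * (duration_ms - true_threshold)))
    return random.random() < p

print("estimated duration threshold (ms):", run_staircase(toy_observer))
```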
NASA Technical Reports Server (NTRS)
Couvillon, L. A., Jr.; Carl, C.; Goldstein, R. M.; Posner, E. C.; Green, R. R. (Inventor)
1973-01-01
A method and apparatus are described for synchronizing a received PCM communications signal without requiring a separate synchronizing channel. The technique provides digital correlation of the received signal with a reference signal, first with its unmodulated subcarrier and then with a bit sync code modulated subcarrier, where the code sequence length is equal in duration to each data bit.
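A toy illustration of the two-stage correlation idea described above: a reference square-wave subcarrier and a subcarrier modulated by a bit-sync code whose duration equals one data bit are correlated against a received stream. The sample rate, subcarrier frequency, bit rate, and code chips are assumed values, not those of the patented system.

```python
import numpy as np

def correlate_lag(received, reference):
    """Lag (in samples) at which the cross-correlation magnitude peaks."""
    corr = np.correlate(received, reference, mode="full")
    return int(np.argmax(np.abs(corr))) - (len(reference) - 1)

fs, f_sub, bit_rate = 10_000, 1_000, 100       # sample rate, subcarrier, bit rate (assumed)
samples_per_bit = fs // bit_rate
t = np.arange(0, 0.1, 1 / fs)
subcarrier = np.where(np.sin(2 * np.pi * f_sub * t) >= 0, 1.0, -1.0)   # square-wave subcarrier

# Second-stage reference: subcarrier modulated by a bit-sync code whose
# duration equals one data bit (10 chips x 10 samples here).
chips = np.array([1, -1, 1, 1, -1, 1, -1, -1, 1, -1])
code = np.repeat(chips, samples_per_bit // len(chips))
sync_ref = subcarrier[: len(code)] * code

# Toy received stream: the sync code followed by nine random data bits, all on the subcarrier.
rng = np.random.default_rng(1)
bits = np.repeat(np.sign(rng.standard_normal(9)), samples_per_bit)
stream = np.concatenate([code, bits]) * subcarrier
print("bit-sync alignment (samples):", correlate_lag(stream, sync_ref))
```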
ERIC Educational Resources Information Center
Nasah, Angelique; DaCosta, Boaventura; Kinsell, Carolyn; Seok, Soonhwa
2010-01-01
Research suggests students' use of information and communication technology (ICT) may be more a matter of digital literacy and access rather than a generational trait. We sought to identify ICT preferences of post-secondary students (N = 580) through a Digital Propensity Index (DPI), investigating communication methods, Internet practices and the…
Digital Authenticity and Integrity: Digital Cultural Heritage Documents as Research Resources
ERIC Educational Resources Information Center
Bradley, Rachael
2005-01-01
This article presents the results of a survey addressing methods of securing digital content and ensuring the content's authenticity and integrity, as well as the perceived importance of authenticity and integrity. The survey was sent to 40 digital repositories in the United States and Canada between June 30 and July 19, 2003. Twenty-two…
Validity of radiographic assessment of the knee joint space using automatic image analysis.
Komatsu, Daigo; Hasegawa, Yukiharu; Kojima, Toshihisa; Seki, Taisuke; Ikeuchi, Kazuma; Takegami, Yasuhiko; Amano, Takafumi; Higuchi, Yoshitoshi; Kasai, Takehiro; Ishiguro, Naoki
2016-09-01
The present study investigated whether there were differences between automatic and manual measurements of the minimum joint space width (mJSW) on knee radiographs. Knee radiographs of 324 participants in a systematic health screening were analyzed using the following three methods: manual measurement of film-based radiographs (Manual), manual measurement of digitized radiographs (Digital), and automatic measurement of digitized radiographs (Auto). The mean mJSWs on the medial and lateral sides of the knees were determined using each method, and measurement reliability was evaluated using intra-class correlation coefficients. Measurement errors were compared between normal knees and knees with radiographic osteoarthritis. All three methods demonstrated good reliability, although the reliability was slightly lower with the Manual method than with the other methods. On the medial and lateral sides of the knees, the mJSWs were the largest in the Manual method and the smallest in the Auto method. The measurement errors of each method were significantly larger for normal knees than for radiographic osteoarthritis knees. The mJSW measurements are more accurate and reliable with the Auto method than with the Manual or Digital method, especially for normal knees. Therefore, the Auto method is ideal for the assessment of the knee joint space.
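The study reports intra-class correlation coefficients; since the ICC form is not stated in the abstract, the sketch below assumes ICC(2,1) (two-way random effects, single measurement, absolute agreement) and uses made-up mJSW values for three methods.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1) for a (n_subjects, k_raters) array of measurements."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    row_means, col_means = data.mean(axis=1), data.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()      # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()      # between-rater/method
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Example: three methods (Manual, Digital, Auto) measuring the same five knees (mm).
knees = [[3.1, 3.0, 2.9], [4.2, 4.1, 4.0], [2.5, 2.4, 2.3], [5.0, 4.9, 4.8], [3.7, 3.6, 3.5]]
print(round(icc_2_1(knees), 3))
```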
Accuracy and training population design for genomic selection in elite north american oats
USDA-ARS?s Scientific Manuscript database
Genomic selection (GS) is a method to estimate the breeding values of individuals by using markers throughout the genome. We evaluated the accuracies of GS using data from five traits on 446 oat lines genotyped with 1005 Diversity Array Technology (DArT) markers and two GS methods (RR-BLUP and Bayes...
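RR-BLUP, one of the two GS methods named above, shrinks all marker effects equally via ridge regression. Below is a minimal sketch with simulated genotypes and a fixed shrinkage parameter; in practice the variance components would be estimated (e.g. by REML), and the data here are simulated placeholders, not the oat dataset.

```python
import numpy as np

def rrblup(Z, y, lam):
    """Z: (n_lines, n_markers) genotypes coded -1/0/1; y: phenotypes; lam: shrinkage."""
    yc = y - y.mean()                    # absorb the overall mean
    m = Z.shape[1]
    # Mixed-model equations for marker effects: (Z'Z + lam*I) u = Z'yc
    return np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ yc)

def gebv(Z_new, u, mu):
    """Genomic estimated breeding values for new lines."""
    return mu + Z_new @ u

rng = np.random.default_rng(7)
Z = rng.integers(-1, 2, size=(200, 1000)).astype(float)   # 200 lines x 1000 biallelic markers
true_u = rng.normal(0, 0.05, 1000)
y = Z @ true_u + rng.normal(0, 1.0, 200)
u_hat = rrblup(Z, y, lam=1.0 / 0.05 ** 2)   # sigma_e^2 / sigma_marker^2 for the simulated data
print("accuracy:", np.corrcoef(Z @ u_hat, Z @ true_u)[0, 1])
```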
Williams, Lisa; Moeke-Maxwell, Tess; Kothari, Shuchi; Pearson, Sarina; Gott, Merryn; Black, Stella; Frey, Rosemary; Wharemate, Rawiri; Hansen, Whio
2015-04-01
Māori regard stories as a preferred method for imparting knowledge through waiata (song), moteatea (poetry), kauwhau (moralistic tale), pakiwaitara (story) and purakau (myths). Storytelling is also an expression of tino rangatiratanga (self-determination); Māori have the right to manage their knowledge, which includes embodiment in forms transcending typical western formulations. Digital storytelling is a process by which 'ordinary people' create short autobiographical videos. It has found application in numerous disciplines, including public health, and has been used to articulate the experiences of those often excluded from knowledge production. The aim was to explore the use of digital storytelling as a research method for learning about whānau (family) experiences of providing end-of-life care for kaumātua (older people). Eight Māori and their nominated co-creators attended a three-day digital storytelling workshop led by co-researchers Shuchi Kothari and Sarina Pearson. They were guided in the creation of first-person digital stories about caring for kaumātua. The videos were shared at a group screening, and participants completed questionnaires about the workshop and their videos. A Kaupapa Māori narrative analysis, which privileges Māori worldviews and indigenous knowledge systems, was applied to their stories to gain new perspectives on Māori end-of-life caregiving practices. Digital storytelling is an appropriate method because Māori society is an oral/aural one. It allows Māori to share their stories with others, thus promoting community support at the end of life, befitting a public health approach. Digital storytelling can be a useful method for Māori to express their experiences of providing end-of-life caregiving.
Feasibility of digital imaging to characterize earth materials : part 2.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
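The report's two image-based methods are not detailed in this truncated abstract, so the sketch below only illustrates the general idea of estimating a grain-size distribution from an image by thresholding and connected-component labeling; the threshold, pixel-to-mm calibration, and count-based percent-finer measure are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def grain_size_distribution(gray_image, threshold, mm_per_pixel):
    """Equivalent circular grain diameters (mm) from a grayscale image of particles."""
    grains = np.asarray(gray_image) > threshold         # binary mask of grain pixels
    labels, n = ndimage.label(grains)                   # connected-component labeling
    areas_px = ndimage.sum(grains, labels, index=np.arange(1, n + 1))
    return 2.0 * np.sqrt(np.asarray(areas_px) / np.pi) * mm_per_pixel

def percent_finer(diameters_mm, sieve_mm):
    """Percent of grains finer than a sieve opening (by count here, not by mass)."""
    d = np.asarray(diameters_mm)
    return 100.0 * float(np.mean(d < sieve_mm)) if d.size else 0.0
```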
Feasibility of digital imaging to characterize earth materials : part 6.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 3.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
A method for reducing sampling jitter in digital control systems
NASA Technical Reports Server (NTRS)
Anderson, T. O.; Hurd, W. J.
1969-01-01
A digital phase-locked loop system is designed by smoothing the proportional control with a low-pass filter. This method does not significantly affect the loop dynamics when the smoothing filter bandwidth is wide compared to the loop bandwidth.
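A generic sketch of the idea: a second-order digital phase-locked loop in which the proportional branch is passed through a wide-band one-pole low-pass filter before driving the phase update. The gains, smoothing coefficient, and test signal are illustrative values, not those of the NASA report.

```python
import numpy as np

def run_dpll(phase_in, kp=0.05, ki=0.002, alpha=0.3):
    """alpha: one-pole low-pass coefficient for the proportional branch (near 1 = wide band)."""
    theta, integ, smoothed_p = 0.0, 0.0, 0.0
    estimates = []
    for ph in phase_in:
        err = np.angle(np.exp(1j * (ph - theta)))       # phase error wrapped to (-pi, pi]
        smoothed_p += alpha * (kp * err - smoothed_p)   # smoothed proportional control
        integ += ki * err                               # integral branch (frequency estimate)
        theta += smoothed_p + integ                     # NCO phase update
        estimates.append(theta)
    return np.array(estimates)

# Track a constant-frequency phase ramp corrupted by sampling jitter.
rng = np.random.default_rng(3)
true_phase = 0.02 * np.arange(2000)
est = run_dpll(true_phase + 0.3 * rng.standard_normal(2000))
print("steady-state tracking error:", float(np.mean(np.abs(est[-200:] - true_phase[-200:]))))
```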
Feasibility of digital imaging to characterize earth materials : part 1.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 4.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 5.
DOT National Transportation Integrated Search
2012-05-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Li, Caiqin; Wang, Yan; Ying, Peiyuan; Ma, Wuqiang; Li, Jianguo
2015-01-01
The high level of physiological fruitlet abscission in litchi (Litchi chinensis Sonn.) causes severe yield loss. Cell separation occurs at the fruit abscission zone (FAZ) and can be triggered by ethylene; however, the molecular events occurring in the FAZ remain poorly understood. Here, a genome-wide digital transcript abundance (DTA) analysis of putative fruit-abscission-related genes regulated by ethephon in litchi was performed. More than 81 million high-quality reads from seven ethephon-treated and untreated control libraries were obtained by high-throughput sequencing. Through DTA profile analysis combined with Gene Ontology and KEGG pathway enrichment analyses, a total of 2730 statistically significant candidate genes were found to be involved in ethephon-promoted litchi fruitlet abscission. Of these, 1867 were early-responsive genes whose expression was up- or down-regulated from 0 to 1 d after treatment. The most affected genes included those related to ethylene biosynthesis and signaling, auxin transport and signaling, transcription factors (TFs), protein ubiquitination, ROS response, calcium signal transduction, and cell wall modification. These genes could be clustered into four groups and 13 subgroups according to their expression patterns. qRT-PCR confirmed the expression patterns of 41 selected candidate genes, supporting the accuracy of the DTA data. Ethephon treatment significantly increased fruit abscission and ethylene production in fruitlets. A model of the molecular events controlling ethephon-promoted litchi fruitlet abscission is proposed: the increased ethylene evolution in fruitlets would suppress the synthesis and polar transport of auxin and trigger abscission signaling. To the best of our knowledge, this is the first genome-wide monitoring of the gene expression profile in the FAZ-enriched pedicel during ethephon-induced litchi fruit abscission. This study will contribute to a better understanding of the molecular regulatory mechanism of fruit abscission in litchi. PMID:26217356
VHDL Modeling and Simulation of a Digital Image Synthesizer for Countering ISAR
2003-06-01
This thesis discusses VHDL modeling and simulation of a full custom Application Specific Integrated Circuit (ASIC) for a Digital Image Synthesizer (DIS)... necessary for a given application. With such a digital method, it is possible for a small ship to appear as large as an aircraft carrier or any high... The Digital Image Synthesizer (DIS) is an Application Specific Integrated Circuit
Mating programs including genomic relationships and dominance effects
USDA-ARS?s Scientific Manuscript database
Breed associations, artificial-insemination organizations, and on-farm software providers need new computerized mating programs for genomic selection so that genomic inbreeding could be minimized by comparing genotypes of potential mates. Efficient methods for transferring elements of the genomic re...
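One way such a mating program can use genomic relationships: the expected inbreeding of a progeny is half the genomic relationship between its parents, so mates can be allocated to keep that value low. The sketch below assumes a precomputed genomic relationship matrix G and uses a simple greedy allocation; it is an illustration of the principle, not the software described above.

```python
import numpy as np

def expected_progeny_inbreeding(G, sire, dam):
    """Expected inbreeding of a progeny = half the genomic relationship of its parents."""
    return 0.5 * G[sire, dam]

def greedy_mate_allocation(G, sires, dams, max_uses_per_sire=5):
    """For each dam, pick the available sire minimizing expected progeny inbreeding."""
    uses = {s: 0 for s in sires}
    plan = {}
    for d in dams:
        candidates = [s for s in sires if uses[s] < max_uses_per_sire]
        best = min(candidates, key=lambda s: expected_progeny_inbreeding(G, s, d))
        plan[d] = best
        uses[best] += 1
    return plan

# Toy 4-animal genomic relationship matrix (animals 0-1 are sires, 2-3 are dams).
G = np.array([[1.00, 0.10, 0.30, 0.05],
              [0.10, 1.00, 0.02, 0.20],
              [0.30, 0.02, 1.00, 0.15],
              [0.05, 0.20, 0.15, 1.00]])
print(greedy_mate_allocation(G, sires=[0, 1], dams=[2, 3]))
```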