Science.gov

Sample records for large-scale transposon mutagenesis

  1. Large-Scale Transposon Mutagenesis of Photobacterium profundum SS9 Reveals New Genetic Loci Important for Growth at Low Temperature and High Pressure

    PubMed Central

    Lauro, Federico M.; Tran, Khiem; Vezzi, Alessandro; Vitulo, Nicola; Valle, Giorgio; Bartlett, Douglas H.

    2008-01-01

    Microorganisms adapted to piezopsychrophilic growth dominate the majority of the biosphere that is at relatively constant low temperatures and high pressures, but the genetic bases of these adaptations are largely unknown. Here we report the use of transposon mutagenesis with the deep-sea bacterium Photobacterium profundum strain SS9 to isolate dozens of mutant strains whose growth is impaired at low temperature and/or altered as a function of hydrostatic pressure. In many cases the gene mutation-growth phenotype relationship was verified by complementation analysis. The largest fraction of loci associated with temperature sensitivity was involved in biosynthesis of the cell envelope, in particular of extracellular polysaccharide. The largest fraction of loci associated with pressure sensitivity was involved in chromosomal structure and function. Genes for ribosome assembly and function were found to be important for both low-temperature and high-pressure growth. Likewise, adaptation to both temperature and pressure was affected by mutations in a number of sensory and regulatory loci, suggesting the importance of signal transduction mechanisms in adaptation to either physical parameter. These were the first global analyses of genes conditionally required for low-temperature or high-pressure growth in a deep-sea microorganism. PMID:18156275

  2. Construction of a large-scale Burkholderia cenocepacia J2315 transposon mutant library

    NASA Astrophysics Data System (ADS)

    Wong, Yee-Chin; Pain, Arnab; Nathan, Sheila

    2014-09-01

    Burkholderia cenocepacia, a pathogenic member of the Burkholderia cepacia complex (Bcc), has emerged as a significant threat to cystic fibrosis patients, in whom infection often leads to the fatal clinical manifestation known as cepacia syndrome. Many studies have investigated the pathogenicity of B. cenocepacia as well as its ability to become highly resistant to many of the antibiotics currently in use. Studies have also been undertaken to understand the pathogen's capacity to adapt to and survive in a broad range of environments. Transposon-based mutagenesis has been widely used to create insertional knock-out mutants and, coupled with recent advances in sequencing technology, has yielded robust tools for studying gene function genome-wide through the assembly of saturated transposon mutant libraries. In this study, we describe the construction of a large-scale library of B. cenocepacia transposon mutants. To create transposon mutants of B. cenocepacia strain J2315, electrocompetent bacteria were electrotransformed with the EZ-Tn5 transposome. Tetracycline-resistant colonies were harvested from selective agar and pooled. Mutants were generated in multiple batches of ~20,000 to 40,000 mutants each. Transposon insertion was validated by PCR amplification of the transposon region. In conclusion, a saturated B. cenocepacia J2315 transposon mutant library with an estimated total of 500,000 mutants was successfully constructed. This mutant library can now be exploited as a genetic tool to assess the function of every gene in the genome, facilitating the discovery of genes important for bacterial survival, adaptation, and virulence.
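The saturation claim for a 500,000-mutant library can be checked with a back-of-the-envelope Poisson calculation. The sketch below is illustrative only: the ~8 Mb genome size, 1 kb average gene length, and ~7,000 gene count are assumed round figures, not numbers from the study.

```python
import math

def saturation_probability(n_mutants, genome_bp, gene_bp, n_genes):
    """Poisson approximation: expected insertions per gene is
    n_mutants * gene_bp / genome_bp; a gene is hit at least once
    with probability 1 - exp(-lambda)."""
    lam = n_mutants * gene_bp / genome_bp
    p_hit = 1 - math.exp(-lam)
    return p_hit, p_hit * n_genes  # per-gene hit prob, expected genes hit

# Assumed round numbers (not from the study): ~8 Mb genome,
# 1 kb average gene, ~7,000 genes; 500,000 mutants as reported.
p_hit, genes_hit = saturation_probability(500_000, 8_000_000, 1_000, 7_000)
```

At these numbers each gene expects ~62 insertions, so essentially every nonessential gene should carry at least one hit, which is consistent with calling the library saturated.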

  3. Random tag insertions by Transposon Integration mediated Mutagenesis (TIM).

    PubMed

    Hoeller, Brigitte M; Reiter, Birgit; Abad, Sandra; Graze, Ina; Glieder, Anton

    2008-10-01

    Transposon Integration mediated Mutagenesis (TIM) is a broadly applicable tool for protein engineering. The method combines random integration of modified bacteriophage Mu transposons with their subsequent defined excision using the type IIS restriction endonuclease AarI. TIM enables deletion or insertion of an arbitrary number of bases at random positions, insertion of functional sequence tags at random positions, replacement of randomly selected triplets with a specific codon (e.g. scanning), and site-saturation mutagenesis. As a proof of concept, a transposon named GeneOpenerAarIKan was designed and employed to introduce 6xHis tags randomly into the esterase EstC from Burkholderia gladioli. A TIM library was screened with colony-based assays for clones with an integrated 6xHis tag and for clones exhibiting esterase activity. This strategy enables the isolation of randomly tagged active enzymes in single mutagenesis experiments.

  4. A mariner transposon vector adapted for mutagenesis in oral streptococci

    PubMed Central

    Nilsson, Martin; Christiansen, Natalia; Høiby, Niels; Twetman, Svante; Givskov, Michael; Tolker-Nielsen, Tim

    2014-01-01

    This article describes the construction and characterization of a mariner-based transposon vector designed for use in oral streptococci, with potential use in other Gram-positive bacteria as well. The new transposon vector, termed pMN100, contains the temperature-sensitive origin of replication repATs-pWV01, a selectable kanamycin resistance gene, a Himar1 transposase gene regulated by a xylose-inducible promoter, and an erythromycin resistance gene flanked by himar inverted repeats. The pMN100 plasmid was transformed into Streptococcus mutans UA159, and transposon mutagenesis was performed using a protocol designed to achieve high numbers of separate transpositions despite a low transposition frequency. The distribution of transposon inserts in 30 randomly picked mutants suggested that mariner transposon mutagenesis is unbiased in S. mutans. A generated transposon mutant library containing 5000 mutants was used in a screen to identify genes involved in the production of sucrose-dependent extracellular matrix components. Mutants with transposon inserts in genes encoding glycosyltransferases and the competence-related secretory locus were predominantly found in this screen. PMID:24753509
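A claim of unbiased insertion from mapped sites can be checked with a simple chi-square test against a uniform distribution. The sketch below is minimal and illustrative: the 2 Mb genome length, the binning, and the evenly spread toy positions are assumptions, and with only 30 sites such a test has limited power.

```python
def chi_square_uniformity(positions, genome_len, n_bins=10):
    """Chi-square statistic for uniform spread of insertion sites
    across n_bins equal windows (df = n_bins - 1)."""
    counts = [0] * n_bins
    for pos in positions:
        counts[min(pos * n_bins // genome_len, n_bins - 1)] += 1
    expected = len(positions) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

# 30 mapped inserts on a hypothetical 2 Mb genome, spread evenly:
sites = [i * 66_000 for i in range(30)]
stat = chi_square_uniformity(sites, 2_000_000)
# Values below the df=9, alpha=0.05 critical value (~16.92) are
# consistent with unbiased insertion; 'stat' here is ~0.67.
```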

  5. Genome-wide transposon mutagenesis in pathogenic Leptospira species.

    PubMed

    Murray, Gerald L; Morel, Viviane; Cerqueira, Gustavo M; Croda, Julio; Srikram, Amporn; Henry, Rebekah; Ko, Albert I; Dellagostin, Odir A; Bulach, Dieter M; Sermswan, Rasana W; Adler, Ben; Picardeau, Mathieu

    2009-02-01

    Leptospira interrogans is the most common cause of leptospirosis in humans and animals. Genetic analysis of L. interrogans has been severely hindered by a lack of tools for genetic manipulation. Recently we developed the mariner-based transposon Himar1 to generate the first defined mutants in L. interrogans. In this study, a total of 929 independent transposon mutants were obtained and the location of each insertion determined. Of these mutants, 721 were located in the protein coding regions of 551 different genes. While sequence analysis of transposon insertion sites indicated that transposition occurred in an essentially random fashion in the genome, 25 unique transposon mutants were found to exhibit insertions into genes encoding 16S or 23S rRNAs, suggesting these genes are insertional hot spots in the L. interrogans genome. In contrast, loci containing notionally essential genes involved in lipopolysaccharide and heme biosynthesis showed few transposon insertions. The effect of gene disruption on the virulence of a selected set of defined mutants was investigated using the hamster model of leptospirosis. Two attenuated mutants with disruptions in hypothetical genes were identified, thus validating the use of transposon mutagenesis for the identification of novel virulence factors in L. interrogans. This library provides a valuable resource for the study of gene function in L. interrogans. Combined with the genome sequences of L. interrogans, this provides an opportunity to investigate genes that contribute to pathogenesis and will provide a better understanding of the biology of L. interrogans. PMID:19047402
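The observation that 721 insertions fell into only 551 distinct genes can be compared with the expectation under uniform random insertion. A rough sketch, assuming ~3,000 equally likely, similarly sized protein-coding genes (a round figure, not taken from the paper):

```python
def expected_distinct_genes(n_insertions, n_genes):
    """Expected number of distinct genes hit when each of
    n_insertions lands in one of n_genes uniformly at random."""
    return n_genes * (1 - (1 - 1 / n_genes) ** n_insertions)

# 721 mapped insertions; ~3,000 equally likely genes is an assumed
# round figure. Uniform insertion predicts ~641 distinct genes hit,
# more than the 551 observed -- repeated hits (e.g. the rRNA hot
# spots) concentrate insertions and lower the distinct count.
expected = expected_distinct_genes(721, 3000)
```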

  6. Sleeping Beauty transposon insertional mutagenesis based mouse models for cancer gene discovery

    PubMed Central

    Moriarity, Branden S; Largaespada, David A

    2016-01-01

    Large-scale genomic efforts to study human cancer, such as The Cancer Genome Atlas (TCGA), have identified numerous cancer drivers in a wide variety of tumor types. However, there are limitations to this approach: the mutations and expression or copy number changes that are identified are not always clearly functionally relevant, and only annotated genes and genetic elements are thoroughly queried. The use of complementary, nonbiased, functional approaches to identify drivers of cancer development and progression is ideal to maximize the rate at which cancer discoveries are achieved. One such approach that has been successful is the Sleeping Beauty (SB) transposon-based mutagenesis system in mice. This system uses a conditionally expressed transposase and a mutagenic transposon allele to target mutagenesis to somatic cells of a given tissue, causing random mutations that lead to tumor development. Analysis of tumors for transposon common insertion sites (CIS) identifies candidate cancer genes specific to that tumor type. While similar screens have been performed in mice with the PiggyBac (PB) transposon and with viral approaches, we limit extensive discussion to SB. Here we discuss the basic structure of these screens, the screens that have been performed, and the methods used to identify CIS. PMID:26051241
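At its simplest, CIS identification is a test for genomic windows with improbably many insertions under a uniform background. The sketch below uses a plain Poisson model with an assumed window size, cutoff, and toy data; published CIS methods use more sophisticated kernel-based or Monte Carlo statistics.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1 - sum(lam ** i * math.exp(-lam) / math.factorial(i)
                   for i in range(k))

def call_cis(insertions, genome_len, window=10_000, alpha=1e-3):
    """Flag windows whose insertion count is improbably high under a
    uniform Poisson background (a crude stand-in for published CIS
    statistics)."""
    n_windows = genome_len // window
    lam = len(insertions) / n_windows  # expected insertions per window
    counts = {}
    for pos in insertions:
        counts[pos // window] = counts.get(pos // window, 0) + 1
    return [w for w, c in counts.items() if poisson_sf(c, lam) < alpha]

# Toy data: one background insertion per window of a 1 Mb genome,
# plus a 20-insertion cluster in window 5.
background = [w * 10_000 + 5_000 for w in range(100)]
cluster = [50_000 + i for i in range(20)]
cis = call_cis(background + cluster, 1_000_000)  # -> [5]
```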

  7. Implementing large-scale ENU mutagenesis screens in North America

    PubMed Central

    Clark, Amander T.; Goldowitz, Daniel; Takahashi, Joseph S.; Vitaterna, Martha Hotz; Siepka, Sandra M.; Peters, Luanne L.; Frankel, Wayne N.; Carlson, George A.; Rossant, Janet; Nadeau, Joseph H.; Justice, Monica J.

    2013-01-01

    A step towards annotating the mouse genome is to use forward genetics in phenotype-driven screens to saturate the genome with mutations. The purpose of this article is to highlight the new projects in North America that are focused on isolating mouse mutations after ENU mutagenesis and phenotype screening. PMID:15619961

  8. Forward genetic screens in Xenopus using transposon-mediated insertional mutagenesis.

    PubMed

    Yergeau, Donald A; Kelley, Clair M; Zhu, Haiqing; Kuliyev, Emin; Mead, Paul E

    2012-01-01

    The class II DNA "cut-and-paste" transposons have been used to efficiently modify the Xenopus genome for transgenesis applications. Once integrated, the transposon is an effective substrate for excision and re-integration (remobilization) elsewhere in the genome by simply supplying the transposase enzyme in trans. We have used two methods to remobilize transposons resident in the frog genome: micro-injection of transposase mRNA at the one-cell stage and expression of the enzyme in the germline from a transgene. Double-transgenic frogs (hoppers) that harbor transgenes for both the substrate transposon and the transposase enzyme are outcrossed to wild-type animals and the progeny are scored for changes in reporter gene expression. Although both methods work effectively to remobilize transposons, the breeding-mediated strategy eliminates the time-consuming micro-injection step; novel integration events are produced by simply outcrossing the hopper frogs. As each outcross of Xenopus tropicalis typically produces 2,000 or more progeny, this method can be used to perform large-scale insertional mutagenesis screens in this highly tractable developmental model system.

  9. Genes Necessary for Bacterial Magnetite Biomineralization Identified by Transposon Mutagenesis

    NASA Astrophysics Data System (ADS)

    Nash, C. Z.; Komeili, A.; Newman, D. K.; Kirschvink, J. L.

    2004-12-01

    Magnetic bacteria synthesize nanoscale crystals of magnetite in intracellular, membrane-bounded organelles (magnetosomes). These crystals are preserved in the fossil record at least as far back as the late Neoproterozoic and have been tentatively identified in much older rocks (1). This fossil record may provide deep time calibration points for molecular evolution studies once the genes involved in biologically controlled magnetic mineralization (BCMM) are known. Further, a genetic and biochemical understanding of BCMM will give insight into the depositional environment and biogeochemical cycles in which magnetic bacteria play a role. The BCMM process is not well understood, though proteins have been identified from the magnetosome membrane, and genetic manipulation and biochemical characterization of these proteins are underway. Most of the proteins currently thought to be involved are encoded within the mam cluster, a large cluster of genes whose products localize to the magnetosome membrane and are conserved among magnetic bacteria (2). In an effort to identify all of the genes necessary for bacterial BCMM, we undertook a transposon mutagenesis of Magnetospirillum magneticum AMB-1. Non-magnetic mutants (MNMs) were identified by growth in liquid culture followed by a magnetic assay. The insertion site of the transposon was identified in two ways. First, MNMs were screened with a PCR assay to determine whether the transposon had inserted into the mam cluster. Second, the transposon was rescued from the mutant DNA and cloned for sequencing. The majority of insertion sites are located within the mam cluster. Insertion sites also occur in operons that have not previously been suspected to be involved in magnetite biomineralization. None of the insertion sites occurred within genes reported from previous transposon mutagenesis studies of AMB-1 (3, 4). Two of the non-mam cluster insertion sites occur in operons containing genes conserved particularly between MS-1 and MC-1. We

  10. Large-scale mutagenesis and phenotypic screens for the nervous system and behavior in mice.

    PubMed

    Vitaterna, Martha Hotz; Pinto, Lawrence H; Takahashi, Joseph S

    2006-04-01

    Significant developments have occurred in our understanding of the mammalian genome thanks to informatics, expression profiling and sequencing of the human and rodent genomes. However, although these facets of genomic analysis are being addressed, analysis of in vivo gene function remains a formidable task. Evaluation of the phenotype of mutants provides powerful access to gene function, and this approach is particularly relevant to the nervous system and behavior. Here, we discuss the complementary mouse genetic approaches of gene-driven, targeted mutagenesis and phenotype-driven, chemical mutagenesis. We highlight an NIH-supported large-scale effort to use phenotype-driven mutagenesis screens to identify mouse mutants with neural and behavioral alterations. Such single-gene mutations can then be used for gene identification using positional candidate gene-cloning methods.

  11. Generation of Enterobacter sp. YSU Auxotrophs Using Transposon Mutagenesis

    PubMed Central

    Caguiat, Jonathan James

    2014-01-01

    Prototrophic bacteria grow on M-9 minimal salts medium supplemented with glucose (M-9 medium), which is used as a carbon and energy source. Auxotrophs can be generated using a transposome. The commercially available, Tn5-derived transposome used in this protocol consists of a linear segment of DNA containing an R6Kγ replication origin, a gene for kanamycin resistance and two mosaic sequence ends, which serve as transposase binding sites. The transposome, provided as a DNA/transposase protein complex, is introduced by electroporation into the prototrophic strain, Enterobacter sp. YSU, and randomly incorporates itself into this host’s genome. Transformants are replica plated onto Luria-Bertani agar plates containing kanamycin, (LB-kan) and onto M-9 medium agar plates containing kanamycin (M-9-kan). The transformants that grow on LB-kan plates but not on M-9-kan plates are considered to be auxotrophs. Purified genomic DNA from an auxotroph is partially digested, ligated and transformed into a pir+ Escherichia coli (E. coli) strain. The R6Kγ replication origin allows the plasmid to replicate in pir+ E. coli strains, and the kanamycin resistance marker allows for plasmid selection. Each transformant possesses a new plasmid containing the transposon flanked by the interrupted chromosomal region. Sanger sequencing and the Basic Local Alignment Search Tool (BLAST) suggest a putative identity of the interrupted gene. There are three advantages to using this transposome mutagenesis strategy. First, it does not rely on the expression of a transposase gene by the host. Second, the transposome is introduced into the target host by electroporation, rather than by conjugation or by transduction and therefore is more efficient. Third, the R6Kγ replication origin makes it easy to identify the mutated gene which is partially recovered in a recombinant plasmid. This technique can be used to investigate the genes involved in other characteristics of Enterobacter sp. YSU or of a
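The replica-plating step in the protocol reduces to a simple truth table over the two plates: growth on LB-kan but not on M-9-kan marks an auxotroph candidate. A trivial sketch of that scoring logic (the transformant names and plate readings are hypothetical):

```python
def classify_auxotrophs(growth):
    """Replica-plating logic from the protocol: growth on LB-kan but
    not on M-9-kan marks a transformant as an auxotroph candidate."""
    return [name for name, (lb_kan, m9_kan) in growth.items()
            if lb_kan and not m9_kan]

# Hypothetical plate readings (True = visible growth):
plates = {
    "T1": (True, True),    # prototroph: grows on both plates
    "T2": (True, False),   # auxotroph candidate
    "T3": (False, False),  # transformation failed / no kan resistance
}
candidates = classify_auxotrophs(plates)  # -> ["T2"]
```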

  12. Random insertion and gene disruption via transposon mutagenesis of Ureaplasma parvum using a mini-transposon plasmid.

    PubMed

    Aboklaish, Ali F; Dordet-Frisoni, Emilie; Citti, Christine; Toleman, Mark A; Glass, John I; Spiller, O Brad

    2014-11-01

    While transposon mutagenesis has been successfully used for Mycoplasma spp. to disrupt and determine non-essential genes, previous attempts with Ureaplasma spp. have been unsuccessful. Using a polyethylene glycol-transformation enhancing protocol, we were able to transform three separate serovars of Ureaplasma parvum with a Tn4001-based mini-transposon plasmid containing a gentamicin resistance selection marker. Despite the large degree of homology between Ureaplasma parvum and Ureaplasma urealyticum, all attempts to transform the latter in parallel failed, with the exception of a single clinical U. urealyticum isolate. PCR probing and sequencing were used to confirm transposon insertion into the bacterial genome and identify disrupted genes. Transformation of prototype serovar 3 consistently resulted in transfer only of sequence between the mini-transposon inverted repeats, but some strains showed additional sequence transfer. Transposon insertion occurred randomly in the genome, resulting in unique disruption of genes UU047, UU390, UU440, UU450, UU520, UU526, UU582 for single clones from a panel of screened clones. An intergenic insertion between genes UU187 and UU188 was also characterised. Two phenotypic alterations were observed in the mutated strains: disruption of a DEAD-box RNA helicase (UU582) altered growth kinetics, while the U. urealyticum strain lost resistance to serum attack coincident with disruption of gene UUR10_137 and loss of expression of a 41 kDa protein. Transposon mutagenesis was used successfully to insert single copies of a mini-transposon into the genome and disrupt genes leading to phenotypic changes in Ureaplasma parvum strains. This method can now be used to deliver exogenous genes for expression and determine essential genes for Ureaplasma parvum replication in culture and experimental models. PMID:25444567
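Mapping an insertion site from sequencing data amounts to locating the transposon's inverted repeat (IR) in a read and taking the flanking genomic sequence for alignment. A minimal sketch; the 8 bp IR and the read below are mock values for illustration, not the actual Tn4001 repeats:

```python
def find_insertion_flank(read, ir_seq):
    """Locate the transposon inverted repeat (IR) in a read and return
    the sequence immediately downstream (the genomic flank), or None
    if the IR is absent. ir_seq is whatever IR the construct carries."""
    idx = read.find(ir_seq)
    return None if idx == -1 else read[idx + len(ir_seq):]

# Mock read: vector sequence, an invented 8 bp IR, then genomic flank.
MOCK_IR = "TTAAGGCC"
read = "ACGTACGTAC" + MOCK_IR + "GATTACAGATTACA"
flank = find_insertion_flank(read, MOCK_IR)  # -> "GATTACAGATTACA"
```

The recovered flank would then be aligned against the reference genome to name the disrupted gene.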

  13. Excavating the Genome: Large Scale Mutagenesis Screening for the Discovery of New Mouse Models

    PubMed Central

    Sundberg, John P.; Dadras, Soheil S.; Silva, Kathleen A.; Kennedy, Victoria E.; Murray, Stephen A.; Denegre, James; Schofield, Paul N.; King, Lloyd E.; Wiles, Michael; Pratt, C. Herbert

    2016-01-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis. While not automated to the level of the physiological phenotyping, histopathology provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being developed. PMID:26551941

  14. Sleeping Beauty transposon mutagenesis identifies genes that cooperate with mutant Smad4 in gastric cancer development.

    PubMed

    Takeda, Haruna; Rust, Alistair G; Ward, Jerrold M; Yew, Christopher Chin Kuan; Jenkins, Nancy A; Copeland, Neal G

    2016-04-01

    Mutations in SMAD4 predispose to the development of gastrointestinal cancer, which is the third leading cause of cancer-related deaths. To identify genes driving gastric cancer (GC) development, we performed a Sleeping Beauty (SB) transposon mutagenesis screen in the stomach of Smad4(+/-) mutant mice. This screen identified 59 candidate GC trunk drivers and a much larger number of candidate GC progression genes. Strikingly, 22 SB-identified trunk drivers are known or candidate cancer genes, whereas four SB-identified trunk drivers, including PTEN, SMAD4, RNF43, and NF1, are known human GC trunk drivers. Similar to human GC, pathway analyses identified WNT, TGF-β, and PI3K-PTEN signaling, ubiquitin-mediated proteolysis, adherens junctions, and RNA degradation in addition to genes involved in chromatin modification and organization as highly deregulated pathways in GC. Comparative oncogenomic filtering of the complete list of SB-identified genes showed that they are highly enriched for genes mutated in human GC and identified many candidate human GC genes. Finally, by comparing our complete list of SB-identified genes against the list of mutated genes identified in five large-scale human GC sequencing studies, we identified LDL receptor-related protein 1B (LRP1B) as a previously unidentified human candidate GC tumor suppressor gene. In LRP1B, 129 mutations were found in 462 human GC samples sequenced, and LRP1B is one of the top 10 most deleted genes identified in a panel of 3,312 human cancers. SB mutagenesis has, thus, helped to catalog the cooperative molecular mechanisms driving SMAD4-induced GC growth and discover genes with potential clinical importance in human GC. PMID:27006499

  15. Sleeping Beauty transposon mutagenesis identifies genes that cooperate with mutant Smad4 in gastric cancer development

    PubMed Central

    Takeda, Haruna; Rust, Alistair G.; Ward, Jerrold M.; Yew, Christopher Chin Kuan; Jenkins, Nancy A.; Copeland, Neal G.

    2016-01-01

    Mutations in SMAD4 predispose to the development of gastrointestinal cancer, which is the third leading cause of cancer-related deaths. To identify genes driving gastric cancer (GC) development, we performed a Sleeping Beauty (SB) transposon mutagenesis screen in the stomach of Smad4+/− mutant mice. This screen identified 59 candidate GC trunk drivers and a much larger number of candidate GC progression genes. Strikingly, 22 SB-identified trunk drivers are known or candidate cancer genes, whereas four SB-identified trunk drivers, including PTEN, SMAD4, RNF43, and NF1, are known human GC trunk drivers. Similar to human GC, pathway analyses identified WNT, TGF-β, and PI3K-PTEN signaling, ubiquitin-mediated proteolysis, adherens junctions, and RNA degradation in addition to genes involved in chromatin modification and organization as highly deregulated pathways in GC. Comparative oncogenomic filtering of the complete list of SB-identified genes showed that they are highly enriched for genes mutated in human GC and identified many candidate human GC genes. Finally, by comparing our complete list of SB-identified genes against the list of mutated genes identified in five large-scale human GC sequencing studies, we identified LDL receptor-related protein 1B (LRP1B) as a previously unidentified human candidate GC tumor suppressor gene. In LRP1B, 129 mutations were found in 462 human GC samples sequenced, and LRP1B is one of the top 10 most deleted genes identified in a panel of 3,312 human cancers. SB mutagenesis has, thus, helped to catalog the cooperative molecular mechanisms driving SMAD4-induced GC growth and discover genes with potential clinical importance in human GC. PMID:27006499

  17. A Plasmid-Transposon Hybrid Mutagenesis System Effective in a Broad Range of Enterobacteria

    PubMed Central

    Monson, Rita; Smith, Debra S.; Matilla, Miguel A.; Roberts, Kevin; Richardson, Elizabeth; Drew, Alison; Williamson, Neil; Ramsay, Josh; Welch, Martin; Salmond, George P. C.

    2015-01-01

    Random transposon mutagenesis is a powerful technique used to generate libraries of genetic insertions in many different bacterial strains. Here we develop a system facilitating random transposon mutagenesis in a range of different Gram-negative bacterial strains, including Pectobacterium atrosepticum, Citrobacter rodentium, Serratia sp. ATCC39006, Serratia plymuthica, Dickeya dadantii, and many more. Transposon mutagenesis was optimized in each of these strains, and three studies are presented to show the efficacy of this system. Firstly, the important agricultural pathogen D. dadantii was mutagenized. Two mutants that showed reduced protease production and one mutant producing the previously cryptic pigment, indigoidine, were identified and characterized. Secondly, the enterobacterium Serratia sp. ATCC39006 was mutagenized and mutants incapable of producing gas vesicles, proteinaceous intracellular organelles, were identified. One of these contained a β-galactosidase transcriptional fusion within the gene gvpA1, essential for gas vesicle production. Finally, the system was used to mutate the biosynthetic gene clusters of the antifungal, anti-oomycete and anticancer polyketide, oocydin A, in the plant-associated enterobacterium, Dickeya solani MK10. The mutagenesis system was developed to allow easy identification of transposon insertion sites by sequencing, after facile generation of a replicon encompassing the transposon and adjacent DNA, post-excision. Furthermore, the system can also create transcriptional fusions with either β-galactosidase or β-glucuronidase as reporters, and exploits a variety of drug resistance markers so that multiple selectable fusions can be generated in a single strain. The various transposons in this system have wide utility and can be combined in many different ways. PMID:26733980

  18. Evaluating Risks of Insertional Mutagenesis by DNA Transposons in Gene Therapy

    PubMed Central

    Hackett, Perry B.; Largaespada, David A.; Switzer, Kirsten C.; Cooper, Laurence J.N.

    2013-01-01

    Investigational therapy can be successfully undertaken using viral- and non-viral-mediated ex vivo gene transfer. Indeed, recent clinical trials have established the potential for genetically modified T cells to improve and restore health. Recently the Sleeping Beauty (SB) transposon/transposase system has been applied in clinical trials to stably insert a chimeric antigen receptor (CAR) to redirect T-cell specificity. We discuss the context in which the SB system can be harnessed for gene therapy and describe the human application of SB-modified CAR+ T cells. We have focused on theoretical issues relating to insertional mutagenesis in the context of human genomes that are naturally subjected to remobilization of transposons and the experimental evidence over the last decade of employing SB transposons for defining genes that induce cancer. These findings are put into the context of the use of SB transposons in the treatment of human disease. PMID:23313630

  19. Excavating the Genome: Large-Scale Mutagenesis Screening for the Discovery of New Mouse Models.

    PubMed

    Sundberg, John P; Dadras, Soheil S; Silva, Kathleen A; Kennedy, Victoria E; Murray, Stephen A; Denegre, James M; Schofield, Paul N; King, Lloyd E; Wiles, Michael V; Pratt, C Herbert

    2015-11-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been fully studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create further novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis, and while not automated to the level of the physiological phenotyping, histopathology still provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being characterized and developed. PMID:26551941

  20. Random transposon mutagenesis of the Saccharopolyspora erythraea genome reveals additional genes influencing erythromycin biosynthesis.

    PubMed

    Fedashchin, Andrij; Cernota, William H; Gonzalez, Melissa C; Leach, Benjamin I; Kwan, Noelle; Wesley, Roy K; Weber, J Mark

    2015-11-01

    A single cycle of strain improvement was performed in Saccharopolyspora erythraea mutB and 15 genotypes influencing erythromycin production were found. Genotypes generated by transposon mutagenesis appeared in the screen at a frequency of ~3%. Mutations affecting central metabolism and regulatory genes were found, as well as hydrolases, peptidases, glycosyl transferases and unknown genes. Only one mutant retained high erythromycin production when scaled-up from micro-agar plug fermentations to shake flasks. This mutant had a knockout of the cwh1 gene (SACE_1598), encoding a cell-wall-associated hydrolase. The cwh1 knockout produced visible growth and morphological defects on solid medium. This study demonstrated that random transposon mutagenesis uncovers strain improvement-related genes potentially useful for strain engineering. PMID:26468041
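The reported ~3% genotype frequency implies straightforward screening arithmetic: how many clones must be screened for a given confidence of recovering at least one production-affecting genotype. A small sketch of that calculation:

```python
import math

def clones_to_screen(hit_rate, confidence=0.95):
    """Smallest N satisfying 1 - (1 - hit_rate)**N >= confidence,
    i.e. the number of clones to screen for the stated chance of
    finding at least one hit."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - hit_rate))

# At the reported ~3% frequency, about 99 clones give a 95% chance
# of recovering at least one production-affecting genotype.
n = clones_to_screen(0.03)  # -> 99
```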

  1. Transposon mutagenesis identifies genes that transform neural stem cells into glioma-initiating cells.

    PubMed

    Koso, Hideto; Takeda, Haruna; Yew, Christopher Chin Kuan; Ward, Jerrold M; Nariai, Naoki; Ueno, Kazuko; Nagasaki, Masao; Watanabe, Sumiko; Rust, Alistair G; Adams, David J; Copeland, Neal G; Jenkins, Nancy A

    2012-10-30

    Neural stem cells (NSCs) are considered to be the cell of origin of glioblastoma multiforme (GBM). However, the genetic alterations that transform NSCs into glioma-initiating cells remain elusive. Using a unique transposon mutagenesis strategy that mutagenizes NSCs in culture, followed by additional rounds of mutagenesis to generate tumors in vivo, we have identified genes and signaling pathways that can transform NSCs into glioma-initiating cells. Mobilization of Sleeping Beauty transposons in NSCs induced the immortalization of astroglial-like cells, which were then able to generate tumors with characteristics of the mesenchymal subtype of GBM on transplantation, consistent with a potential astroglial origin for mesenchymal GBM. Sequence analysis of transposon insertion sites from tumors and immortalized cells identified more than 200 frequently mutated genes, including human GBM-associated genes, such as Met and Nf1, and made it possible to discriminate between genes that function during astroglial immortalization vs. later stages of tumor development. We also functionally validated five GBM candidate genes using a previously undescribed high-throughput method. Finally, we show that even clonally related tumors derived from the same immortalized line have acquired distinct combinations of genetic alterations during tumor development, suggesting that tumor formation in this model system involves competition among genetically variant cells, which is similar to the Darwinian evolutionary processes now thought to generate many human cancers. This mutagenesis strategy is faster and simpler than conventional transposon screens and can potentially be applied to any tissue stem/progenitor cells that can be grown and differentiated in vitro.
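    Screens like this one reduce, computationally, to asking which genes accumulate more transposon insertions than chance would predict. Below is a minimal sketch of that test, assuming insertions land uniformly at random across the genome; the published analyses use more sophisticated common-insertion-site statistics, and the function names and the gene counts in the usage note are illustrative, not from the paper.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): one minus the CDF at k - 1."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))

def common_insertion_genes(insertions, gene_lengths, total_insertions, alpha=0.001):
    """Flag genes whose insertion count is improbably high under a uniform model.

    insertions:   dict gene -> observed transposon insertion count
    gene_lengths: dict gene -> locus length in bp (defines the expected rate)
    Returns dict gene -> (observed, expected, p-value) for genes with p < alpha.
    """
    genome_bp = sum(gene_lengths.values())
    hits = {}
    for gene, observed in insertions.items():
        expected = total_insertions * gene_lengths[gene] / genome_bp
        p = poisson_sf(observed, expected)
        if p < alpha:
            hits[gene] = (observed, expected, p)
    return hits
```

With, say, 127 insertions mapped over a 1 Mb genome, a 10 kb locus carrying 15 insertions is flagged while a large background region carrying 100 is not.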

  2. A transposon-based tool for transformation and mutagenesis in trypanosomatid protozoa.

    PubMed

    Damasceno, Jeziel D; Beverley, Stephen M; Tosi, Luiz R O

    2015-01-01

    The ability of transposable elements to mobilize across genomes and affect the expression of genes makes them exceptional tools for genetic manipulation methodologies. Several transposon-based systems have been modified and incorporated into shuttle mutagenesis approaches in a variety of organisms. We have found that the Mos1 element, a DNA transposon from Drosophila mauritiana, is suitable and readily adaptable to a variety of strategies for the study of trypanosomatid parasitic protozoa. Trypanosomatids are the causative agents of a wide range of neglected diseases in underdeveloped regions of the globe. In this chapter we describe the basic elements and the available protocols for the in vitro use of Mos1 derivatives in the protozoan parasite Leishmania.

  3. A conditional transposon-based insertional mutagenesis screen for hepatocellular carcinoma-associated genes in mice

    PubMed Central

    Keng, Vincent W.; Villanueva, Augusto; Chiang, Derek Y.; Dupuy, Adam J.; Ryan, Barbara J.; Matise, Ilze; Silverstein, Kevin A.T.; Sarver, Aaron; Starr, Timothy K.; Akagi, Keiko; Tessarollo, Lino; Collier, Lara S.; Powers, Scott; Lowe, Scott W.; Jenkins, Nancy A.; Copeland, Neal G.; Llovet, Josep M.; Largaespada, David A.

    2009-01-01

    Here we describe a Sleeping Beauty (SB) transposition system that utilizes a conditional SB transposase allele, which can be activated by Cre recombinase to drive the transposition of a mutagenic transposon in virtually any tissue and control the type of cancer produced. To demonstrate the potential of this system for modeling cancer in mice, we used it to screen for hepatocellular carcinoma (HCC) associated genes in mice by specifically limiting SB transposition to the liver. Among 8,060 non-redundant insertions subsequently cloned from 68 tumor nodules we identified 19 highly significant candidate disease loci, which encode genes like EGFR and MET that are known HCC genes and others like UBE2H that are not strongly implicated in HCC but represent potential new therapeutic targets for treating this neoplasm. With these improvements, transposon-based insertional mutagenesis now offers great potential for better understanding the cancer genome and for identifying new targets for therapeutic development. PMID:19234449

  4. Random Transposon Mutagenesis for Cell-Envelope Resistant to Phage Infection.

    PubMed

    Reyes-Cortés, Ruth; Arguijo-Hernández, Emma S; Carballo-Ontiveros, Marco A; Martínez-Peñafiel, Eva; Kameyama, Luis

    2016-01-01

    In order to identify host components involved in the infective process of bacteriophages, we developed a wide-range strategy to obtain cell envelope mutants, using Escherichia coli W3110 and its specific phage mEp213. The strategy consisted of four steps: (1) random mutagenesis using transposon miniTn10Km(r); (2) selection of phage-resistant mutants by replica-plating; (3) electroporation of the phage-resistant mutants with the mEp213 genome, followed by selection of those allowing phage development; and (4) sequencing of the transposon-disrupted genes. This strategy allowed us to distinguish the host factors related to phage development or multiplication within the cell from those involved in phage infection at the level of the cell envelope. PMID:27311665

  5. Transposon mutagenesis identifies genes driving hepatocellular carcinoma in a chronic hepatitis B mouse model

    PubMed Central

    Bard-Chapeau, Emilie A.; Nguyen, Anh-Tuan; Rust, Alistair G.; Sayadi, Ahmed; Lee, Philip; Chua, Belinda Q; New, Lee-Sun; de Jong, Johann; Ward, Jerrold M.; Chin, Christopher KY.; Chew, Valerie; Toh, Han Chong; Abastado, Jean-Pierre; Benoukraf, Touati; Soong, Richie; Bard, Frederic A.; Dupuy, Adam J.; Johnson, Randy L.; Radda, George K.; Chan, Eric CY.; Wessels, Lodewyk FA.; Adams, David J.

    2014-01-01

    The most common risk factor for developing hepatocellular carcinoma (HCC) is chronic infection with hepatitis B virus (HBV). To better understand the evolutionary forces driving HCC we performed a near saturating transposon mutagenesis screen in a mouse HBV model of HCC. This screen identified 21 candidate early stage drivers, and a bewildering number (2860) of candidate later stage drivers, that were enriched for genes mutated, deregulated, or that function in signaling pathways important for human HCC, with a striking 1199 genes linked to cellular metabolic processes. Our study provides a comprehensive overview of the genetic landscape of HCC. PMID:24316982

  6. Transposon mutagenesis in Desulfovibrio desulfuricans: Development of a random mutagenesis tool from Tn7

    SciTech Connect

    Wall, J.D.; Murnan, T.; Argyle, J.

    1996-10-01

    The transposons Tn5, Tn7, Tn9, and Tn10 or their derivatives have been examined for transposition in the sulfate-reducing bacterium Desulfovibrio desulfuricans G20. Tn7 inserted with a frequency of 10⁻⁴ to 10⁻³ into a unique attachment site that shows strong homology with those sites identified in other gram-negative bacteria. Inactivation of the tnsD gene in Tn7, encoding the function directing insertion into the unique site, yielded a derivative that transposed essentially randomly with a frequency of ca. 10⁻⁶ per donor. Derivatives of Tn5, but not wild-type Tn5, were also found to undergo random transposition at a similar frequency. No evidence was obtained for transposition of Tn9 or Tn10. 34 refs., 5 figs., 2 tabs.

  7. Sleeping Beauty Transposon Mutagenesis as a Tool for Gene Discovery in the NOD Mouse Model of Type 1 Diabetes.

    PubMed

    Elso, Colleen M; Chu, Edward P F; Alsayb, May A; Mackin, Leanne; Ivory, Sean T; Ashton, Michelle P; Bröer, Stefan; Silveira, Pablo A; Brodnicki, Thomas C

    2015-10-04

    A number of different strategies have been used to identify genes for which genetic variation contributes to type 1 diabetes (T1D) pathogenesis. Genetic studies in humans have identified >40 loci that affect the risk for developing T1D, but the underlying causative alleles are often difficult to pinpoint or have subtle biological effects. A complementary strategy to identifying "natural" alleles in the human population is to engineer "artificial" alleles within inbred mouse strains and determine their effect on T1D incidence. We describe the use of the Sleeping Beauty (SB) transposon mutagenesis system in the nonobese diabetic (NOD) mouse strain, which harbors a genetic background predisposed to developing T1D. Mutagenesis in this system is random, but a green fluorescent protein (GFP)-polyA gene trap within the SB transposon enables early detection of mice harboring transposon-disrupted genes. The SB transposon also acts as a molecular tag to, without additional breeding, efficiently identify mutated genes and prioritize mutant mice for further characterization. We show here that the SB transposon is functional in NOD mice and can produce a null allele in a novel candidate gene that increases diabetes incidence. We propose that SB transposon mutagenesis could be used as a complementary strategy to traditional methods to help identify genes that, when disrupted, affect T1D pathogenesis.

  8. Sleeping Beauty Transposon Mutagenesis as a Tool for Gene Discovery in the NOD Mouse Model of Type 1 Diabetes

    PubMed Central

    Elso, Colleen M.; Chu, Edward P. F.; Alsayb, May A.; Mackin, Leanne; Ivory, Sean T.; Ashton, Michelle P.; Bröer, Stefan; Silveira, Pablo A.; Brodnicki, Thomas C.

    2015-01-01

    A number of different strategies have been used to identify genes for which genetic variation contributes to type 1 diabetes (T1D) pathogenesis. Genetic studies in humans have identified >40 loci that affect the risk for developing T1D, but the underlying causative alleles are often difficult to pinpoint or have subtle biological effects. A complementary strategy to identifying “natural” alleles in the human population is to engineer “artificial” alleles within inbred mouse strains and determine their effect on T1D incidence. We describe the use of the Sleeping Beauty (SB) transposon mutagenesis system in the nonobese diabetic (NOD) mouse strain, which harbors a genetic background predisposed to developing T1D. Mutagenesis in this system is random, but a green fluorescent protein (GFP)-polyA gene trap within the SB transposon enables early detection of mice harboring transposon-disrupted genes. The SB transposon also acts as a molecular tag to, without additional breeding, efficiently identify mutated genes and prioritize mutant mice for further characterization. We show here that the SB transposon is functional in NOD mice and can produce a null allele in a novel candidate gene that increases diabetes incidence. We propose that SB transposon mutagenesis could be used as a complementary strategy to traditional methods to help identify genes that, when disrupted, affect T1D pathogenesis. PMID:26438296

  10. Transposon mutagenesis affecting thiosulfate oxidation in Bosea thiooxidans, a new chemolithoheterotrophic bacterium.

    PubMed

    Das, S K; Mishra, A K

    1996-06-01

    Transposon insertion mutagenesis was used to isolate mutants of Bosea thiooxidans which are impaired in thiosulfate oxidation. Suicide plasmid pSUP5011 was used to introduce the transposon Tn5 into B. thiooxidans via Escherichia coli S17.1-mediated conjugation. Neomycin-resistant transconjugants occurred at a frequency of 2.2 × 10⁻⁴ per donor. Transconjugants defective in thiosulfate oxidation were categorized into three classes on the basis of growth response, enzyme activities, and cytochrome patterns. Class I mutants were deficient in cytochrome c, and no thiosulfate oxidase activity was detected. Class II mutants retained the activities of key enzymes of thiosulfate metabolism, although at reduced levels. Mutants of this class grown on mixed-substrate agar plates deposited elemental sulfur on the colony surfaces. Class III mutants were unable to utilize thiosulfate, though they had normal levels of cytochrome c. The transposon insertions occurred at different chromosomal positions, as confirmed by Southern blotting of chromosomal DNA of mutants deficient in thiosulfate oxidation, a deficiency which resulted from single insertions of Tn5.

  11. Identification of two new Pmp22 mouse mutants using large-scale mutagenesis and a novel rapid mapping strategy.

    PubMed

    Isaacs, A M; Davies, K E; Hunter, A J; Nolan, P M; Vizor, L; Peters, J; Gale, D G; Kelsell, D P; Latham, I D; Chase, J M; Fisher, E M; Bouzyk, M M; Potter, A; Masih, M; Walsh, F S; Sims, M A; Doncaster, K E; Parsons, C A; Martin, J; Brown, S D; Rastan, S; Spurr, N K; Gray, I C

    2000-07-22

    Mouse mutants have a key role in discerning mammalian gene function and modelling human disease; however, at present mutants exist for only 1-2% of all mouse genes. In order to address this phenotype gap, we have embarked on a genome-wide, phenotype-driven, large-scale N-ethyl-N-nitrosourea (ENU) mutagenesis screen for dominant mutations of clinical and pharmacological interest in the mouse. Here we describe the identification of two similar neurological phenotypes and determination of the underlying mutations using a novel rapid mapping strategy incorporating speed back-crosses and high-throughput genotyping. Two mutant mice were identified with marked resting tremor and further characterized using the SHIRPA behavioural and functional assessment protocol. Back-cross animals were generated using in vitro fertilization and genome scans were performed utilizing DNA pools derived from multiple mutant mice. Both mutants were mapped to a region on chromosome 11 containing the peripheral myelin protein 22 gene (Pmp22). Sequence analysis revealed novel point mutations in Pmp22 in both lines. The first mutation, H12R, alters the same amino acid affected in the severe human peripheral neuropathy Dejerine-Sottas syndrome; the second, Y153TER, truncates the Pmp22 protein by seven amino acids. Histological analysis of both lines revealed hypo-myelination of peripheral nerves. This is the first report of the generation of a clinically relevant neurological mutant and its rapid genetic characterization from a large-scale mutagenesis screen for dominant phenotypes in the mouse, and it validates the use of large-scale screens to generate desired clinical phenotypes in mice. PMID:10915775

  12. Transposon mutagenesis identifies genetic drivers of BrafV600E melanoma

    PubMed Central

    Mann, Michael B; Black, Michael A; Jones, Devin J; Ward, Jerrold M; Yew, Christopher Chin Kuan; Newberg, Justin Y; Dupuy, Adam J; Rust, Alistair G; Bosenberg, Marcus W; McMahon, Martin; Print, Cristin G; Copeland, Neal G; Jenkins, Nancy A

    2016-01-01

    Although nearly half of human melanomas harbor oncogenic BRAFV600E mutations, the genetic events that cooperate with these mutations to drive melanogenesis are still largely unknown. Here we show that Sleeping Beauty (SB) transposon-mediated mutagenesis drives melanoma progression in BrafV600E mutant mice and identify 1,232 recurrently mutated candidate cancer genes (CCGs) from 70 SB-driven melanomas. CCGs are enriched in Wnt, PI3K, MAPK and netrin signaling pathway components and are more highly connected to one another than predicted by chance, indicating that SB targets cooperative genetic networks in melanoma. Human orthologs of >500 CCGs are enriched for mutations in human melanoma or showed statistically significant clinical associations between RNA abundance and survival of patients with metastatic melanoma. We also functionally validate CEP350 as a new tumor-suppressor gene in human melanoma. SB mutagenesis has thus helped to catalog the cooperative molecular mechanisms driving BRAFV600E melanoma and discover new genes with potential clinical importance in human melanoma. PMID:25848750

  14. In vivo growth characteristics of leucine and methionine auxotrophic mutants of Mycobacterium bovis BCG generated by transposon mutagenesis.

    PubMed Central

    McAdam, R A; Weisbrod, T R; Martin, J; Scuderi, J D; Brown, A M; Cirillo, J D; Bloom, B R; Jacobs, W R

    1995-01-01

    Insertional mutagenesis in Mycobacterium bovis BCG, a member of the slow-growing M. tuberculosis complex, was accomplished with transposons engineered from the Mycobacterium smegmatis insertion element IS1096. Transposons were created by placing a kanamycin resistance gene in several different positions in IS1096, and the resulting transposons were electroporated into BCG on nonreplicating plasmids. These analyses demonstrated that only one of the two open reading frames was necessary for transposition. A library of insertions was generated. Southern analysis of 23 kanamycin-resistant clones revealed that the transposons had inserted directly, with no evidence of cointegrate formation, into different restriction fragments in each clone. Sequence analysis of nine of the clones revealed junctional direct 8-bp repeats with only a slight similarity in target sites. These results suggest that IS1096-derived transposons transposed into the BCG genome in a relatively random fashion. Three auxotrophs, two for leucine and one for methionine, were isolated from the library of transposon insertions in BCG. They were characterized by sequencing and found to be homologous to the leuD gene of Escherichia coli and a sulfate-binding protein of cyanobacteria, respectively. When inoculated intravenously into C57BL/6 mice, the leucine auxotrophs, in contrast to the parent BCG strain or the methionine auxotroph, showed an inability to grow in vivo and were cleared within 7 weeks from the lungs and spleen. PMID:7868221
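    The 8-bp junctional direct repeats reported above are the classic signature of target-site duplication, and they can be recovered mechanically from an assembled insertion-clone sequence: the same 8-mer should sit immediately 5' of the transposon's left end and immediately 3' of its right end. A minimal sketch, assuming the full junction sequence and the transposon sequence are known (the function name is mine, not from the paper):

```python
def find_direct_repeat(clone_seq, transposon, repeat_len=8):
    """Return the duplicated target site flanking a transposon, or None.

    Locates the transposon inside the sequenced clone and compares the
    repeat_len bases immediately 5' of its left end with the repeat_len
    bases immediately 3' of its right end.
    """
    start = clone_seq.find(transposon)
    if start < repeat_len:  # not found, or too close to the sequence edge
        return None
    end = start + len(transposon)
    left = clone_seq[start - repeat_len:start]
    right = clone_seq[end:end + repeat_len]
    return left if left == right and len(right) == repeat_len else None
```

A clone whose transposon is flanked by matching 8-mers yields the duplicated target site; a clone without the duplication yields None.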

  15. A facile and efficient transposon mutagenesis method for generation of multi-codon deletions in protein sequences.

    PubMed

    Liu, Shu-Su; Wei, Xuan; Ji, Qun; Xin, Xiu; Jiang, Biao; Liu, Jia

    2016-06-10

    Substitutions, insertions and deletions are all important mutation events in natural and laboratory protein evolution. However, protein engineering using insertions and deletions (indels) is hindered by the lack of a convenient mutagenesis method. Here, we describe a general transposon mutagenesis method that allows for removal of up to five consecutive in-frame codons from a random position of a target protein. This method, referred to as codon deletion mutagenesis (CDM), relies on an engineered Mu transposon that carries asymmetric terminal sequences flanking the MuA transposase recognition sites. CDM requires minimal DNA manipulations, and can generate multi-codon deletions with high efficiency (>90%). As a proof of principle, we constructed five libraries of green fluorescent protein (GFP) containing one to five random codon deletions, respectively. Several variants with multi-codon deletions remained fluorescent, none of which could be easily identified using traditional mutagenesis methods. CDM provides a facile and efficient approach to sampling a protein sequence with multi-codon deletions. It will not only facilitate our understanding of the effects of amino acid deletions on protein function but also expedite protein engineering using deletion mutagenesis. PMID:27071724
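    The variant space CDM samples can be enumerated directly: for a CDS of n codons, there are roughly n − k variants with k consecutive internal codons removed. An illustrative sketch of that enumeration (this is not the authors' protocol, which generates deletions at the DNA level via the engineered Mu transposon):

```python
def codon_deletion_variants(cds, k):
    """All variants of an in-frame CDS with k consecutive codons removed.

    The first (start) and last (stop) codons are kept intact, so a CDS of
    n codons yields n - k - 1 variants.
    """
    assert len(cds) % 3 == 0, "CDS must be in frame"
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return ["".join(codons[:s] + codons[s + k:])
            for s in range(1, len(codons) - k)]
```

For the five GFP libraries described above, k would run from 1 to 5.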

  16. Novel strategies for gene trapping and insertional mutagenesis mediated by Sleeping Beauty transposon

    PubMed Central

    Song, Guili; Cui, Zongbin

    2013-01-01

    Gene and poly(A) trapping are high-throughput approaches to capture and interrupt the expression of endogenous genes within a target genome. Although a number of trapping vectors have been developed for the investigation of gene functions in cells and vertebrate models, there is still room for improvement of their efficiency and sensitivity. Recently, two novel trapping vectors mediated by the Sleeping Beauty (SB) transposon have been generated by combining three functional cassettes that are required for finding endogenous genes, disrupting the expression of trapped genes, and inducing the excision of integrated traps from their original insertion sites for reinsertion into another gene. In addition, several other strategies were utilized to improve the activities of the two trapping vectors. First, the activities of all components were examined in vitro before the generation of the two vectors. Second, the inducible promoter from the tilapia Hsp70 gene was used to drive the expression of the SB gene, which can mediate the excision of integrated transposons upon induction at 37 °C. Third, the Cre/LoxP system was introduced to delete the SB expression cassette for stabilization of gene interruption and bio-safety. Fourth, three stop codons in different reading frames were introduced downstream of a strong splice acceptor (SA) in the gene trapping vector to effectively terminate the translation of trapped endogenous genes. Fifth, the strong splicing donor (SD) and an AU-rich RNA-destabilizing element exhibited no obvious insertion bias and markedly reduced SD read-through events, and the combination of an enhanced SA, a poly(A) signal and a transcript terminator in the poly(A) trapping vector efficiently disrupted the transcription of trapped genes. Thus, these two trapping vectors are effective alternative tools for large-scale identification and disruption of endogenous genes in vertebrate cells and animals. PMID:24251071

  17. Transposon mutagenesis reveals cooperation of ETS family transcription factors with signaling pathways in erythro-megakaryocytic leukemia

    PubMed Central

    Tang, Jian Zhong; Carmichael, Catherine L.; Shi, Wei; Metcalf, Donald; Ng, Ashley P.; Hyland, Craig D.; Jenkins, Nancy A.; Copeland, Neal G.; Howell, Viive M.; Zhao, Zhizhuang Joe; Smyth, Gordon K.; Kile, Benjamin T.; Alexander, Warren S.

    2013-01-01

    To define genetic lesions driving leukemia, we targeted Cre-dependent Sleeping Beauty (SB) transposon mutagenesis to the blood-forming system using a hematopoietic-selective vav1 oncogene promoter. Leukemias of diverse lineages ensued, most commonly lymphoid leukemia and erythroleukemia. The inclusion of a transgenic allele of Janus kinase 2 (JAK2)V617F resulted in acceleration of transposon-driven disease and strong selection for erythroleukemic pathology with transformation of bipotential erythro-megakaryocytic cells. The genes encoding the E-twenty-six (ETS) transcription factors Ets-related gene (Erg) and Ets1 were the most common sites for transposon insertion in SB-induced JAK2V617F-positive erythroleukemias, present in 87.5% and 65%, respectively, of independent leukemias examined. The role of activated Erg was validated by reproducing erythroleukemic pathology in mice transplanted with fetal liver cells expressing translocated in liposarcoma (TLS)-ERG, an activated form of ERG found in human leukemia. Via application of SB mutagenesis to TLS-ERG–induced erythroid transformation, we identified multiple loci as likely collaborators with activation of Erg. Jak2 was identified as a common transposon insertion site in TLS-ERG–induced disease, strongly validating the cooperation between JAK2V617F and transposon insertion at the Erg locus in the JAK2V617F-positive leukemias. Moreover, loci expressing other regulators of signal transduction pathways were conspicuous among the common transposon insertion sites in TLS-ERG–driven leukemia, suggesting that a key mechanism in erythroleukemia may be the collaboration of lesions disturbing erythroid maturation, most notably in genes of the ETS family, with mutations that reduce dependence on exogenous signals. PMID:23533276

  18. PiggyBac transposon-based polyadenylation-signal trap for genome-wide mutagenesis in mice

    PubMed Central

    Li, Limei; Liu, Peng; Sun, Liangliang; Bin Zhou; Fei, Jian

    2016-01-01

    We designed a new type of polyadenylation-signal (PAS) trap vector system in living mice, the piggyBac (PB) (PAS-trapping (EGFP)) gene trapping vector, which takes advantage of the efficient transposition ability of PB and the efficient gene trapping and insertional mutagenesis of PAS-trapping. The reporter gene of PB(PAS-trapping (EGFP)) is an EGFP gene with its own promoter, but lacking a poly(A) signal. Transgenic mouse lines carrying PB(PAS-trapping (EGFP)) and protamine 1 (Prm1) promoter-driven PB transposase transgenes (Prm1-PBase) were generated by microinjection. Male mice doubly positive for PB(PAS-trapping (EGFP)) and Prm1-PBase were crossed with WT females, generating offspring with various insertion mutations. We found that 44.8% (26/58) of pups were transposon-positive progeny. New transposon integrations comprised 26.9% (7/26) of the transposon-positive progeny. We found that 100% (5/5) of the EGFP fluorescence-positive mice had new trap insertions mediated by the PB transposon in transcriptional units. The direction of the EGFP gene in the vector was consistent with the direction of the endogenous gene reading frame. Furthermore, mice that were EGFP-PCR positive, but EGFP fluorescence negative, did not show successful gene trapping. Thus, the novel PB(PAS-trapping (EGFP)) system enables efficient genome-wide gene-trap mutagenesis in mice. PMID:27292714

  19. Functional characterization of the Sindbis virus E2 glycoprotein by transposon linker-insertion mutagenesis

    SciTech Connect

    Navaratnarajah, Chanakha K.; Kuhn, Richard J. (E-mail: kuhnr@purdue.edu)

    2007-06-20

    The glycoprotein envelope of alphaviruses consists of two proteins, E1 and E2. E1 is responsible for fusion and E2 is responsible for receptor binding. An atomic structure is available for E1, but one for E2 has not been reported. In this study, transposon linker-insertion mutagenesis was used to probe the function of different domains of E2. A library of mutants, containing 19 amino acid insertions in the E2 glycoprotein sequence of the prototype alphavirus, Sindbis virus (SINV), was generated. Fifty-seven independent E2 insertions were characterized, of which more than half (67%) gave rise to viable virus. The wild-type-like mutants identify regions that accommodate insertions without perturbing virus production and can be used to insert targeting moieties to direct SINV to specific receptors. The defective and lethal mutants give insight into regions of E2 important for protein stability, transport to the cell membrane, E1-E2 contacts, and receptor binding.

  20. Transposon Mutagenesis in Bifidobacterium breve: Construction and Characterization of a Tn5 Transposon Mutant Library for Bifidobacterium breve UCC2003

    PubMed Central

    Ruiz, Lorena; Motherway, Mary O’Connell; Lanigan, Noreen; van Sinderen, Douwe

    2013-01-01

    Bifidobacteria are claimed to contribute positively to human health through a range of beneficial or probiotic activities, including amelioration of gastrointestinal and metabolic disorders, and therefore this particular group of gastrointestinal commensals has enjoyed increasing industrial and scientific attention in recent years. However, the molecular mechanisms underlying these probiotic mechanisms are still largely unknown, mainly due to the fact that molecular tools for bifidobacteria are rather poorly developed, with many strains lacking genetic accessibility. In this work, we describe the generation of transposon insertion mutants in two bifidobacterial strains, B. breve UCC2003 and B. breve NCFB2258. We also report the creation of the first transposon mutant library in a bifidobacterial strain, employing B. breve UCC2003 and a Tn5-based transposome strategy. The library was found to be composed of clones containing single transposon insertions which appear to be randomly distributed along the genome. The usefulness of the library to perform phenotypic screenings was confirmed through identification and analysis of mutants defective in D-galactose, D-lactose or pullulan utilization abilities. PMID:23737995
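    A quick computational sanity check on the claim that insertions "appear to be randomly distributed along the genome" is to bin the mapped insertion coordinates and compute the variance-to-mean ratio of the bin counts: for Poisson-like random placement it is near 1, while clustering inflates it well above 1. A hedged sketch (the binning scheme and thresholds are mine, not from the study):

```python
def dispersion_index(positions, genome_length, n_bins=100):
    """Variance-to-mean ratio of insertion counts over equal-width genome bins.

    ~1 is consistent with Poisson-like (random) placement; values much
    greater than 1 indicate clustering (insertion hotspots).
    """
    counts = [0] * n_bins
    for pos in positions:
        counts[min(pos * n_bins // genome_length, n_bins - 1)] += 1
    mean = sum(counts) / n_bins
    variance = sum((c - mean) ** 2 for c in counts) / n_bins
    return variance / mean
```

An evenly spread library scores near (or below) 1; a thousand insertions piled into a single bin score in the hundreds.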

  1. Characterization and Transposon Mutagenesis of the Maize (Zea mays) Pho1 Gene Family.

    PubMed

    Salazar-Vidal, M Nancy; Acosta-Segovia, Edith; Sánchez-León, Nidia; Ahern, Kevin R; Brutnell, Thomas P; Sawers, Ruairidh J H

    2016-01-01

    Phosphorus is an essential nutrient for all plants, but also one of the least mobile, and consequently least available, in the soil. Plants have evolved a series of molecular, metabolic and developmental adaptations to increase the acquisition of phosphorus and to maximize the efficiency of use within the plant. In Arabidopsis (Arabidopsis thaliana), the AtPHO1 protein regulates and facilitates the distribution of phosphorus. To investigate the role of PHO1 proteins in maize (Zea mays), the B73 reference genome was searched for homologous sequences, and four genes identified that were designated ZmPho1;1, ZmPho1;2a, ZmPho1;2b and ZmPho1;3. ZmPho1;2a and ZmPho1;2b are the most similar to AtPHO1, and represent candidate co-orthologs that we hypothesize to have been retained following whole genome duplication. Evidence was obtained for the production of natural anti-sense transcripts associated with both ZmPho1;2a and ZmPho1;2b, suggesting the possibility of regulatory crosstalk between paralogs. To characterize functional divergence between ZmPho1;2a and ZmPho1;2b, a program of transposon mutagenesis was initiated using the Ac/Ds system, and, here, we report the generation of novel alleles of ZmPho1;2a and ZmPho1;2b. PMID:27648940

  3. Efficient mutagenesis by Cas9 protein-mediated oligonucleotide insertion and large-scale assessment of single-guide RNAs.

    PubMed

    Gagnon, James A; Valen, Eivind; Thyme, Summer B; Huang, Peng; Akhmetova, Laila; Pauli, Andrea; Montague, Tessa G; Zimmerman, Steven; Richter, Constance; Schier, Alexander F

    2014-01-01

    The CRISPR/Cas9 system has been implemented in a variety of model organisms to mediate site-directed mutagenesis. A wide range of mutation rates has been reported, but at a limited number of genomic target sites. To uncover the rules that govern effective Cas9-mediated mutagenesis in zebrafish, we targeted over a hundred genomic loci for mutagenesis using a streamlined and cloning-free method. We generated mutations in 85% of target genes with mutation rates varying across several orders of magnitude, and identified sequence composition rules that influence mutagenesis. We increased rates of mutagenesis by implementing several novel approaches. The activities of poor or unsuccessful single-guide RNAs (sgRNAs) initiating with a 5' adenine were improved by rescuing 5' end homogeneity of the sgRNA. In some cases, direct injection of Cas9 protein/sgRNA complex further increased mutagenic activity. We also observed that low diversity of mutant alleles led to repeated failure to obtain frame-shift mutations. This limitation was overcome by knock-in of a stop codon cassette that ensured coding frame truncation. Our improved methods and detailed protocols make Cas9-mediated mutagenesis an attractive approach for labs of all sizes. PMID:24873830
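    The 5'-end rule reported here suggests a simple triage step when designing guides. A minimal sketch, assuming T7-driven sgRNA synthesis (which initiates with a 5' guanine); the function name and return labels are illustrative, not from the paper:

    ```python
    def classify_sgrna(target_20nt: str) -> str:
        """Flag 20-nt Cas9 targets likely to yield heterogeneous sgRNA 5' ends."""
        t = target_20nt.upper()
        assert len(t) == 20 and set(t) <= set("ACGT"), "expect a 20-nt DNA target"
        if t.startswith("G"):
            return "ok"             # matches the T7 +1 guanine
        if t.startswith("A"):
            return "rescue-5prime"  # poor performers unless 5' end homogeneity is restored
        return "mismatch-G"         # typically synthesized with a mismatched 5' G

    print(classify_sgrna("ATGCATGCATGCATGCATGC"))  # rescue-5prime
    ```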

  5. Mutagenesis of dimeric plasmids by the transposon γδ (Tn1000)

    SciTech Connect

    Liu, L.; Berg, C.M.

    1990-05-01

    The Escherichia coli F factor mediates conjugal transfer of a plasmid such as pBR322 primarily by replicative transposition of transposon γδ (Tn1000) from F to that plasmid to form a cointegrate intermediate. Although resolution of this cointegrate always yields a plasmid containing a single γδ insertion, the occasional recovery of transposon-free plasmids after conjugal transfer has led to alternative hypotheses for F mobilization. The authors show here that γδ-free plasmids are found after F-mediated conjugal transfer only when the donor plasmid is a dimer and the recipient is Rec+.

  6. Transposon mutagenesis by Tn4560 and applications with avermectin-producing Streptomyces avermitilis.

    PubMed Central

    Ikeda, H; Takada, Y; Pang, C H; Tanaka, H; Omura, S

    1993-01-01

    The Tn3-like Streptomyces transposon Tn4560 was used to mutagenize Streptomyces avermitilis, the producer of anthelmintic avermectins and the cell growth inhibitor oligomycin. Tn4560 transposed in this strain from a temperature-sensitive plasmid to the chromosome and from the chromosome to a plasmid with an apparent frequency of about 10(-4) to 10(-3) at both 30 and 39 degrees C. Auxotrophic and antibiotic nonproducing mutations were, however, obtained only with cultures that were kept at 37 or 39 degrees C. About 0.1% of the transposon inserts obtained at 39 degrees C caused auxotrophy or abolished antibiotic production. The sites of insertion into the S. avermitilis chromosome were mapped. Chromosomal DNA fragments containing Tn4560 insertions in antibiotic production genes were cloned onto a Streptomyces plasmid with temperature-sensitive replication and used to transport transposon mutations to other strains, using homologous recombination. This technique was used to construct an avermectin production strain that no longer makes the toxic oligomycin. PMID:8384619

  7. [Improvement of butanol production by Escherichia coli via Tn5 transposon mediated mutagenesis].

    PubMed

    Lin, Zhao; Dong, Hongjun; Li, Yin

    2015-12-01

    For engineering an efficient butanol-producing Escherichia coli strain, many efforts have focused on known genes or pathways. However, many other genes in the genome could also contribute to butanol production in unexpected ways. In this work, we used the Tn5 transposon to construct a mutant library of 1,196 strains in a previously engineered butanol-producing E. coli strain. To screen for strains with improved butanol titer, we developed a high-throughput method for pyruvate detection based on the dinitrophenylhydrazine reaction in a 96-well microplate reader, because pyruvate is the precursor of butanol and its concentration is inversely correlated with butanol concentration in the fermentation broth. Using this method, we successfully screened three mutants with increased butanol titer. The Tn5 insertion sites were mapped by inverse PCR and sequencing to the ORFs of pykA, tdk, and cadC. These genes could be efficient targets for further strain improvement, and the genome-scanning strategy described here should be helpful for constructing other microbial cell factories. PMID:27093834
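    The screening logic is simple: because pyruvate and butanol are inversely correlated, wells with the lowest pyruvate signal mark candidate improved producers. A rough illustration (well readings, cutoff, and function name are hypothetical, not the authors' actual pipeline):

    ```python
    def pick_candidates(a520_by_well: dict, parent_a520: float, cutoff: float = 0.8):
        """Return wells whose pyruvate signal falls well below the parent strain's.

        Pyruvate (detected via the dinitrophenylhydrazine reaction) is inversely
        correlated with butanol titer, so low readings mark candidate mutants.
        """
        return sorted(w for w, a in a520_by_well.items() if a < cutoff * parent_a520)

    plate = {"A1": 0.92, "A2": 0.41, "B7": 0.35, "H12": 0.88}
    print(pick_candidates(plate, parent_a520=0.90))  # ['A2', 'B7']
    ```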

  8. Transposon Mutagenesis Screen Identifies Potential Lung Cancer Drivers and CUL3 as a Tumor Suppressor

    PubMed Central

    Dorr, Casey; Janik, Callie; Weg, Madison; Been, Raha A.; Bader, Justin; Kang, Ryan; Ng, Brandon; Foran, Lindsey; Landman, Sean R.; O’Sullivan, M. Gerard; Steinbach, Michael; Sarver, Aaron L.; Silverstein, Kevin A. T.; Largaespada, David A.

    2015-01-01

    Non-small cell lung cancers (NSCLCs) harbor thousands of passenger events that hide genetic drivers. Even highly recurrent events in NSCLC, such as mutations in PTEN, EGFR, KRAS, and ALK, are only detected in, at most, 30% of patients. Thus, many unidentified low-penetrance events cause a significant portion of lung cancers. To detect low-penetrance drivers of NSCLC, a forward genetic screen was performed in mice using the Sleeping Beauty (SB) DNA transposon as a random mutagen to generate lung tumors in a Pten-deficient background. SB mutations coupled with Pten deficiency were sufficient to produce lung tumors in 29% of mice. Pten deficiency alone, without SB mutations, resulted in lung tumors in 11% of mice, while the rate in control mice was ~3%. In addition, thyroid cancer and other carcinomas, as well as bronchiolar and alveolar epithelialization, were identified in Pten-deficient mice. Analysis of common transposon insertion sites identified 76 candidate cancer driver genes. These genes are frequently dysregulated in human lung cancers and implicate several signaling pathways. Cullin3 (Cul3), a member of a ubiquitin ligase complex that plays a role in the oxidative stress response pathway, was identified in the screen, and evidence demonstrates that Cul3 functions as a tumor suppressor. PMID:25995385

  9. Development and Use of an Efficient System for Random mariner Transposon Mutagenesis To Identify Novel Genetic Determinants of Biofilm Formation in the Core Enterococcus faecalis Genome▿ †

    PubMed Central

    Kristich, Christopher J.; Nguyen, Vy T.; Le, Thinh; Barnes, Aaron M. T.; Grindle, Suzanne; Dunny, Gary M.

    2008-01-01

    Enterococcus faecalis is a gram-positive commensal bacterium of the gastrointestinal tract and an important opportunistic pathogen. Despite the increasing clinical significance of the enterococci, most of the genetic analysis of these organisms has focused on mobile genetic elements, and existing tools for manipulation and analysis of the core E. faecalis chromosome are limited. We are interested in a comprehensive analysis of the genetic determinants for biofilm formation encoded within the core E. faecalis genome. To identify such determinants, we developed a substantially improved system for transposon mutagenesis in E. faecalis based on a mini-mariner transposable element. Mutagenesis of wild-type E. faecalis with this element yielded predominantly mutants carrying a single copy of the transposable element, and insertions were distributed around the entire chromosome in an apparently random fashion. We constructed a library of E. faecalis transposon insertion mutants and screened this library to identify mutants exhibiting a defect in biofilm formation. Biofilm-defective mutants were found to carry transposon insertions both in genes that were previously known to play a role in biofilm formation and in new genes lacking any known function; for several genes identified in the screen, complementation analysis confirmed a direct role in biofilm formation. These results provide significant new information about the genetics of enterococcal biofilm formation and demonstrate the general utility of our transposon system for functional genomic analysis of E. faecalis. PMID:18408066
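    Whether insertions are "distributed around the entire chromosome in an apparently random fashion" can be checked crudely by binning insertion coordinates and comparing counts against a uniform expectation. A toy sketch (bin count, coordinates, and genome length are illustrative; a real analysis would use finer bins and account for essential genes):

    ```python
    def chi2_uniform(positions, genome_len, bins=4):
        """Chi-square statistic of insertion counts against a uniform split."""
        counts = [0] * bins
        for p in positions:
            counts[min(p * bins // genome_len, bins - 1)] += 1
        expected = len(positions) / bins
        return sum((c - expected) ** 2 / expected for c in counts)

    # Evenly spread insertions give a small statistic:
    print(chi2_uniform([100, 300, 500, 700], genome_len=800))  # 0.0
    ```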

  10. Transposon mutagenesis with coat color genotyping identifies an essential role for Skor2 in sonic hedgehog signaling and cerebellum development

    PubMed Central

    Wang, Baiping; Harrison, Wilbur; Overbeek, Paul A.; Zheng, Hui

    2011-01-01

    Correct development of the cerebellum requires coordinated sonic hedgehog (Shh) signaling from Purkinje to granule cells. How Shh expression is regulated in Purkinje cells is poorly understood. Using a novel tyrosinase minigene-tagged Sleeping Beauty transposon-mediated mutagenesis, which allows for coat color-based genotyping, we created mice in which the Ski/Sno family transcriptional co-repressor 2 (Skor2) gene is deleted. Loss of Skor2 leads to defective Purkinje cell development, a severe reduction of granule cell proliferation and a malformed cerebellum. Skor2 is specifically expressed in Purkinje cells in the brain, where it is required for proper expression of Shh. Skor2 overexpression suppresses BMP signaling in an HDAC-dependent manner and stimulates Shh promoter activity, suggesting that Skor2 represses BMP signaling to activate Shh expression. Our study identifies an essential function for Skor2 as a novel transcriptional regulator in Purkinje cells that acts upstream of Shh during cerebellum development. PMID:21937600

  12. Mouse Models of Cancer: Sleeping Beauty Transposons for Insertional Mutagenesis Screens and Reverse Genetic Studies

    PubMed Central

    Tschida, Barbara R.; Largaespada, David A.; Keng, Vincent W.

    2014-01-01

    The genetic complexity and heterogeneity of cancer has posed a problem in designing rationally targeted therapies effective in a large proportion of human cancer. Genomic characterization of many cancer types has provided a staggering amount of data that needs to be interpreted to further our understanding of this disease. Forward genetic screening in mice using Sleeping Beauty (SB) based insertional mutagenesis is an effective method for candidate cancer gene discovery that can aid in distinguishing driver from passenger mutations in human cancer. This system has been adapted for unbiased screens to identify drivers of multiple cancer types. These screens have already identified hundreds of candidate cancer-promoting mutations. These can be used to develop new mouse models for further study, which may prove useful for therapeutic testing. SB technology may also hold the key for rapid generation of reverse genetic mouse models of cancer, and has already been used to model glioblastoma and liver cancer. PMID:24468652
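    Candidate genes in such screens are typically called from common insertion sites: genomic windows hit by transposon insertions in many independent tumors. A toy sketch of that counting step (window size, tumor names, and coordinates are hypothetical, not from any cited screen):

    ```python
    from collections import Counter

    def cis_windows(insertions_by_tumor, window=10_000):
        """Map each insertion to a window and count distinct tumors per window."""
        tumors_per_window = Counter()
        for tumor, positions in insertions_by_tumor.items():
            for w in {p // window for p in positions}:
                tumors_per_window[w] += 1
        return tumors_per_window

    hits = cis_windows({"t1": [15_200, 90_000], "t2": [18_900], "t3": [11_000]})
    print(hits.most_common(1))  # [(1, 3)]: the 10-20 kb window is hit in 3 tumors
    ```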

  13. Identification and Characterization of Non-Cellulose-Producing Mutants of Gluconacetobacter hansenii Generated by Tn5 Transposon Mutagenesis

    PubMed Central

    Deng, Ying; Nagachar, Nivedita; Xiao, Chaowen; Tien, Ming

    2013-01-01

    The acs operon of Gluconacetobacter is thought to encode AcsA, AcsB, AcsC, and AcsD proteins that constitute the cellulose synthase complex, required for the synthesis and secretion of crystalline cellulose microfibrils. A few other genes have been shown to be involved in this process, but their precise role is unclear. We report here the use of Tn5 transposon insertion mutagenesis to identify and characterize six non-cellulose-producing (Cel−) mutants of Gluconacetobacter hansenii ATCC 23769. The genes disrupted were acsA, acsC, ccpAx (encoding cellulose-complementing protein [the subscript “Ax” indicates genes from organisms formerly classified as Acetobacter xylinum]), dgc1 (encoding guanylate dicyclase), and crp-fnr (encoding a cyclic AMP receptor protein/fumarate nitrate reductase transcriptional regulator). Protein blot analysis revealed that (i) AcsB and AcsC were absent in the acsA mutant, (ii) the levels of AcsB and AcsC were significantly reduced in the ccpAx mutant, and (iii) the level of AcsD was not affected in any of the Cel− mutants. Promoter analysis showed that the acs operon does not include acsD, unlike the organization of the acs operon of several strains of closely related Gluconacetobacter xylinus. Complementation experiments confirmed that the gene disrupted in each Cel− mutant was responsible for the phenotype. Quantitative real-time PCR and protein blotting results suggest that the transcription of bglAx (encoding β-glucosidase and located immediately downstream from acsD) was strongly dependent on Crp/Fnr. A bglAx knockout mutant, generated via homologous recombination, produced only ∼16% of the wild-type cellulose level. Since the crp-fnr mutant did not produce any cellulose, Crp/Fnr may regulate the expression of other gene(s) involved in cellulose biosynthesis. PMID:24013627

  14. Identification of a proton-chloride antiporter (EriC) by Himar1 transposon mutagenesis in Lactobacillus reuteri and its role in histamine production.

    PubMed

    Hemarajata, P; Spinler, J K; Balderas, M A; Versalovic, J

    2014-03-01

    The gut microbiome may modulate intestinal immunity by luminal conversion of dietary amino acids to biologically active signals. The model probiotic organism Lactobacillus reuteri ATCC PTA 6475 is indigenous to the human microbiome, and converts the amino acid L-histidine to the biogenic amine, histamine. Histamine suppresses tumor necrosis factor (TNF) production by human myeloid cells and is a product of L-histidine decarboxylation, which is a proton-facilitated reaction. A transposon mutagenesis strategy was developed based on a single-plasmid, nisin-inducible Himar1 transposase/transposon delivery system for L. reuteri. A highly conserved proton-chloride antiporter gene (eriC), widely present in the gut microbiome, was discovered by the Himar1 transposon (Tn) mutagenesis presented in this study. Genetic inactivation of eriC by transposon insertion and genetic recombineering resulted in reduced ability of L. reuteri to inhibit TNF production by activated human myeloid cells, diminished histamine production by the bacteria, and downregulated expression of histidine decarboxylase cluster genes compared to those of WT 6475. EriC belongs to a large family of ion transporters that includes chloride channels and proton-chloride antiporters and may facilitate the availability of protons for the decarboxylation reaction, resulting in histamine production by L. reuteri. This report leverages the tools of bacterial genetics for probiotic gene discovery. The findings highlight the widely conserved nature of ion transporters in bacteria and how ion transporters are coupled with amino acid decarboxylation and contribute to microbiome-mediated immunomodulation. PMID:24488273
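    Himar1, like other mariner-family elements, inserts specifically at TA dinucleotides, so the spacing of TA sites bounds the resolution of a Himar1 library. A toy sketch of enumerating candidate insertion sites (sequence and function name are illustrative):

    ```python
    def ta_sites(seq: str):
        """Return 0-based positions of TA dinucleotides on the given strand."""
        s = seq.upper()
        return [i for i in range(len(s) - 1) if s[i:i+2] == "TA"]

    print(ta_sites("GGTACCATTAGC"))  # [2, 8]
    ```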

  15. Mutagenesis of Bordetella pertussis with transposon Tn5tac1: conditional expression of virulence-associated genes.

    PubMed Central

    Cookson, B T; Berg, D E; Goldman, W E

    1990-01-01

    The Tn5tac1 transposon contains a strong outward-facing promoter, Ptac, a lacI repressor gene, and a selectable Kanr gene. Transcription from Ptac is repressed by the lacI protein unless an inducer (isopropyl-beta-D-thiogalactopyranoside [IPTG]) is present. Thus, Tn5tac1 generates insertion mutations in Escherichia coli with conditional phenotypes because it is polar on distal gene expression when IPTG is absent and directs transcription of these genes when the inducer is present. To test the usefulness of Tn5tac1 in Bordetella pertussis, a nonenteric gram-negative bacterial pathogen, we chose the bifunctional adenylate cyclase-hemolysin determinant as an easily scored marker to monitor insertional mutagenesis. Tn5tac1 delivered to B. pertussis on conjugal suicide plasmids resulted in Kanr exconjugants at a frequency of 10(-3) per donor cell, and nonhemolytic (Hly-) mutants were found among the Kanr colonies at a frequency of about 1%. Of eight independent Kanr Hly- mutants, two were conditional and exhibited an Hly+ phenotype only in the presence of IPTG. Using a new quantitative assay for adenylate cyclase based on high-pressure liquid chromatography, we found that enzymatic activity in these two strains was specifically induced at least 500-fold in a dose-dependent fashion over the range of 0 to 125 microM IPTG. These data show that Ptac serves as a promoter, lacI is expressed and is functional, and IPTG can induce Ptac transcription in B. pertussis. Adenylate cyclase expression in whole cells, culture supernatants, and cell extracts from these strains depended upon IPTG, suggesting that the insertions do not merely alter secretion of adenylate cyclase-hemolysin. Other virulence determinants under control of the vir locus are expressed normally, implying that these Tn5tac1 insertions specifically regulate adenylate cyclase-hemolysin expression. We conclude that Tn5tac1 insertion mutations permit sensitive, exogenous control over the expression of genes of

  16. Transposon Mutagenesis Paired with Deep Sequencing of Caulobacter crescentus under Uranium Stress Reveals Genes Essential for Detoxification and Stress Tolerance

    PubMed Central

    Yung, Mimi C.; Park, Dan M.; Overton, K. Wesley; Blow, Matthew J.; Hoover, Cindi A.; Smit, John; Murray, Sean R.; Ricci, Dante P.; Christen, Beat; Bowman, Grant R.

    2015-01-01

    ABSTRACT The ubiquitous aquatic bacterium Caulobacter crescentus is highly resistant to uranium (U) and facilitates U biomineralization and thus holds promise as an agent of U bioremediation. To gain an understanding of how C. crescentus tolerates U, we employed transposon (Tn) mutagenesis paired with deep sequencing (Tn-seq) in a global screen for genomic elements required for U resistance. Of the 3,879 annotated genes in the C. crescentus genome, 37 were found to be specifically associated with fitness under U stress, 15 of which were subsequently tested through mutational analysis. Systematic deletion analysis revealed that mutants lacking outer membrane transporters (rsaFa and rsaFb), a stress-responsive transcription factor (cztR), or a ppGpp synthetase/hydrolase (spoT) exhibited a significantly lower survival rate under U stress. RsaFa and RsaFb, which are homologues of TolC in Escherichia coli, have previously been shown to mediate S-layer export. Transcriptional analysis revealed upregulation of rsaFa and rsaFb by 4- and 10-fold, respectively, in the presence of U. We additionally show that rsaFa mutants accumulated higher levels of U than the wild type, with no significant increase in oxidative stress levels. Our results suggest a function for RsaFa and RsaFb in U efflux and/or maintenance of membrane integrity during U stress. In addition, we present data implicating CztR and SpoT in resistance to U stress. Together, our findings reveal novel gene targets that are key to understanding the molecular mechanisms of U resistance in C. crescentus. IMPORTANCE Caulobacter crescentus is an aerobic bacterium that is highly resistant to uranium (U) and has great potential to be used in U bioremediation, but its mechanisms of U resistance are poorly understood. We conducted a Tn-seq screen to identify genes specifically required for U resistance in C. crescentus. The genes that we identified have previously remained elusive using other omics approaches and thus
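    The Tn-seq comparison boils down to asking which genes lose insertion reads under uranium stress relative to an unstressed control. A hedged sketch with toy counts (the gene names come from the abstract, but the numbers and the normalization scheme are illustrative, not the study's pipeline):

    ```python
    import math

    def log2_fitness(stress_reads: dict, control_reads: dict, pseudocount: float = 1.0):
        """log2(stress/control) of normalized insertion-read counts per gene."""
        s_tot = sum(stress_reads.values()) or 1
        c_tot = sum(control_reads.values()) or 1
        genes = set(stress_reads) | set(control_reads)
        return {g: math.log2((stress_reads.get(g, 0) / s_tot + pseudocount / s_tot)
                             / (control_reads.get(g, 0) / c_tot + pseudocount / c_tot))
                for g in genes}

    scores = log2_fitness({"rsaFa": 5, "spoT": 2, "neutral": 500},
                          {"rsaFa": 400, "spoT": 150, "neutral": 480})
    print(min(scores, key=scores.get))  # gene most depleted under U stress
    ```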

  17. In vivo transposon mutagenesis of the methanogenic archaeon Methanosarcina acetivorans C2A using a modified version of the insect mariner-family transposable element Himar1

    PubMed Central

    Zhang, J. K.; Pritchett, M. A.; Lampe, D. J.; Robertson, H. M.; Metcalf, W. W.

    2000-01-01

    We present here a method for in vivo transposon mutagenesis of a methanogenic archaeon, Methanosarcina acetivorans C2A, which because of its independence from host-specific factors may have broad application among many microorganisms. Because there are no known Methanosarcina transposons we modified the mariner transposable element Himar1, originally found in the insect Hematobia irritans, to allow its use in this organism. This element was chosen because, like other mariner elements, its transposition is independent of host factors, requiring only its cognate transposase. Modified mini-Himar1 elements were constructed that carry selectable markers that are functional in Methanosarcina species and that express the Himar1 transposase from known Methanosarcina promoters. These mini-mariner elements transpose at high frequency in M. acetivorans to random sites in the genome. The presence of an Escherichia coli selectable marker and plasmid origin of replication within the mini-mariner elements allows facile cloning of these transposon insertions to identify the mutated gene. In preliminary experiments, we have isolated numerous mini-mariner-induced M. acetivorans mutants, including ones with insertions that confer resistance to toxic analogs and in genes that encode proteins involved in heat shock, nitrogen fixation, and cell-wall structures. PMID:10920201

  18. Genetic Transformation of Hordeum vulgare ssp. spontaneum for the Development of a Transposon-Based Insertional Mutagenesis System.

    PubMed

    Cardinal, Marie-Josée; Kaur, Rajvinder; Singh, Jaswinder

    2016-10-01

    Domestication and intensive selective breeding of plants has triggered erosion of genetic diversity of important stress-related alleles. Researchers highlight the potential of using wild accessions as a gene source for improvement of cereals such as barley, which has major economic and social importance worldwide. Previously, we have successfully introduced the maize Ac/Ds transposon system for gene identification in cultivated barley. The objective of current research was to investigate the response of Hordeum vulgare ssp. spontaneum wild barley accessions in tissue culture to standardize parameters for introduction of Ac/Ds transposons through genetic transformation. We investigated the response of ten wild barley genotypes for callus induction, regenerative green callus induction and regeneration of fertile plants. The activity of exogenous Ac/Ds elements was observed through a transient assay on immature wild barley embryos/callus whereby transformed embryos/calli were identified by the expression of GUS. Transient Ds expression bombardment experiments were performed on 352 pieces of callus (3-5 mm each) or immature embryos in 4 genotypes of wild barley. The transformation frequency of putative transgenic callus lines based on transient GUS expression ranged between 72 and 100% in wild barley genotypes. This is the first report of a transformation system in H. vulgare ssp. spontaneum. PMID:27480175

  19. Transposon Mutagenesis of Salmonella enterica Serovar Enteritidis Identifies Genes That Contribute to Invasiveness in Human and Chicken Cells and Survival in Egg Albumen

    PubMed Central

    Shah, Devendra H.; Zhou, Xiaohui; Kim, Hye-Young; Call, Douglas R.; Guard, Jean

    2012-01-01

    Salmonella enterica serovar Enteritidis is an important food-borne pathogen, and chickens are a primary reservoir of human infection. While most knowledge about Salmonella pathogenesis is based on research conducted on Salmonella enterica serovar Typhimurium, S. Enteritidis is known to have pathobiology specific to chickens that impacts epidemiology in humans. Therefore, more information is needed about S. Enteritidis pathobiology in comparison to that of S. Typhimurium. We used transposon mutagenesis to identify S. Enteritidis virulence genes by assay of invasiveness in human intestinal epithelial (Caco-2) cells and chicken liver (LMH) cells and survival within chicken (HD-11) macrophages as a surrogate marker for virulence. A total of 4,330 transposon insertion mutants of an invasive G1 Nalr strain were screened using Caco-2 cells. This led to the identification of attenuating mutations in a total of 33 different loci, many of which include genes previously known to contribute to enteric infection (e.g., Salmonella pathogenicity island 1 [SPI-1], SPI-4, SPI-5, CS54, fliH, fljB, csgB, spvR, and rfbMN) in S. Enteritidis and other Salmonella serovars. Several genes or genomic islands that have not been reported previously (e.g., SPI-14, ksgA, SEN0034, SEN2278, and SEN3503) or that are absent in S. Typhimurium or in most other Salmonella serovars (e.g., pegD, SEN1152, SEN1393, and SEN1966) were also identified. Most mutants with reduced Caco-2 cell invasiveness also showed significantly reduced invasiveness in chicken liver cells and impaired survival in chicken macrophages and in egg albumen. Consequently, these genes may play an important role during infection of the chicken host and also contribute to successful egg contamination by S. Enteritidis. PMID:22988017
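    For scale, the probability that a library of N random insertions hits a given gene at least once is 1 − (1 − L_gene/L_genome)^N. A back-of-envelope check for the 4,330-mutant library described above (assuming a ~4.7-Mb genome and a 1-kb average gene, both approximations not stated in the abstract):

    ```python
    def p_gene_hit(n_mutants: int, gene_len: int = 1_000, genome_len: int = 4_700_000) -> float:
        """Chance a gene is disrupted at least once, assuming random insertion."""
        return 1.0 - (1.0 - gene_len / genome_len) ** n_mutants

    print(round(p_gene_hit(4_330), 2))  # 0.6 — the screen is far from saturating
    ```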

  20. Optimized BlaM-transposon shuttle mutagenesis of Helicobacter pylori allows the identification of novel genetic loci involved in bacterial virulence.

    PubMed

    Odenbreit, S; Till, M; Haas, R

    1996-04-01

    Helicobacter pylori is an important etiologic agent of gastroduodenal disease in humans. In this report, we describe a general genetic approach for the identification of genes encoding exported proteins in H. pylori. The novel TnMax9 mini-blaM transposon was used for insertion mutagenesis of an H. pylori gene library established in Escherichia coli. A total of 192 E. coli clones expressing active beta-lactamase fusion proteins (BlaM+) were obtained, indicating that the corresponding target plasmids carry H. pylori genes encoding putative extracytoplasmic proteins. Natural transformation of H. pylori P1 or P12 using the 192 mutant plasmids resulted in 135 distinct H. pylori mutant strains (70%). Screening of the H. pylori collection of mutant strains allowed the identification of mutant strains impaired in motility, in natural transformation competence and in adherence to gastric epithelial cell lines. Motility mutants could be grouped into distinct classes: (i) mutant strains lacking the major flagellin subunit FlaA and intact flagella (class I); (ii) mutant strains with apparently normal flagella, but reduced motility (class II); and (iii) mutant strains with apparently normal flagella, but completely abolished motility (class III). Two independent mutations that exhibited defects in natural competence for genetic transformation mapped to different genetic loci. In addition, two independent mutant strains were isolated by their failure to bind to the human gastric carcinoma cell line KatoIII. Both mutant strains carried a transposon in the same gene, 0.8 kb apart, and showed decreased autoagglutination when compared to the wild-type strain. PMID:8733234

  2. Silent Mischief: Bacteriophage Mu Insertions Contaminate Products of Escherichia coli Random Mutagenesis Performed Using Suicidal Transposon Delivery Plasmids Mobilized by Broad-Host-Range RP4 Conjugative Machinery


    PubMed Central

    Ferrières, Lionel; Hémery, Gaëlle; Nham, Toan; Guérout, Anne-Marie; Mazel, Didier; Beloin, Christophe; Ghigo, Jean-Marc

    2010-01-01

    Random transposon mutagenesis is the strategy of choice for associating a phenotype with its unknown genetic determinants. It is generally performed by mobilization of a conditionally replicating vector delivering transposons to recipient cells using broad-host-range RP4 conjugative machinery carried by the donor strain. In the present study, we demonstrate that bacteriophage Mu, which was deliberately introduced during the original construction of the widely used donor strains SM10 λpir and S17-1 λpir, is silently transferred to Escherichia coli recipient cells at high frequency, both by Hfr-type transfer and by release of Mu particles by the donor strain. Our findings suggest that bacteriophage Mu could have contaminated many random-mutagenesis experiments performed on Mu-sensitive species with these popular donor strains, leading to potential misinterpretation of the transposon mutant phenotype and therefore perturbing analysis of mutant screens. To circumvent this problem, we precisely mapped the Mu insertions in SM10 λpir and S17-1 λpir and constructed a new Mu-free donor strain, MFDpir, harboring stable Hfr-deficient RP4 conjugative functions and sustaining replication of π-dependent suicide vectors. This strain can therefore be used with most of the available transposon-delivering plasmids and should enable more efficient and easy-to-analyze mutant hunts in E. coli and other Mu-sensitive RP4 host bacteria. PMID:20935093

  3. Large-scale insertional mutagenesis of Chlamydomonas supports phylogenomic functional prediction of photosynthetic genes and analysis of classical acetate-requiring mutants.

    PubMed

    Dent, Rachel M; Sharifi, Marina N; Malnoë, Alizée; Haglund, Cat; Calderon, Robert H; Wakao, Setsuko; Niyogi, Krishna K

    2015-04-01

    Chlamydomonas reinhardtii is a unicellular green alga that is a key model organism in the study of photosynthesis and oxidative stress. Here we describe the large-scale generation of a population of insertional mutants that have been screened for phenotypes related to photosynthesis and the isolation of 459 flanking sequence tags from 439 mutants. Recent phylogenomic analysis has identified a core set of genes, named GreenCut2, that are conserved in green algae and plants. Many of these genes are likely to be central to the process of photosynthesis, and they are over-represented by sixfold among the screened insertional mutants, with insertion events isolated in or adjacent to 68 of 597 GreenCut2 genes. This enrichment thus provides experimental support for functional assignments based on previous bioinformatic analysis. To illustrate one of the uses of the population, a candidate gene approach based on genome position of the flanking sequence of the insertional mutant CAL027_01_20 was used to identify the molecular basis of the classical C. reinhardtii mutation ac17. These mutations were shown to affect the gene PDH2, which encodes a subunit of the plastid pyruvate dehydrogenase complex. The mutants and associated flanking sequence data described here are publicly available to the research community, and they represent one of the largest phenotyped collections of algal insertional mutants to date.
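The "over-represented by sixfold" statement is the kind of claim a hypergeometric enrichment test supports. A sketch of that calculation, where only the 68-of-597 GreenCut2 figure comes from the abstract; the genome-wide gene count (~17,000) and the number of tagged genes (330, chosen so the toy numbers roughly reproduce a sixfold enrichment) are illustrative assumptions:

```python
# Hypergeometric enrichment sketch. N, n are illustrative assumptions;
# K and k are the GreenCut2 figures quoted in the abstract.
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when drawing n genes from N total, K in the category."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

N = 17_000  # assumed total annotated genes (illustrative)
K = 597     # GreenCut2 genes
n = 330     # assumed distinct genes tagged in the screen (illustrative)
k = 68      # GreenCut2 genes hit (from the abstract)

fold = (k / n) / (K / N)       # observed vs. expected category fraction
p = hypergeom_sf(k, N, K, n)   # tail probability of seeing >= k hits
print(f"fold enrichment ~{fold:.1f}, P(X>={k}) = {p:.2e}")
```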

  5. PiggyBac Transposon-Mediated Mutagenesis in Rats Reveals a Crucial Role of Bbx in Growth and Male Fertility.

    PubMed

    Wang, Chieh-Ying; Tang, Ming-Chu; Chang, Wen-Chi; Furushima, Kenryo; Jang, Chuan-Wei; Behringer, Richard R; Chen, Chun-Ming

    2016-09-01

    Bobby sox homolog (Bbx) is an evolutionarily conserved gene, but its biological function remains elusive. Here, we characterized defects of Bbx mutant rats that were created by PiggyBac-mediated insertional mutagenesis. Smaller body size and male infertility were the two major phenotypes of homozygous Bbx mutants. Bbx expression profile analysis showed that Bbx was more highly expressed in the testis and pituitary gland than in other organs. Histology and hormonal gene expression analysis of control and Bbx-null pituitary glands showed that Bbx appeared to be dispensable for pituitary histogenesis and the expression of major hormones. BBX was localized in the nuclei of postmeiotic spermatids and Sertoli cells in wild-type testes, but absent in mutant testes. An increased presence of aberrant multinuclear giant cells and apoptotic cells was observed in mutant seminiferous tubules. TUNEL-positive cells costained with CREM (round spermatid marker), but not PLZF (spermatogonia marker), γH2AX (meiotic spermatocyte marker), or GATA4 (Sertoli cell marker). Finally, the numbers and motility of epididymal sperm from Bbx-null rats were drastically reduced. These results suggest that loss of BBX induces apoptosis of postmeiotic spermatids and results in spermiogenesis defects and infertility. PMID:27465138

  6. Transposon Mutagenesis of the Plant-Associated Bacillus amyloliquefaciens ssp. plantarum FZB42 Revealed That the nfrA and RBAM17410 Genes Are Involved in Plant-Microbe-Interactions

    PubMed Central

    Dietel, Kristin; Beator, Barbara; Dolgova, Olga; Fan, Ben; Bleiss, Wilfrid; Ziegler, Jörg; Schmid, Michael; Hartmann, Anton; Borriss, Rainer

    2014-01-01

    Bacillus amyloliquefaciens ssp. plantarum FZB42 represents the prototype of Gram-positive plant growth promoting and biocontrol bacteria. In this study, we applied transposon mutagenesis to generate a transposon library, which was screened for genes involved in multicellular behavior and biofilm formation on roots as a prerequisite of plant growth promoting activity. Transposon insertion sites were determined by rescue-cloning followed by DNA sequencing. As in B. subtilis, the global transcriptional regulator DegU was identified as an activator of genes necessary for swarming and biofilm formation, and the DegU-mutant of FZB42 was found impaired in efficient root colonization. Direct screening of 3,000 transposon insertion mutants for plant-growth-promotion revealed the gene products of nfrA and RBAM17410 to be essential for beneficial effects exerted by FZB42 on plants. We analyzed the performance of GFP-labeled wild-type and transposon mutants in the colonization of lettuce roots using confocal laser scanning microscopy. While the wild-type strain heavily colonized root surfaces, the nfrA mutant did not colonize lettuce roots, although it was not impaired in growth in laboratory cultures, biofilm formation and swarming motility on agar plates. The RBAM17410 gene, occurring in only a few members of the B. subtilis species complex, was directly involved in plant growth promotion. None of the mutant strains were affected in producing the plant growth hormone auxin. We hypothesize that the nfrA gene product is essential for overcoming the stress caused by the plant response towards bacterial root colonization. PMID:24847778

  7. Precise excision and self-integration of a composite transposon as a model for spontaneous large-scale chromosome inversion/deletion of the Staphylococcus haemolyticus clinical strain JCSC1435.

    PubMed

    Watanabe, Shinya; Ito, Teruyo; Morimoto, Yuh; Takeuchi, Fumihiko; Hiramatsu, Keiichi

    2007-04-01

    Large-scale chromosomal inversions (455 to 535 kbp) or deletions (266 to 320 kbp) were found to accompany spontaneous loss of beta-lactam resistance during drug-free passage of the multiresistant Staphylococcus haemolyticus clinical strain JCSC1435. Identification and sequencing of the rearranged chromosomal loci revealed that ISSha1 of S. haemolyticus is responsible for the chromosome rearrangements.

  8. Mutator and MULE Transposons.

    PubMed

    Lisch, Damon

    2015-04-01

    The Mutator system of transposable elements (TEs) is a highly mutagenic family of transposons in maize. Because they transpose at high rates and target genic regions, these transposons can rapidly generate large numbers of new mutants, which has made the Mutator system a favored tool for both forward and reverse mutagenesis in maize. Low copy number versions of this system have also proved to be excellent models for understanding the regulation and behavior of Class II transposons in plants. Notably, the availability of a naturally occurring locus that can heritably silence autonomous Mutator elements has provided insights into the means by which otherwise active transposons are recognized and silenced. This chapter will provide a review of the biology, regulation, evolution and uses of this remarkable transposon system, with an emphasis on recent developments in our understanding of the ways in which this TE system is recognized and epigenetically silenced as well as recent evidence that Mu-like elements (MULEs) have had a significant impact on the evolution of plant genomes.

  9. Misty somites, a maternal effect gene identified by transposon-mediated insertional mutagenesis in zebrafish that is essential for the somite boundary maintenance.

    PubMed

    Kotani, Tomoya; Kawakami, Koichi

    2008-04-15

    Somite boundary formation is crucial for segmentation of vertebrate somites and vertebrae and skeletal muscle morphogenesis. Previously, we developed a Tol2 transposon-mediated gene trap method in zebrafish. In the present study, we aimed to isolate transposon insertions that trap maternally-expressed genes. We found that homozygous female fish carrying a transposon insertion within a maternally-expressed gene misty somites (mys) produced embryos that showed obscure somite boundaries at the early segmentation stage (12-13 hpf). The somite boundaries became clear and distinct after this period and the embryos survived to adulthood. This phenotype was rescued by expression of mys cDNA in the homozygous adults, confirming that it was caused by decreased mys activity. We analyzed the role of the mys gene using morpholino oligonucleotides (MOs). The MO-injected embryos exhibited more severe phenotypes than the insertional mutant, probably because the mys gene retained partial activity in the insertional mutant. The MO-injected embryos also showed the obscure somite boundary phenotype. Fibronectin and phosphorylated FAK accumulated at the intersomitic boundaries at this stage, but, unlike in wild-type embryos, somitic cells adjacent to the boundaries did not undergo epithelialization, suggesting that Mys is required for epithelialization of the somitic cells. In the MO-injected embryos, the boundaries initially became clear and distinct but disappeared at subsequent stages, resulting in abnormal muscle morphogenesis. Accumulation of Fibronectin and phosphorylated FAK observed in the initial stage also disappeared. Thus, Mys is crucial for maintenance of the somite boundaries formed at the initial stage. To analyze the mys defect at the cellular level, we placed cells dissociated from the MO-injected embryos on Fibronectin-coated glasses. By this cell spreading assay, we found that the mys-deficient cells showed a reduced ability to form lamellipodia on

  10. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  11. Genome Sequencing and Transposon Mutagenesis of Burkholderia seminalis TC3.4.2R3 Identify Genes Contributing to Suppression of Orchid Necrosis Caused by B. gladioli.

    PubMed

    Araújo, Welington L; Creason, Allison L; Mano, Emy T; Camargo-Neves, Aline A; Minami, Sonia N; Chang, Jeff H; Loper, Joyce E

    2016-06-01

    From a screen of 36 plant-associated strains of Burkholderia spp., we identified 24 strains that suppressed leaf and pseudobulb necrosis of orchid caused by B. gladioli. To gain insights into the mechanisms of disease suppression, we generated a draft genome sequence from one suppressive strain, TC3.4.2R3. The genome is an estimated 7.67 megabases in size, with three replicons: two chromosomes and the plasmid pC3. Using a combination of multilocus sequence analysis and phylogenomics, we identified TC3.4.2R3 as B. seminalis, a species within the Burkholderia cepacia complex that includes opportunistic human pathogens and environmental strains. We generated and screened a library of 3,840 transposon mutants of strain TC3.4.2R3 on orchid leaves to identify genes contributing to plant disease suppression. Twelve mutants deficient in suppression of leaf necrosis were selected and the transposon insertions were mapped to eight loci. One gene is in a wcb cluster that is related to synthesis of extracellular polysaccharide, a key determinant in bacterial-host interactions in other systems, and the other seven are highly conserved among Burkholderia spp. The fundamental information developed in this study will serve as a resource for future research aiming to identify mechanisms contributing to biological control. PMID:26959838

  13. Genome-Wide Transposon Mutagenesis Reveals a Role for pO157 Genes in Biofilm Development in Escherichia coli O157:H7 EDL933

    PubMed Central

    Puttamreddy, Supraja; Cornick, Nancy A.; Minion, F. Chris

    2010-01-01

    Enterohemorrhagic Escherichia coli O157:H7, a world-wide human food-borne pathogen, causes mild to severe diarrhea, hemorrhagic colitis, and hemolytic uremic syndrome. The ability of this pathogen to persist in the environment contributes to its dissemination to a wide range of foods and food processing surfaces. Biofilms are thought to be involved in persistence, but the process of biofilm formation is complex and poorly understood in E. coli O157:H7. To better understand the genetics of this process, a mini-Tn5 transposon insertion library was constructed in strain EDL933 and screened for biofilm-negative mutants using a microtiter plate assay. Ninety-five of 11,000 independent insertions (0.86%) were biofilm negative, and transposon insertions were located in 51 distinct genes/intergenic regions that must be involved either directly or indirectly in biofilm formation. All of the 51 biofilm-negative mutants showed reduced biofilm formation on both hydrophilic and hydrophobic surfaces. Thirty-six genes were unique to this study, including genes on the virulence plasmid pO157. The type V secreted autotransporter serine protease EspP and the enterohemolysin translocator EhxD were found to be directly involved in biofilm formation. In addition, EhxD and EspP were also important for adherence to T84 intestinal epithelial cells, suggesting a role for these genes in tissue interactions in vivo. PMID:20351142
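Microtiter-plate screens of this kind typically call biofilm-negative wells by comparing blank-corrected crystal-violet signal against the wild type. A hedged sketch of that scoring step; the OD values, mutant names, and the 30% cutoff are invented for illustration (the study's exact criteria are not given in the abstract):

```python
# Illustrative scoring of a crystal-violet biofilm assay. All OD
# readings and the 0.3 cutoff are assumptions, not study values.

def biofilm_negative(od_mutant, od_wild_type, od_blank, cutoff=0.3):
    """True if the blank-corrected mutant signal falls below `cutoff`
    times the blank-corrected wild-type signal."""
    return (od_mutant - od_blank) < cutoff * (od_wild_type - od_blank)

wt, blank = 1.20, 0.10
plate = {"espP::Tn5": 0.18, "ehxD::Tn5": 0.25, "neutral::Tn5": 1.05}
negatives = sorted(m for m, od in plate.items()
                   if biofilm_negative(od, wt, blank))
print(negatives)  # ['ehxD::Tn5', 'espP::Tn5']
```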

  14. Transposon facilitated DNA sequencing

    SciTech Connect

    Berg, D.E.; Berg, C.M.; Huang, H.V.

    1990-01-01

    The purpose of this research is to investigate and develop methods that exploit the power of bacterial transposable elements for large-scale DNA sequencing. Our premise is that the use of transposons to put primer binding sites randomly in target DNAs should provide access to all portions of large DNA fragments, without the inefficiencies of methods involving random subcloning and attendant repetitive sequencing, or of sequential synthesis of many oligonucleotide primers that are used to march systematically along a DNA molecule. Two unrelated bacterial transposons, Tn5 and γδ, are being used because they have both proven useful for molecular analyses, and because they differ sufficiently in mechanism and specificity of transposition to merit parallel development.
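The premise that random primer-site insertions give access to all portions of a large fragment can be made quantitative with the standard Lander-Waterman coverage model: with n insertions in a G-bp fragment, each yielding ~L bp of readable sequence, the expected uncovered fraction is about e^(-nL/G). A back-of-the-envelope sketch under illustrative read-length and fragment-size assumptions:

```python
# Coverage arithmetic for randomly placed sequencing primer sites.
# The 40 kb fragment and 400 bp read length are illustrative.
from math import ceil, exp, log

def uncovered_fraction(n_insertions, read_len, fragment_len):
    """Expected fraction of the fragment with no read coverage."""
    return exp(-n_insertions * read_len / fragment_len)

def insertions_needed(read_len, fragment_len, target_uncovered=0.01):
    """Insertions needed so the expected uncovered fraction drops
    below `target_uncovered`."""
    return ceil(-fragment_len * log(target_uncovered) / read_len)

# e.g. a 40 kb insert with ~400 bp of sequence per insertion:
n = insertions_needed(read_len=400, fragment_len=40_000)
print(n, uncovered_fraction(n, 400, 40_000))
```

The exponential tail is why such strategies need severalfold oversampling: halving the uncovered fraction always costs the same fixed number of additional insertions.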

  15. Rapid 96-well plates DNA extraction and sequencing procedures to identify genome-wide transposon insertion sites in a difficult to lyse bacterium: Lactobacillus casei.

    PubMed

    Scornec, Hélène; Tichit, Magali; Bouchier, Christiane; Pédron, Thierry; Cavin, Jean-François; Sansonetti, Philippe J; Licandro-Seraut, Hélène

    2014-11-01

    Random transposon mutagenesis followed by adequate screening methods is an indispensable procedure for characterizing the genetics of bacterial adaptation to environmental changes. We recently constructed a mutant library of Lactobacillus casei and aimed to fully annotate it. However, we observed that, for L. casei, which is a difficult-to-lyse bacterium, the methods used to identify the transposon insertion site in a few mutants (transposon rescue by restriction and recircularization, or PCR-based methods) could not be scaled up to larger numbers because they are too time-consuming and sometimes unreliable. Here, we describe a method for large-scale and reliable identification of transposon insertion sites in a L. casei mutant library of 9,250 mutants. A DNA extraction procedure based on silica membranes in 96-column format was optimized to obtain genomic DNA from a large number of mutants, and direct genomic sequencing was then adapted to the resulting DNA extracts. Using this procedure, readable and identifiable sequences were obtained for 87% of the L. casei mutants. This method extends the applications of a library of this type, reduces the number of insertions that need to be screened, and allows selection of specific mutants from an arrayed and stored mutant library. It is applicable to any existing mutant library (obtained by transposon or insertional mutagenesis) and could be useful for other bacterial species, especially highly lysis-resistant species such as lactic acid bacteria.
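Annotating such a library from direct genomic sequencing reads reduces to a string-matching step: locate the transposon end in each read, and whatever follows is the flanking genomic sequence used to place the insertion. A minimal sketch; the 19-bp Tn5 mosaic-end sequence is shown purely for illustration, and the read is a made-up placeholder:

```python
# Sketch of extracting the transposon-genome junction from a read.
# END is the well-known 19-bp Tn5 mosaic end; the read is invented.

def flanking_sequence(read, transposon_end):
    """Return the genomic sequence 3' of the transposon end in a read,
    or None if the end sequence is not found."""
    i = read.find(transposon_end)
    if i == -1:
        return None
    return read[i + len(transposon_end):]

END = "AGATGTGTATAAGAGACAG"  # Tn5 mosaic end, for illustration
read = "TTACG" + END + "GGCATTCCAAGT"
print(flanking_sequence(read, END))  # GGCATTCCAAGT
```

In practice the extracted flank would then be aligned to the reference genome to assign the disrupted gene; reads lacking the end sequence (here returning None) are the unreadable fraction the abstract alludes to.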

  16. Mini-Tn5 transposon derivatives for insertion mutagenesis, promoter probing, and chromosomal insertion of cloned DNA in gram-negative eubacteria.

    PubMed Central

    de Lorenzo, V; Herrero, M; Jakubzik, U; Timmis, K N

    1990-01-01

    A collection of Tn5-derived minitransposons has been constructed that simplifies substantially the generation of insertion mutants, in vivo fusions with reporter genes, and the introduction of foreign DNA fragments into the chromosome of a variety of gram-negative bacteria, including the enteric bacteria and typical soil bacteria like Pseudomonas species. The minitransposons consist of genes specifying resistance to kanamycin, chloramphenicol, streptomycin-spectinomycin, and tetracycline as selection markers and a unique NotI cloning site flanked by 19-base-pair terminal repeat sequences of Tn5. Further derivatives also contain lacZ, phoA, luxAB, or xylE genes devoid of their native promoters located next to the terminal repeats in an orientation that affords the generation of gene-operon fusions. The transposons are located on a R6K-based suicide delivery plasmid that provides the IS50R transposase tnp gene in cis but external to the mobile element and whose conjugal transfer to recipients is mediated by RP4 mobilization functions in the donor. PMID:2172217

  17. Transposon-mediated transgenesis in the frog: New tools for biomedical and developmental studies.

    PubMed

    Yergeau, Donald Albert; Mead, Paul Evan

    2009-01-01

    The amphibian Xenopus laevis has been an excellent developmental model for over half a century. The large egg size, external fertilization and simple husbandry make this frog an ideal tool to study early vertebrate development. The tetraploid genome and long generation time, however, have hindered the use of X. laevis in large-scale genetic efforts. A close West African relative, Xenopus tropicalis (also commonly known as Silurana tropicalis), overcomes the limitations of X. laevis in genetic studies as X. tropicalis has a diploid genome and breeding adults can be obtained in six to nine months. We have focused our efforts on developing transposon systems for efficient transgenesis and insertional mutagenesis in the frog. Transposon systems have been used for transgenesis in a wide variety of model organisms. In this review, we will discuss the advantages and limitations of different transposon systems for generating transgenic Xenopus. In addition, we will describe strategies for the identification of novel genes through an insertional mutagenesis approach using transposable elements.

  18. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
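As a contrast to multi-hypothesis tracking, the simplest association strategy commits greedily to the nearest unused detection each frame, trading optimality for hypothesis counts that grow linearly rather than combinatorially. An illustrative sketch (the points, Manhattan metric, and gating distance are made-up choices, not the report's method):

```python
# Greedy nearest-neighbour data association: a hedged, illustrative
# baseline, not the algorithm evaluated in the report.

def associate(tracks, detections, gate=5.0):
    """Greedily match each track to its nearest unused detection
    within `gate`; returns {track_index: detection_index}."""
    pairs = sorted(
        (abs(t[0] - d[0]) + abs(t[1] - d[1]), ti, di)  # Manhattan distance
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    used_t, used_d, match = set(), set(), {}
    for dist, ti, di in pairs:
        if dist <= gate and ti not in used_t and di not in used_d:
            match[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return match

tracks = [(0.0, 0.0), (10.0, 10.0)]
detections = [(9.0, 11.0), (1.0, 0.5)]
print(associate(tracks, detections))  # {0: 1, 1: 0}
```

The gate is what keeps a track from seizing a distant spurious detection; multi-hypothesis trackers instead defer exactly these commitments, which is where the combinatorial cost comes from.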

  19. Tol2 transposon-mediated transgenesis in Xenopus tropicalis.

    PubMed

    Hamlet, Michelle R Johnson; Yergeau, Donald A; Kuliyev, Emin; Takeda, Masatoshi; Taira, Masanori; Kawakami, Koichi; Mead, Paul E

    2006-09-01

    The diploid frog Xenopus tropicalis is becoming a powerful developmental genetic model system. Sequencing of the X. tropicalis genome is nearing completion and several labs are embarking on mutagenesis screens. We are interested in developing insertional mutagenesis strategies in X. tropicalis. Transposon-mediated insertional mutagenesis, once used exclusively in plants and invertebrate systems, is now more widely applicable to vertebrates. The first step in developing transposons as tools for mutagenesis is to demonstrate that these mobile elements function efficiently in the target organism. Here, we show that the Medaka fish transposon, Tol2, is able to stably integrate into the X. tropicalis genome and will serve as a powerful tool for insertional mutagenesis strategies in the frog.

  20. Transposon tools: worldwide landscape of intellectual property and technological developments.

    PubMed

    Palazzoli, Fabien; Testu, François-Xavier; Merly, Franck; Bigot, Yves

    2010-03-01

    DNA transposons are considered to be good candidates for developing tools for genome engineering, insertional mutagenesis and gene delivery for therapeutic purposes, as illustrated by the recent first clinical trial of a transposon. In this article we set out to highlight the value of patent information and to develop a strategy for the technological development of transposon tools, similar to what has been done in many other fields. We propose a patent landscape for transposon tools, including the changes in international patent applications, and review the leading inventors and applicants. We also provide an overview of the potential patent portfolio for the prokaryotic and eukaryotic transposons that are exploited by spin-off companies. Finally, we discuss the difficulties involved in tracing the relevant state of the art in articles and patent documents, based on the example of one of the most promising transposon systems, and its impact on the technological development of transposon tools.

  1. Transposon-mediated Genome Manipulations in Vertebrates

    PubMed Central

    Ivics, Zoltán; Li, Meng Amy; Mátés, Lajos; Boeke, Jef D.; Bradley, Allan; Izsvák, Zsuzsanna

    2010-01-01

    Transposable elements are segments of DNA with the unique ability to move about in the genome. This inherent feature can be exploited to harness these elements as gene vectors for diverse genome manipulations. Transposon-based genetic strategies have been established in vertebrate species over the last decade, and current progress in this field indicates that transposable elements will serve as indispensable tools in the genetic toolkit of vertebrate models. In particular, transposons can be applied as vectors for somatic and germline transgenesis, and as insertional mutagens in both loss-of-function and gain-of-function forward mutagenesis screens. The major advantage of using transposons as genetic tools is that they facilitate analysis of gene function in an easy, controlled and scalable manner. Transposon-based technologies are beginning to be exploited to link sequence information to gene functions in vertebrate models. In this article, we provide an overview of transposon-based methods used in vertebrate model organisms, and highlight the most important considerations concerning genetic applications of the transposon systems. PMID:19478801

  2. Fighting an old war with a new weapon--silencing transposons by Piwi-interacting RNA.

    PubMed

    Guo, Manhong; Wu, Yuliang

    2013-09-01

    Discovered six decades ago, transposons are known to selfishly multiply within and between chromosomes. Although they may play a creative role in building new functional parts of the genome, transposons usually cause insertional mutagenesis and/or turn nearby genes on or off. To maintain genome integrity, cells use a variety of strategies to defend against the proliferation of transposons. A class of small noncoding RNA, discovered seven years ago and called piRNA, is a new player in the war to silence transposons. piRNA is made via two biogenesis pathways: the primary processing pathway and the ping-pong amplification loop. These pathways are critically involved in transposon RNA degradation, DNA methylation, and histone modification machinery that represses transposons. In this review, we briefly introduce transposon-caused genomic instability and summarize our current understanding of the piRNA pathway, focusing on its key function in transposon silencing.

  3. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  4. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.
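The two-point correlation functions mentioned here are estimated from pair counts; the simplest ("natural") estimator is ξ(r) = DD/RR − 1, comparing the number of data pairs at a given separation against a random, unclustered catalogue of the same size. A 1-D toy sketch (the positions, cluster placement, and separation bin are all illustrative):

```python
# Toy two-point correlation estimate via pair counting. Everything
# here is synthetic: two artificial 1-D "clusters" versus a uniform
# random catalogue of equal size.
import random

def pair_count(points, r_lo, r_hi):
    """Number of pairs with separation in [r_lo, r_hi)."""
    n = len(points)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if r_lo <= abs(points[i] - points[j]) < r_hi)

def xi(data, randoms, r_lo, r_hi):
    """Natural estimator xi = DD/RR - 1 in one separation bin."""
    dd = pair_count(data, r_lo, r_hi)
    rr = pair_count(randoms, r_lo, r_hi)
    return dd / rr - 1 if rr else float("nan")

random.seed(0)
data = ([random.gauss(10, 0.5) for _ in range(50)]
        + [random.gauss(50, 0.5) for _ in range(50)])
randoms = [random.uniform(0, 60) for _ in range(100)]
print(xi(data, randoms, 0.0, 1.0))  # strongly positive: excess close pairs
```

Real analyses use less biased estimators (e.g. Landy-Szalay) and 3-D separations, but the pair-counting core is the same.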

  5. DNA Transposons: Nature and Applications in Genomics

    PubMed Central

    Muñoz-López, Martín; García-Pérez, José L.

    2010-01-01

    Repeated DNA makes up a large fraction of a typical mammalian genome, and some repetitive elements are able to move within the genome (transposons and retrotransposons). DNA transposons move from one genomic location to another by a cut-and-paste mechanism. They are powerful forces of genetic change and have played a significant role in the evolution of many genomes. As genetic tools, DNA transposons can be used to introduce a piece of foreign DNA into a genome. Indeed, they have been used for transgenesis and insertional mutagenesis in different organisms, since these elements are not generally dependent on host factors to mediate their mobility. Thus, DNA transposons are useful tools to analyze the regulatory genome, study embryonic development, identify genes and pathways implicated in disease or pathogenesis of pathogens, and even contribute to gene therapy. In this review, we will describe the nature of these elements and discuss recent advances in this field of research, as well as our evolving knowledge of the DNA transposons most widely used in these studies. PMID:20885819
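The cut-and-paste mechanism can be caricatured in a few lines: excise the element from its donor site, reinsert it at a target position, and duplicate a short target-site sequence (TSD) on insertion. A toy model with invented sequences and an arbitrary 2-bp TSD (real elements differ in TSD length and leave donor-site footprints this sketch ignores):

```python
# Toy cut-and-paste transposition with target-site duplication.
# Sequences, coordinates, and the 2-bp TSD length are illustrative.

def cut_and_paste(genome, tn_start, tn_end, target, tsd_len=2):
    """Excise genome[tn_start:tn_end] and reinsert it at `target`
    (a coordinate in the original genome), duplicating `tsd_len`
    bases of the target site on either side of the element."""
    tn = genome[tn_start:tn_end]
    donor = genome[:tn_start] + genome[tn_end:]  # excision
    if target > tn_start:                        # re-index after excision
        target -= len(tn)
    tsd = donor[target:target + tsd_len]
    return donor[:target + tsd_len] + tn + tsd + donor[target + tsd_len:]

g = "aaaaTTTTgggg"                    # transposon = TTTT at 4..8
print(cut_and_paste(g, 4, 8, target=8))  # aaaaggTTTTgggg
```

Note the net effect visible in the output: the element moves, and one extra copy of the 2-bp target site appears, so the genome grows by the TSD length; retrotransposons, by contrast, move copy-and-paste and leave the donor copy in place.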

  6. Large-scale mapping of mutations affecting zebrafish development

    PubMed Central

    Geisler, Robert; Rauch, Gerd-Jörg; Geiger-Rudolph, Silke; Albrecht, Andrea; van Bebber, Frauke; Berger, Andrea; Busch-Nentwich, Elisabeth; Dahm, Ralf; Dekens, Marcus PS; Dooley, Christopher; Elli, Alexandra F; Gehring, Ines; Geiger, Horst; Geisler, Maria; Glaser, Stefanie; Holley, Scott; Huber, Matthias; Kerr, Andy; Kirn, Anette; Knirsch, Martina; Konantz, Martina; Küchler, Axel M; Maderspacher, Florian; Neuhauss, Stephan C; Nicolson, Teresa; Ober, Elke A; Praeg, Elke; Ray, Russell; Rentzsch, Brit; Rick, Jens M; Rief, Eva; Schauerte, Heike E; Schepp, Carsten P; Schönberger, Ulrike; Schonthaler, Helia B; Seiler, Christoph; Sidi, Samuel; Söllner, Christian; Wehner, Anja; Weiler, Christian; Nüsslein-Volhard, Christiane

    2007-01-01

    Background Large-scale mutagenesis screens in the zebrafish employing the mutagen ENU have isolated several hundred mutant loci that represent putative developmental control genes. In order to realize the potential of such screens, systematic genetic mapping of the mutations is necessary. Here we report on a large-scale effort to map the mutations generated in mutagenesis screening at the Max Planck Institute for Developmental Biology by genome scanning with microsatellite markers. Results We have selected a set of microsatellite markers and developed methods and scoring criteria suitable for efficient, high-throughput genome scanning. We have used these methods to successfully obtain a rough map position for 319 mutant loci from the Tübingen I mutagenesis screen and subsequent screening of the mutant collection. For 277 of these the corresponding gene is not yet identified. Mapping was successful for 80 % of the tested loci. By comparing 21 mutation and gene positions of cloned mutations we have validated the correctness of our linkage group assignments and estimated the standard error of our map positions to be approximately 6 cM. Conclusion By obtaining rough map positions for over 300 zebrafish loci with developmental phenotypes, we have generated a dataset that will be useful not only for cloning of the affected genes, but also to suggest allelism of mutations with similar phenotypes that will be identified in future screens. Furthermore this work validates the usefulness of our methodology for rapid, systematic and inexpensive microsatellite mapping of zebrafish mutations. PMID:17212827

  7. Transposon transgenesis in Xenopus

    PubMed Central

    Yergeau, Donald A.; Kelley, Clair M.; Zhu, Haiqing; Kuliyev, Emin; Mead, Paul E.

    2010-01-01

    Transposon-mediated integration strategies in Xenopus offer simple and robust methods for the generation of germline transgenic animals. Co-injection of fertilized one-cell embryos with plasmid DNA harboring a transposon transgene and synthetic mRNA encoding the cognate transposase enzyme results in mosaic integration of the transposon at early cleavage stages that are frequently passed through the germline in the adult animal. Micro-injection of fertilized embryos is a routine procedure used by many laboratories that use Xenopus as a developmental model and, as such, the transposon transgenesis method can be performed without additional equipment or specialized methodologies. The methods for injecting Xenopus embryos are well documented in the literature so here we provide a step-by-step guide to other aspects of transposon transgenesis, including screening mosaic founders for germline transmission of the transgene and general husbandry considerations related to management of populations of transgenic frogs. PMID:20211730

  8. Transposon transgenesis in Xenopus.

    PubMed

    Yergeau, Donald A; Kelley, Clair M; Zhu, Haiqing; Kuliyev, Emin; Mead, Paul E

    2010-05-01

    Transposon-mediated integration strategies in Xenopus offer simple and robust methods for the generation of germline transgenic animals. Co-injection of fertilized one-cell embryos with plasmid DNA harboring a transposon transgene and synthetic mRNA encoding the cognate transposase enzyme results in mosaic integration of the transposon at early cleavage stages that are frequently passed through the germline in the adult animal. Micro-injection of fertilized embryos is a routine procedure used by many laboratories that use Xenopus as a developmental model and, as such, the transposon transgenesis method can be performed without additional equipment or specialized methodologies. The methods for injecting Xenopus embryos are well documented in the literature so here we provide a step-by-step guide to other aspects of transposon transgenesis, including screening mosaic founders for germline transmission of the transgene and general husbandry considerations related to management of populations of transgenic frogs.

  9. Functional genomics: Probing plant gene function and expression with transposons

    PubMed Central

    Martienssen, Robert A.

    1998-01-01

    Transposable elements provide a convenient and flexible means to disrupt plant genes, so allowing their function to be assessed. By engineering transposons to carry reporter genes and regulatory signals, the expression of target genes can be monitored and to some extent manipulated. Two strategies for using transposons to assess gene function are outlined here: First, the PCR can be used to identify plants that carry insertions into specific genes from among pools of heavily mutagenized individuals (site-selected transposon mutagenesis). This method requires that high copy transposons be used and that a relatively large number of reactions be performed to identify insertions into genes of interest. Second, a large library of plants, each carrying a unique insertion, can be generated. Each insertion site then can be amplified and sequenced systematically. These two methods have been demonstrated in maize, Arabidopsis, and other plant species, and the relative merits of each are discussed in the context of plant genome research. PMID:9482828

  10. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  11. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large-scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware fabrication and testing are complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems uncovered during development testing; reviews test data and evaluates weaknesses of the design; and discusses areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.

  12. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  13. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  14. Phylogenetic and Functional Characterization of the hAT Transposon Superfamily

    PubMed Central

    Arensburger, Peter; Hice, Robert H.; Zhou, Liqin; Smith, Ryan C.; Tom, Ariane C.; Wright, Jennifer A.; Knapp, Joshua; O'Brochta, David A.; Craig, Nancy L.; Atkinson, Peter W.

    2011-01-01

    Transposons are found in virtually all organisms and play fundamental roles in genome evolution. They can also acquire new functions in the host organism and some have been developed as incisive genetic tools for transformation and mutagenesis. The hAT transposon superfamily contains members from the plant and animal kingdoms, some of which are active when introduced into new host organisms. We have identified two new active hAT transposons, AeBuster1, from the mosquito Aedes aegypti and TcBuster from the red flour beetle Tribolium castaneum. Activity of both transposons is illustrated by excision and transposition assays performed in Drosophila melanogaster and Ae. aegypti and by in vitro strand transfer assays. These two active insect transposons are more closely related to the Buster sequences identified in humans than they are to the previously identified active hAT transposons, Ac, Tam3, Tol2, hobo, and Hermes. We therefore reexamined the structural and functional relationships of hAT and hAT-like transposase sequences extracted from genome databases and found that the hAT superfamily is divided into at least two families. This division is supported by a difference in target-site selections generated by active transposons of each family. We name these families the Ac and Buster families after the first identified transposon or transposon-like sequence in each. We find that the recently discovered SPIN transposons of mammals are located within the family of Buster elements. PMID:21368277

  15. Transgenesis in Xenopus using the Sleeping Beauty transposon system.

    PubMed

    Yergeau, Donald A; Johnson Hamlet, Michelle R; Kuliyev, Emin; Zhu, Haiqing; Doherty, Joanne R; Archer, Taylor D; Subhawong, Andrea P; Valentine, Marc B; Kelley, Clair M; Mead, Paul E

    2009-07-01

    Transposon-based integration systems have been widely used for genetic manipulation of invertebrate and plant model systems. In the past decade, these powerful tools have begun to be used in vertebrates for transgenesis, insertional mutagenesis, and gene therapy applications. Sleeping Beauty (SB) is a member of Tc1/mariner class of transposases and is derived from an inactive form of the gene isolated from Atlantic salmon. SB has been used extensively in human cell lines and in whole animal vertebrate model systems such as the mouse, rat, and zebrafish. In this study, we describe the use of SB in the diploid frog Xenopus tropicalis to generate stable transgenic lines. SB transposon transgenes integrate into the X. tropicalis genome by a noncanonical process and are passed through the germline. We compare the activity of SB in this model organism with that of Tol2, a hAT (hobo, Ac1, TAM)-like transposon system.

  16. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  17. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology recently. Pietronero and his co-workers have strongly advocated that the fractal behaviour in the galaxy distribution extends to the largest scale observed (≈ 1000 h⁻¹ Mpc) with the fractal dimension D ≈ 2. Most cosmologists who hold to the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await the results of the next generation of galaxy redshift surveys.

  18. Multiple independent defective suppressor-mutator transposon insertions in Arabidopsis: a tool for functional genomics.

    PubMed Central

    Tissier, A F; Marillonnet, S; Klimyuk, V; Patel, K; Torres, M A; Murphy, G; Jones, J D

    1999-01-01

    A new system for insertional mutagenesis based on the maize Enhancer/Suppressor-mutator (En/Spm) element was introduced into Arabidopsis. A single T-DNA construct carried a nonautonomous defective Spm (dSpm) element with a phosphinothricin herbicide resistance (BAR) gene, a transposase expression cassette, and a counterselectable gene. This construct was used to select for stable dSpm transpositions. Treatments for both positive (BAR) and negative selection markers were applicable to soil-grown plants, allowing the recovery of new transpositions on a large scale. To date, a total of 48,000 lines in pools of 50 have been recovered, of which approximately 80% result from independent insertion events. DNA extracted from these pools was used in reverse genetic screens, either by polymerase chain reaction (PCR) using primers from the transposon and the targeted gene or by the display of insertions whereby inverse PCR products of insertions from the DNA pools are spotted on a membrane that is then hybridized with the probe of interest. By sequencing PCR-amplified fragments adjacent to insertion sites, we established a sequenced insertion-site database of 1200 sequences. This database permitted a comparison of the chromosomal distribution of transpositions from various T-DNA locations. PMID:10521516

  19. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  20. Probe mapping to facilitate transposon-based DNA sequencing

    SciTech Connect

    Strausbaugh, L.D.; Bourke, M.T.; Sommer, M.T.; Coon, M.E.; Berg, C.M. )

    1990-08-01

    A promising strategy for DNA sequencing exploits transposons to provide mobile sites for the binding of sequencing primers. For such a strategy to be maximally efficient, the location and orientation of the transposon must be readily determined and the insertion sites should be randomly distributed. The authors demonstrate an efficient probe-based method for the localization and orientation of transposon-borne primer sites, which is adaptable to large-scale sequencing strategies. This approach requires no prior restriction enzyme mapping or knowledge of the cloned sequence and eliminates the inefficiency inherent in totally random sequencing methods. To test the efficiency of probe mapping, 49 insertions of the transposon γδ (Tn1000) in a cloned fragment of Drosophila melanogaster DNA were mapped and oriented. In addition, oligonucleotide primers specific for unique subterminal γδ segments were used to prime dideoxynucleotide double-stranded sequencing. These data provided an opportunity to rigorously examine γδ insertion sites. The insertions were quite randomly distributed, even though the target DNA fragment had both A+T-rich and G+C-rich regions; in G+C-rich DNA, the insertions were found in A+T-rich valleys. These data demonstrate that γδ is an excellent choice for supplying mobile primer binding sites to cloned DNA and that transposon-based probe mapping permits the sequences of large cloned segments to be determined without any subcloning.
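
The randomness of insertion sites claimed above can be checked statistically. A minimal sketch, using simulated insertion coordinates and an invented fragment length (not the study's data), applies a chi-square test of uniformity to binned site counts:

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
fragment_length = 10_000   # bp of the cloned target fragment (assumed value)
n_insertions = 49          # number of mapped insertions, as in the study

# Simulated insertion sites drawn uniformly at random along the fragment.
sites = rng.integers(0, fragment_length, size=n_insertions)

# Bin the sites into 7 equal windows; under the null hypothesis of random
# insertion, each window is expected to hold 49/7 = 7 sites.
observed, _ = np.histogram(sites, bins=7, range=(0, fragment_length))
stat, p = chisquare(observed)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```

A small p-value would indicate clustering (e.g. insertions concentrating in A+T-rich valleys); for truly random sites the test usually fails to reject uniformity.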

  1. Large-scale motions in the universe

    SciTech Connect

    Rubin, V.C.; Coyne, G.V.

    1988-01-01

    The present conference on the large-scale motions of the universe covers the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low Z. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and N-body simulations of a universe dominated by cold dark matter.

  2. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large-scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  3. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
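
The beam steering described above follows from the classical array-factor formula: applying a linear phase gradient across the emitters moves the main lobe. A minimal sketch, with an assumed emitter pitch and wavelength (not the authors' design parameters; the 8 × 8 size echoes their dynamic-steering demo), computes the far-field pattern along one axis:

```python
import numpy as np

N = 8                  # 8 x 8 array; a 2-D separable pattern is the product of two 1-D factors
d = 9e-6               # emitter pitch in metres (assumed)
lam = 1.55e-6          # optical wavelength in metres (assumed)
k = 2 * np.pi / lam

theta_steer = np.deg2rad(2.0)              # desired steering angle
phase_step = -k * d * np.sin(theta_steer)  # per-element phase gradient

thetas = np.deg2rad(np.linspace(-5, 5, 2001))
n_idx = np.arange(N)
# 1-D array factor: coherent sum of N emitters with the steering phase applied.
af = np.abs(np.exp(1j * (k * d * np.sin(thetas)[:, None] + phase_step) * n_idx).sum(axis=1))

peak_angle = np.rad2deg(thetas[np.argmax(af)])
print(round(peak_angle, 1))  # main lobe lands at the 2.0-degree steering angle
```

Within the plotted window the main lobe sits exactly where the phase gradient cancels the geometric path difference; grating lobes appear only at much larger angles for this pitch.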

  4. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.

  5. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  6. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires only order n rather than the usual order n² floating-point operations.
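
ARPACK is most conveniently reached through high-level wrappers. A minimal sketch using SciPy's `eigs` (which calls ARPACK internally) to extract a few extremal eigenvalues of a sparse tridiagonal matrix, checked against the known analytic spectrum:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs  # SciPy's wrapper around ARPACK

n = 100
# Sparse tridiagonal matrix: 2 on the diagonal, -1 off-diagonal (1-D Laplacian).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Ask ARPACK for the 4 eigenvalues of largest magnitude; internally only
# matrix-vector products with A are performed, never a dense factorization.
vals, vecs = eigs(A, k=4, which="LM")

# Analytic eigenvalues of this matrix: 2 - 2*cos(j*pi/(n+1)), j = 1..n.
exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
largest4 = np.sort(exact)[-4:]
print(np.allclose(np.sort(vals.real), largest4, atol=1e-8))
```

This matrix-free interface is exactly what makes ARPACK suitable for the "large sparse or structured" case the abstract describes: the user supplies only the action of A on a vector.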

  7. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  8. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared in a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  9. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  10. Generation of a Transposon Mutant Library in Staphylococcus aureus and Staphylococcus epidermidis Using bursa aurealis.

    PubMed

    Yajjala, Vijaya Kumar; Widhelm, Todd J; Endres, Jennifer L; Fey, Paul D; Bayles, Kenneth W

    2016-01-01

    Transposon mutagenesis is a genetic process that involves the random insertion of transposons into a genome resulting in the disruption of function of the genes in which they insert. Identification of the insertion sites through DNA sequencing allows for the identification of the genes disrupted and the creation of "libraries" containing a collection of mutants in which a large number of the nonessential genes have been disrupted. These mutant libraries have been a great resource for investigators to understand the various biological functions of individual genes, including those involved in metabolism, antibiotic susceptibility, and pathogenesis. Here, we describe the detailed methodologies for constructing a sequence-defined transposon mutant library in both Staphylococcus aureus and S. epidermidis using the mariner-based transposon, bursa aurealis.
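
Computationally, turning sequenced insertion sites into "genes disrupted" is an interval lookup against the genome annotation. A toy sketch with invented gene names and coordinates (illustrating only the coordinate-to-gene step, not the wet-lab protocol):

```python
from bisect import bisect_right

# Annotated genes as (start, end, name), sorted by start and non-overlapping.
# These coordinates are invented for illustration.
genes = [
    (100, 1_300, "geneA"),
    (2_000, 3_500, "geneB"),
    (4_200, 5_900, "geneC"),
]
starts = [g[0] for g in genes]

def disrupted_gene(insertion_site):
    """Return the gene containing the insertion site, or None if intergenic."""
    i = bisect_right(starts, insertion_site) - 1
    if i >= 0 and genes[i][0] <= insertion_site <= genes[i][1]:
        return genes[i][2]
    return None

print(disrupted_gene(2_500))  # geneB
print(disrupted_gene(1_700))  # None (intergenic insertion)
```

Aggregating such lookups over all sequenced reads yields the sequence-defined library: each mutant keyed by the gene its transposon interrupts.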

  11. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  12. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  13. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  14. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large-scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative-eddy-viscosity scaling σ ∝ q² in its absence. This holds not only in the Re ≪ 1 regime, where previously derived asymptotic results are verified, but also at Re = O(1), beyond their range of validity. Furthermore, for values of Re above a critical value ReSc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.
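The diagnostic separating the two regimes in this abstract is the exponent α in σ ∝ q^α: α ≈ 1 signals an AKA effect, α ≈ 2 a negative eddy viscosity. A minimal sketch of extracting that exponent from measured (q, σ) pairs (hypothetical data, not the paper's):

```python
import math

def scaling_exponent(qs, sigmas):
    """Fit sigma = A * q**alpha by least squares in log-log space."""
    xs = [math.log(q) for q in qs]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

qs = [0.01, 0.02, 0.05, 0.1]
sigma_eddy = [0.3 * q ** 2 for q in qs]  # hypothetical sigma ∝ q^2 data
print(round(scaling_exponent(qs, sigma_eddy), 2))  # → 2.0: no AKA effect
```

The same fit applied to growth rates that scale linearly in q would return an exponent near 1, the AKA-effect signature.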

  15. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈ 10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  16. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  18. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  19. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  20. Defining essential genes and identifying virulence factors of Porphyromonas gingivalis by massively-parallel sequencing of transposon libraries (Tn-seq)

    PubMed Central

    Klein, Brian A.; Duncan, Margaret J.; Hu, Linden T.

    2016-01-01

Porphyromonas gingivalis is a keystone pathogen in the development and progression of periodontal disease. Obstacles to the development of saturated transposon libraries have previously limited transposon mutant-based screens as well as essential gene studies. We have developed a system for efficient transposon mutagenesis of P. gingivalis using a modified mariner transposon. Tn-seq is a technique that allows for quantitative assessment of individual mutants within a transposon mutant library by sequencing the transposon-genome junctions and then compiling mutant presence by mapping to a base genome. Using Tn-seq, it is possible to quickly define all the insertional mutants in a library and thus identify non-essential genes under the conditions in which the library was produced. Identification of fitness of individual mutants under specific conditions can be performed by exposing the library to selective pressures. PMID:25636611
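As a hedged illustration of the counting step in Tn-seq (hypothetical coordinates, not P. gingivalis data): once transposon-genome junctions are mapped, insertions are tallied per gene, and genes that are never hit in a saturated library become candidate essential genes:

```python
# Hypothetical gene intervals (start, end) on a toy genome, and mapped
# transposon-genome junction positions from sequencing.
genes = {"geneA": (0, 1000), "geneB": (1000, 2000), "geneC": (2000, 3000)}
insertions = [120, 450, 451, 900, 2100, 2100, 2500]

counts = {name: 0 for name in genes}
for pos in insertions:
    for name, (start, end) in genes.items():
        if start <= pos < end:
            counts[name] += 1
            break

# Genes with zero insertions in a saturated library are candidate essentials.
essential = [g for g, c in counts.items() if c == 0]
print(counts, essential)  # geneB receives no insertions
```

A real pipeline would additionally normalize for gene length and sequencing depth, and compare counts before and after applying a selective pressure to score fitness.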

  1. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a link between the LSB and the underlying flare is clearly evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of interacting flare loops and the giant arch phenomenology.

  2. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  3. Large-scale dynamics and global warming

    SciTech Connect

Held, I.M.

    1993-02-01

    Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.

  4. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  5. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  6. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  7. Fishing for answers with transposons.

    PubMed

    Wadman, Shannon A; Clark, Karl J; Hackett, Perry B

    2005-01-01

Transposons are one means that nature has used to introduce new genetic material into chromosomes of organisms from every kingdom. They have been extensively used in prokaryotic and lower eukaryotic systems, but until recently there was no transposon that had significant activity in vertebrates. The Sleeping Beauty (SB) transposon system was developed to direct the integration of precise DNA sequences into chromosomes. The SB system was derived from salmonid sequences that had been inactive for more than 10 million years. SB transposons have been used for two principal purposes: as a vector for transgenesis and as a method for introducing various trap vectors into (gene-trap) or in the neighborhood of (enhancer-trap) genes to identify their functions. Results of these studies show that SB-mediated transgenesis is more efficient than transgenesis by injection of simple plasmids, and that expression of transgenes is stable and reliable following passage through the germline.

  8. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  9. Transposon tagging of disease resistance genes

    SciTech Connect

Michelmore, R.W. (Dept. of Physics)

    1989-01-01

We are developing a transposon mutagenesis system for lettuce to clone genes for resistance to the fungal pathogen, Bremia lactucae. Activity of heterologous transposons is being studied in transgenic plants. Southern analysis of T₁ and T₂ plants containing Tam3 from Antirrhinum provided ambiguous results. Multiple endonuclease digests indicated that transposition had occurred; however, in no plant were all endonuclease digests consistent with a simple excision event. Southern or PCR analysis of over 50 plants containing Ac from maize has also failed to reveal clear evidence of transposition; this is in contrast to experiments by others with the same constructs, who have observed high rates of Ac excision in other plant species. Nearly all of 65 T₂ families containing Ac interrupting a chimeric streptomycin resistance gene (courtesy J. Jones, Sainsbury Lab., UK) clearly segregated for streptomycin resistance. Southern analyses, however, showed no evidence of transposition, indicating restoration of a functional message by other mechanisms, possibly mRNA processing. Transgenic plants have also been generated containing CaMV 35S or hsp70 promoters fused to transposase coding sequences, or a Ds element interrupting a chimeric GUS gene (courtesy M. Lassner, UC Davis). F₁ plants containing both constructs were analyzed for transposition. Only two plants containing both constructs were obtained from 48 progeny, far fewer than expected, and neither showed evidence of transposition in Southern blots and GUS assays. We are currently constructing further chimeric transposase fusions. To test for the stability of the targeted disease resistance genes, 50,000 F₁ plants heterozygous for three resistance genes were generated; no mutants have been identified in the 5,000 screened so far.

  10. Horizontal SPINning of transposons.

    PubMed

    Gilbert, Clément; Pace, John K; Feschotte, Cédric

    2009-01-01

The term 'horizontal transfer (HT)' refers to the transfer of genetic material between two reproductively isolated organisms. HT is thought to occur rarely in eukaryotes compared to vertical inheritance, the transmission of DNA from parent to offspring. In a recent study we provided evidence that a family of DNA transposons, called SPACE INVADERS or SPIN, independently and horizontally invaded the genomes of seven distantly related tetrapod species and subsequently amplified to high copy number in each of them. This discovery calls for further investigations to better characterize the extent to which genomes have been shaped by HT events. In this addendum, we briefly discuss some general issues regarding the study of HT and further speculate on the sequence of events that could explain the current taxonomic distribution of SPIN. We propose that the presence of SPIN in the opossum (Monodelphis domestica), a taxon endemic to South America, reflects a transoceanic HT event from the Old World to the New World between 46 and 15 million years ago.

  11. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

The organization of high technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  12. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.
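The element-to-element correlations reported here can be illustrated with a standard Pearson coefficient; the concentrations below are hypothetical values for illustration, not the Belgian survey data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two trace-element series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Ca and P concentrations (wt%) in five enamel samples:
ca = [37.1, 36.8, 37.5, 36.2, 37.0]
p = [17.9, 17.7, 18.1, 17.4, 17.8]
r = pearson(ca, p)
print(round(r, 3))  # strongly positive, as expected for apatite constituents
```

In a population-scale study the full matrix of such pairwise coefficients, computed over all measured elements, is what feeds the cluster analysis mentioned in the abstract.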

  13. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  14. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  15. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³, proton fractions 0.05

  16. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
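The sub-cell clumping factor used in this approach is conventionally C = ⟨n²⟩/⟨n⟩², which boosts recombination rates (∝ n²) relative to a uniform medium of the same mean density. A minimal sketch with made-up cell densities:

```python
def clumping_factor(densities):
    """C = <n^2> / <n>^2; C = 1 for a uniform medium, C > 1 for clumpy gas."""
    n = len(densities)
    mean = sum(densities) / n
    mean_sq = sum(d * d for d in densities) / n
    return mean_sq / mean ** 2

print(clumping_factor([1.0, 1.0, 1.0, 1.0]))        # → 1.0 (uniform)
print(round(clumping_factor([0.1, 0.1, 0.1, 3.7]), 2))  # > 1 (clumpy)
```

In the simulation described, such factors measured on small-scale runs stand in for unresolved structure inside each 10 h⁻¹ Mpc cell of the large box.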

  17. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  18. Large scale structures in transitional pipe flow

    NASA Astrophysics Data System (ADS)

    Hellström, Leo; Ganapathisubramani, Bharathram; Smits, Alexander

    2015-11-01

    We present a dual-plane snapshot POD analysis of transitional pipe flow at a Reynolds number of 3440, based on the pipe diameter. The time-resolved high-speed PIV data were simultaneously acquired in two planes, a cross-stream plane (2D-3C) and a streamwise plane (2D-2C) on the pipe centerline. The two light sheets were orthogonally polarized, allowing particles situated in each plane to be viewed independently. In the snapshot POD analysis, the modal energy is based on the cross-stream plane, while the POD modes are calculated using the dual-plane data. We present results on the emergence and decay of the energetic large scale motions during transition to turbulence, and compare these motions to those observed in fully developed turbulent flow. Supported under ONR Grant N00014-13-1-0174 and ERC Grant No. 277472.
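Snapshot POD as described reduces to an eigendecomposition of the temporal correlation matrix of the snapshots. A hedged sketch on synthetic data (NumPy; not the PIV dataset of the abstract):

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD (Sirovich's method): modes come from eigenvectors of the
    temporal correlation matrix C[i, j] = <u_i, u_j> / n_t."""
    n_t = snapshots.shape[1]
    C = snapshots.T @ snapshots / n_t
    energy, coeffs = np.linalg.eigh(C)      # ascending eigenvalues
    order = np.argsort(energy)[::-1]        # sort by modal energy, descending
    energy, coeffs = energy[order], coeffs[:, order]
    modes = snapshots @ coeffs              # lift back to physical space
    modes /= np.linalg.norm(modes, axis=0)  # normalize each spatial mode
    return modes, energy

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 1, 40)
# Two standing structures of very different energy, plus weak noise.
snaps = (2.0 * np.outer(np.sin(x), np.cos(2 * np.pi * t))
         + 0.5 * np.outer(np.sin(3 * x), np.sin(2 * np.pi * t))
         + 0.01 * rng.standard_normal((64, 40)))
modes, energy = snapshot_pod(snaps)
print(energy[0] > energy[1])  # leading mode carries the most energy
```

In the dual-plane variant of the abstract, the correlation matrix (and hence the energy ranking) is built from the cross-stream plane alone, while the modes themselves are assembled from both planes' data.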

  19. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

  20. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  1. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  2. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  4. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  5. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
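    The core of any SQP iteration is solving a quadratic programming subproblem built from the Lagrangian Hessian and constraint Jacobian. As a hedged, generic illustration of the method class (not the thesis's MINOS-based implementation, and using the exact Hessian rather than a quasi-Newton reduced-Hessian estimate), the sketch below performs one SQP step on a small equality-constrained problem by solving the KKT system directly.

```python
# One SQP iteration on an equality-constrained toy problem (a sketch of the
# method class studied in the thesis, not the MINOS-based code).
# Problem: minimize f(x) = (x0-1)^2 + (x1-2.5)^2  subject to  c(x) = x0+x1-3 = 0.
# Each SQP step solves the KKT system  [H  A^T; A  0] [p; lam] = [-g; -c].

def solve3(M, b):
    """Tiny Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for k in range(3):
        piv = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, 3):
            r = M[i][k] / M[k][k]
            M[i] = [a - r * b_ for a, b_ in zip(M[i], M[k])]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def sqp_step(x):
    g = [2 * (x[0] - 1.0), 2 * (x[1] - 2.5)]  # gradient of f
    H = [[2.0, 0.0], [0.0, 2.0]]              # exact Hessian (f is quadratic)
    A = [1.0, 1.0]                            # Jacobian of the constraint c
    c = x[0] + x[1] - 3.0
    KKT = [[H[0][0], H[0][1], A[0]],
           [H[1][0], H[1][1], A[1]],
           [A[0],    A[1],    0.0]]
    p0, p1, lam = solve3(KKT, [-g[0], -g[1], -c])
    return [x[0] + p0, x[1] + p1], lam

x, lam = sqp_step([2.0, 0.0])
print(x)  # quadratic objective + linear constraint: one step reaches [0.75, 2.25]
```

    For large-scale problems, the thesis replaces the dense KKT solve with a reduced-gradient null-space basis and a quasi-Newton approximation of only the reduced Hessian, which is exactly where the storage savings come from.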

  6. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  7. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  8. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
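    The "ad-hoc SQL over scientific data" idea mentioned above can be sketched with Python's standard-library SQLite bindings. The schema and values below are invented for illustration; a real ASCI-scale dataset would of course stress exactly the scalability limits the paper discusses.

```python
# Hedged sketch of ad-hoc SQL queries over simulation-style records, using the
# stdlib sqlite3 module. Schema and data are toy assumptions for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cells (step INTEGER, x REAL, temperature REAL)")
rows = [(0, 0.1, 300.0), (0, 0.2, 310.0), (1, 0.1, 305.0), (1, 0.2, 325.0)]
conn.executemany("INSERT INTO cells VALUES (?, ?, ?)", rows)

# Ad-hoc query: per-timestep peak temperature, ready to hand to a plotting tool.
peaks = conn.execute(
    "SELECT step, MAX(temperature) FROM cells GROUP BY step ORDER BY step"
).fetchall()
print(peaks)  # [(0, 310.0), (1, 325.0)]
```

    Coupling such query results to a visualization toolkit is the enhancement the paper argues for; the open question it raises is whether the DBMS layer can keep up with terabyte-scale, array-oriented simulation output.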

  9. Transposon tagging and the study of root development in Arabidopsis

    NASA Technical Reports Server (NTRS)

    Tsugeki, R.; Olson, M. L.; Fedoroff, N. V.

    1998-01-01

    The maize Ac-Ds transposable element family has been used as the basis of transposon mutagenesis systems that function in a variety of plants, including Arabidopsis. We have developed modified transposons and methods which simplify the detection, cloning and analysis of insertion mutations. We have identified and are analyzing two plant lines in which genes expressed either in the root cap cells or in the quiescent cells, cortex/endodermal initial cells and columella cells of the root cap have been tagged with a transposon carrying a reporter gene. A gene expressed in root cap cells tagged with an enhancer-trap Ds was isolated and its corresponding EST cDNA was identified. Nucleotide and deduced amino acid sequences of the gene show no significant similarity to other genes in the database. Genetic ablation experiments have been done by fusing a root cap-specific promoter to the diphtheria toxin A-chain gene and introducing the fusion construct into Arabidopsis plants. We find that in addition to eliminating gravitropism, root cap ablation inhibits elongation of roots by lowering root meristematic activities.

  10. Cancer gene discovery: exploiting insertional mutagenesis

    PubMed Central

    Ranzani, Marco; Annunziato, Stefano; Adams, David J.; Montini, Eugenio

    2013-01-01

    Insertional mutagenesis has been utilized as a functional forward genetics screen for the identification of novel genes involved in the pathogenesis of human cancers. Different insertional mutagens have been successfully used to reveal new cancer genes. For example, retroviruses (RVs) are integrating viruses with the capacity to induce the deregulation of genes in the neighborhood of the insertion site. RVs have been employed for more than 30 years to identify cancer genes in the hematopoietic system and mammary gland. Similarly, another tool that has revolutionized cancer gene discovery is the cut-and-paste transposons. These DNA elements have been engineered to contain strong promoters and stop cassettes that may function to perturb gene expression upon integration proximal to genes. In addition, complex mouse models characterized by tissue-restricted activity of transposons have been developed to identify oncogenes and tumor suppressor genes that control the development of a wide range of solid tumor types, extending beyond those tissues accessible using RV-based approaches. Most recently, lentiviral vectors (LVs) have appeared on the scene for use in cancer gene screens. LVs are replication defective integrating vectors that have the advantage of being able to infect non-dividing cells, in a wide range of cell types and tissues. In this review, we describe the various insertional mutagens focusing on their advantages/limitations and we discuss the new and promising tools that will improve the insertional mutagenesis screens of the future. PMID:23928056

  11. Large-scale mouse knockouts and phenotypes.

    PubMed

    Ramírez-Solis, Ramiro; Ryder, Edward; Houghton, Richard; White, Jacqueline K; Bottomley, Joanna

    2012-01-01

    Standardized phenotypic analysis of mutant forms of every gene in the mouse genome will provide fundamental insights into mammalian gene function and advance human and animal health. The availability of the human and mouse genome sequences, the development of embryonic stem cell mutagenesis technology, the standardization of phenotypic analysis pipelines, and the paradigm-shifting industrialization of these processes have made this a realistic and achievable goal. The size of this enterprise will require global coordination to ensure economies of scale in both the generation and primary phenotypic analysis of the mutant strains, and to minimize unnecessary duplication of effort. To provide more depth to the functional annotation of the genome, effective mechanisms will also need to be developed to disseminate the information and resources produced to the wider community. Better models of disease, potential new drug targets with novel mechanisms of action, and completely unsuspected genotype-phenotype relationships covering broad aspects of biology will become apparent. To reach these goals, solutions to challenges in mouse production and distribution, as well as development of novel, ever more powerful phenotypic analysis modalities will be necessary. It is a challenging and exciting time to work in mouse genetics.

  12. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150° C, coming quite close to typical operating conditions up to 125° C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure.
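    The practical consequence of the lowered activation energy can be illustrated with the Arrhenius term of Black's equation, using the two Ea values quoted in the abstract. The stress temperature used below is an assumed example, not a value from the paper.

```python
# Arrhenius extrapolation of electromigration lifetime from a stress test to
# operating temperature. The activation energies are the abstract's values
# (0.83 eV early mode vs. 0.90 eV interface diffusion); the 300 C stress
# temperature is an assumption for illustration.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_stress_c, t_use_c):
    """Lifetime ratio t50(use)/t50(stress) from the Arrhenius term of Black's equation."""
    t_stress, t_use = t_stress_c + 273.15, t_use_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

af_early = acceleration_factor(0.83, 300.0, 125.0)  # early (slit-void) mode
af_late  = acceleration_factor(0.90, 300.0, 125.0)  # interface-diffusion mode
# The lower Ea yields less lifetime gain when extrapolating down to use
# conditions, which is why the early mode can dominate at operation.
print(af_early < af_late)  # True
```

    This is exactly the argument behind the abstract's conclusion: a mode with lower Ea loses relatively less lifetime margin at operating temperature, so it can end up controlling chip lifetime even if it is rare at stress conditions.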

  13. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  14. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
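    The structure of such a "people fed" estimate is simple arithmetic: calories the land can produce divided by per-capita calorie needs. The sketch below uses invented round numbers (area, yields, energy density, dietary requirement), not the paper's land-deal dataset, purely to show the shape of the calculation.

```python
# Back-of-envelope version of the "people fed" estimate. All inputs are
# illustrative assumptions, not values from the paper's dataset.

def people_fed(area_ha, yield_t_per_ha, kcal_per_kg, kcal_per_person_day=2300.0):
    """Annual calories produced on the land divided by per-capita annual need."""
    kcal_per_year = area_ha * yield_t_per_ha * 1000.0 * kcal_per_kg
    return kcal_per_year / (kcal_per_person_day * 365.0)

# e.g. 40 million ha of cereal-equivalent cropland (assumed):
low  = people_fed(40e6, 2.0, 3000.0)  # current-technology (yield-gap) scenario
high = people_fed(40e6, 4.0, 3000.0)  # yield-gap-closed scenario
print(f"{low/1e6:.0f}-{high/1e6:.0f} million people")
```

    Doubling the yield doubles the estimate, which mirrors how the paper's two ranges (with and without closing the yield gap) relate to each other.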

  15. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  16. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N -body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳30 Mpc h-1 , especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background split results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.
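    At lowest order, the void biasing model referred to above reduces to a linear-bias relation between the void and matter power spectra. The sketch below illustrates only that leading-order relation; the toy matter spectrum and bias value are assumptions for illustration, and the exclusion and 1-loop corrections the paper fits are omitted.

```python
# Minimal sketch of the linear void-bias model: on large scales the void
# auto-power spectrum is P_vv(k) ~ b_c^2 P_mm(k) (exclusion and 1-loop terms
# omitted). The toy matter spectrum and bias value below are assumptions.

def p_matter_toy(k):
    """Toy power spectrum: rises ~k on large scales, falls past a turnover."""
    k_eq = 0.02  # turnover scale in h/Mpc (assumed)
    return 1e6 * k / (1.0 + (k / k_eq) ** 2) ** 2

def p_void_linear(k, b_c):
    """Linear-bias void auto-spectrum."""
    return b_c ** 2 * p_matter_toy(k)

b_c = -2.0  # large voids can be anti-biased; value is illustrative
for k in (0.005, 0.01, 0.02):
    print(f"k={k}: P_vv/P_mm = {p_void_linear(k, b_c) / p_matter_toy(k):.1f}")
```

    Treating b_c as a nuisance parameter, as the paper advocates, means marginalizing over this amplitude rather than predicting it from the void finder.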

  17. Simulations of Large Scale Structures in Cosmology

    NASA Astrophysics Data System (ADS)

    Liao, Shihong

    Large-scale structures are powerful probes for cosmology. Due to the long range and non-linear nature of gravity, the formation of cosmological structures is a very complicated problem. The only known viable solution is cosmological N-body simulations. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and the dark matter velocity field. The origin and evolution of angular momenta are an important ingredient for the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum - mass relation for haloes to offer a more complete picture of its origin, its dependence on cosmological models, and its nonlinear evolution. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.

  18. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute results for a representative deep, wide spectroscopic survey and show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
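    The Fisher-matrix machinery behind such forecasts has a compact generic form: with derivatives of the observables with respect to parameters and the data errors, F_ij = Σ_a (∂O_a/∂θ_i)(∂O_a/∂θ_j)/σ_a², and the marginalized 1σ error on θ_i is √((F⁻¹)_ii). The two-parameter sketch below uses toy derivatives and errors, not CLASS outputs.

```python
# Generic two-parameter Fisher forecast sketch. All inputs are toy numbers,
# not the paper's CLASS-derived power spectra.

def fisher_2param(derivs, sigmas):
    """derivs: list of (dO/dtheta0, dO/dtheta1) per data point; sigmas: errors."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for (d0, d1), s in zip(derivs, sigmas):
        w = 1.0 / s ** 2
        F[0][0] += d0 * d0 * w
        F[0][1] += d0 * d1 * w
        F[1][1] += d1 * d1 * w
    F[1][0] = F[0][1]
    return F

def marginalized_errors(F):
    """sqrt of the diagonal of F^-1 for a 2x2 Fisher matrix."""
    det = F[0][0] * F[1][1] - F[0][1] ** 2
    return ((F[1][1] / det) ** 0.5, (F[0][0] / det) ** 0.5)

# Toy data: three bins with different parameter sensitivities.
F = fisher_2param([(1.0, 0.2), (0.8, 0.5), (0.3, 1.0)], [0.1, 0.1, 0.2])
print(marginalized_errors(F))  # (sigma_theta0, sigma_theta1)
```

    The bias analysis in the paper is the companion calculation: a systematic shift in the observables (e.g. neglected magnification) propagates through F⁻¹ into a shift of the best-fit parameters.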

  19. Backscatter in Large-Scale Flows

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu

    2009-11-01

    Downgradient mixing of potential-vorticity and its variants are commonly employed to model the effects of unresolved geostrophic turbulence on resolved scales. This is motivated by the (inviscid and unforced) particle-wise conservation of potential-vorticity and the mean forward or down-scale cascade of potential enstrophy in geostrophic turbulence. By examining the statistical distribution of the transfer of potential enstrophy from mean or filtered motions to eddy or sub-filter motions, we find that the mean forward cascade results from the forward-scatter being only slightly greater than the backscatter. Downgradient mixing ideas do not recognize such equitable mean-eddy or large scale-small scale interactions and consequently model only the mean effect of forward cascade; the importance of capturing the effects of backscatter (the forcing of resolved scales by unresolved scales) is only beginning to be recognized. While recent attempts to model the effects of backscatter on resolved scales have taken a stochastic approach, our analysis suggests that these effects are amenable to being modeled deterministically.

  20. Large scale molecular simulations of nanotoxicity.

    PubMed

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, on how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting enzyme MMP-9, and graphene is shown to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.

  1. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves and soil dissipation effects in full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
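    The dispersion analysis mentioned above, reduced to its simplest 1D analogue, can be sketched with the transfer-matrix relation for a periodic bilayer: cos(q d) = cos(k₁d₁)cos(k₂d₂) − ½(Z₁/Z₂ + Z₂/Z₁)sin(k₁d₁)sin(k₂d₂), where frequencies with |RHS| > 1 lie in a band gap (no propagating Bloch wave). The material values below are generic soil/concrete-like assumptions, not the paper's 3D metamaterial data.

```python
# 1D phononic-crystal band-gap sketch via the bilayer transfer-matrix
# dispersion relation. Layer properties are generic assumptions.
import math

def bloch_rhs(freq_hz, c1, rho1, d1, c2, rho2, d2):
    """Right-hand side of cos(q d) for a periodic two-layer unit cell."""
    w = 2.0 * math.pi * freq_hz
    k1, k2 = w / c1, w / c2          # wavenumbers in each layer
    z1, z2 = rho1 * c1, rho2 * c2    # acoustic impedances
    return (math.cos(k1 * d1) * math.cos(k2 * d2)
            - 0.5 * (z1 / z2 + z2 / z1) * math.sin(k1 * d1) * math.sin(k2 * d2))

def in_band_gap(freq_hz, **layers):
    """|cos(q d)| > 1 means no real Bloch wavenumber: the wave is evanescent."""
    return abs(bloch_rhs(freq_hz, **layers)) > 1.0

layers = dict(c1=400.0, rho1=1800.0, d1=2.0,   # soft, soil-like layer (assumed)
              c2=4000.0, rho2=2400.0, d2=2.0)  # stiff, concrete-like layer (assumed)
gap_freqs = [f for f in range(1, 201) if in_band_gap(float(f), **layers)]
print(gap_freqs[:5], "... Hz lie in a band gap")
```

    The high impedance contrast between layers is what opens gaps at low frequency, the same design principle that the paper's phononic and locally resonant configurations exploit in 3D.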

  2. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
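    The fracture-mechanics approach to fatigue mentioned at the end of the abstract typically rests on Paris-law crack growth, da/dN = C(ΔK)^m with ΔK = YΔS√(πa), integrated from an initial to a critical crack size. The sketch below is a generic illustration with steel-like assumed constants, not the Mod-5B design procedure.

```python
# Hedged Paris-law fatigue sketch: integrate crack growth numerically to count
# cycles until the crack reaches a critical size. Material constants, geometry
# factor, and stress ranges are generic assumptions.
import math

def cycles_to_failure(a0_m, ac_m, dstress_pa, C=1e-11, m=3.0, Y=1.0, da=1e-5):
    """Integrate da/dN = C * (dK)^m from crack size a0 to critical size ac."""
    n, a = 0.0, a0_m
    while a < ac_m:
        dK = Y * dstress_pa * math.sqrt(math.pi * a)  # stress-intensity range, Pa*sqrt(m)
        dK_mpa = dK / 1e6                              # C assumed given in MPa*sqrt(m) units
        n += da / (C * dK_mpa ** m)                    # cycles spent growing by da
        a += da
    return n

n_low  = cycles_to_failure(1e-3, 2e-2, 50e6)   # 50 MPa stress range (assumed)
n_high = cycles_to_failure(1e-3, 2e-2, 100e6)  # doubling the range cuts life ~8x for m=3
print(n_high < n_low)  # True
```

    The steep (ΔS)^m dependence is what makes the fatigue-life requirement so demanding for wind turbines: modest load increases consume design life disproportionately fast.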

  3. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  4. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  5. Transposon tagging of disease resistance genes. Final report, May 1, 1988--April 30, 1993

    SciTech Connect

    Michelmore, R.

    1994-09-01

    The goal of this project was to develop a transposon mutagenesis system for lettuce and to clone and characterize disease resistance genes by transposon tagging. The majority of studies were conducted with the Ac/Ds system. Researchers made and tested several constructs, as well as utilizing constructs shown to be functional in other plant species. Researchers demonstrated movement of Ac and Ds in lettuce; however, they transposed at much lower frequencies in lettuce than in other plant species. Therefore, further manipulation of the system, particularly for flower-specific expression of transposase, is required before a routine transposon system is available for lettuce. Populations of lettuce were generated and screened to test the stability of resistance genes, and several spontaneous mutations were isolated. Researchers also identified a resistance gene mutant in plants transformed with a Ds element and a chimeric transposase gene. This mutant is currently being characterized in detail.

  6. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time-domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time-domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
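The direct-versus-adjoint trade-off sketched in the abstract (one forward solve per parameter versus a single transposed solve reused for all parameters) can be illustrated on a toy steady-state linear system. The 2x2 operator, source derivatives, and objective below are hypothetical stand-ins, not the report's actual models:

```python
def solve2(A, b):
    # Solve a 2x2 linear system A x = b by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

A = [[4.0, 1.0], [2.0, 3.0]]        # state operator (hypothetical)
dfdp = [[1.0, 0.0], [0.0, 1.0]]     # df/dp_k, one vector per parameter
g = [1.0, 0.0]                      # objective weights: J = g . u, with A u = f

# Direct sensitivities: one forward solve per parameter
direct = []
for k in range(2):
    dudp = solve2(A, dfdp[k])
    direct.append(sum(gi * di for gi, di in zip(g, dudp)))

# Adjoint: a single solve of A^T lam = g, then dJ/dp_k = lam . df/dp_k
AT = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
lam = solve2(AT, g)
adjoint = [sum(li * dk for li, dk in zip(lam, dfdp[k])) for k in range(2)]

print(direct)    # [0.3, -0.1]
print(adjoint)   # agrees with the direct result
```

With many parameters and one objective, the adjoint route replaces N forward solves with a single transposed solve, which is the efficiency argument made for the contamination-event reconstruction.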

  7. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors, and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly-complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft while continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and the lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) The specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) The approach to stage specifications (each stage consists of the launched module added to the current on-orbit spacecraft); specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) The verification approach, which, due to the schedule constraints, is primarily analysis supported by testing; specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned: where can we improve this complex system design and integration task?

  8. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat of the combustion event. This proved an effective means of chamber overpressure mitigation for the tests producing the most total heat release and thus was determined to be a feasible mitigation
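The abstract's claim that chamber pressure rise is dominated by heating, with only a small contribution from the change in mole number, can be sketched with a constant-volume ideal-gas estimate. All chamber numbers below are hypothetical illustrations, not the actual test conditions:

```python
R = 8.314             # J/(mol K), gas constant
cv = 20.8             # J/(mol K), roughly a diatomic ideal gas
V = 0.5               # m^3 chamber volume (hypothetical)
P1, T1 = 70e3, 293.0  # sub-atmospheric start, Pa and K (hypothetical)
Q = 50e3              # J of total heat release (hypothetical)
dn = 0.05             # mol net increase in gas from combustion (hypothetical)

n1 = P1 * V / (R * T1)              # initial moles of gas in the chamber
T2 = T1 + Q / ((n1 + dn) * cv)      # crude lumped heating, no wall losses
P2 = (n1 + dn) * R * T2 / V         # final pressure at constant volume

# For small changes, dP/P ~ dT/T + dn/n: the heating term dominates
print(P2 - P1, (T2 - T1) / T1, dn / n1)
```

Even with generous numbers for the mole increase, the fractional temperature rise is two orders of magnitude larger than the fractional mole increase, matching the qualitative conclusion above.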

  9. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  10. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  11. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to its brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the
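The color-tuning and index-matching effects described above follow from the normal-incidence Bragg condition with a volume-weighted effective index. This is a minimal sketch with hypothetical values (a 0.74 close-packed cavity fraction and a generic polymer index of ~1.5), not the study's measured parameters:

```python
import math

def n_eff(n_cavity, n_matrix, f_cavity=0.74):
    # Volume-weighted effective refractive index of the macroporous film;
    # 0.74 is the close-packed cavity fraction (a simplification of
    # real effective-medium models)
    return math.sqrt(f_cavity * n_cavity**2 + (1 - f_cavity) * n_matrix**2)

def peak_wavelength_nm(d_nm, n_cavity, n_matrix):
    # Normal-incidence Bragg condition: lambda = 2 * d * n_eff
    return 2 * d_nm * n_eff(n_cavity, n_matrix)

# Air-filled cavities (n = 1.0) in a polymer of n ~ 1.5: a visible peak
print(peak_wavelength_nm(200, 1.0, 1.5))

# Cavities filled with an index-matched solvent: the contrast vanishes,
# so the film turns transparent, as in the reversible effect above
print(n_eff(1.5, 1.5))   # equals the polymer index, 1.5
```

Tuning the cavity size `d_nm` shifts the reflected peak across the visible spectrum, which is the color-tuning mechanism the abstract describes.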

  12. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
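The recipe in the abstract (agents as attribute sets drawn from distributions, some attributes correlated, plus a relationship layer) can be sketched in a few lines. The population mix, distribution parameters, and link counts below are hypothetical, not the Hats Simulator's actual configuration:

```python
import random

random.seed(0)  # reproducible toy population

def make_agent(kind):
    # Attributes drawn from distributions; weight is deliberately
    # correlated with height, as in the height/weight example above
    height = random.gauss(170, 10)                    # cm
    weight = 0.9 * height - 80 + random.gauss(0, 5)   # kg, correlated
    return {"kind": kind, "height": height, "weight": weight, "links": set()}

# A hypothetical 100-hat society: mostly benign, a few terrorists
kinds = ["benign"] * 90 + ["known terrorist"] * 5 + ["covert terrorist"] * 5
agents = [make_agent(k) for k in kinds]

# A crude relationship layer: a few symmetric links per agent
for i in range(len(agents)):
    for j in random.sample(range(len(agents)), 3):
        if i != j:
            agents[i]["links"].add(j)
            agents[j]["links"].add(i)

print(len(agents), sum(a["kind"] != "benign" for a in agents))   # 100 10
```

Note that covert terrorists are indistinguishable from benign hats at the attribute level here; only behavior over time (not modeled in this sketch) would separate them, which is exactly the analysis challenge the simulator poses.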

  13. The expanding universe of transposon technologies for gene and cell engineering.

    PubMed

    Ivics, Zoltán; Izsvák, Zsuzsanna

    2010-12-07

    Transposable elements can be viewed as natural DNA transfer vehicles that, similar to integrating viruses, are capable of efficient genomic insertion. The mobility of class II transposable elements (DNA transposons) can be controlled by conditionally providing the transposase component of the transposition reaction. Thus, a DNA of interest (be it a fluorescent marker, a small hairpin (sh)RNA expression cassette, a mutagenic gene trap or a therapeutic gene construct) cloned between the inverted repeat sequences of a transposon-based vector can be used for stable genomic insertion in a regulated and highly efficient manner. This methodological paradigm opened up a number of avenues for genome manipulations in vertebrates, including transgenesis for the generation of transgenic cells in tissue culture, the production of germline transgenic animals for basic and applied research, forward genetic screens for functional gene annotation in model species, and therapy of genetic disorders in humans. Sleeping Beauty (SB) was the first transposon shown to be capable of gene transfer in vertebrate cells, and recent results confirm that SB supports a full spectrum of genetic engineering including transgenesis, insertional mutagenesis, and therapeutic somatic gene transfer both ex vivo and in vivo. The first clinical application of the SB system will help to validate both the safety and efficacy of this approach. In this review, we describe the major transposon systems currently available (with special emphasis on SB), discuss the various parameters and considerations pertinent to their experimental use, and highlight the state of the art in transposon technology in diverse genetic applications.

  14. [The improvement and application of piggyBac transposon system in mammals].

    PubMed

    Qian, Qiujie; Che, Jiaqian; Ye, Lupeng; Zhong, Boxiong

    2014-10-01

    The piggyBac (PB) transposon system is a useful genomic engineering tool due to its high transposition efficiency, precise excision, semi-random insertion and large cargo capacity. However, transgenic efficiency still needs to be improved, and the risk of endogenous gene disruption caused by random insertion of the exogenous gene must be reduced, especially in transgenic experiments on individual mammals. In recent studies, the PB transposase has been fused with a DNA binding protein as a chimeric protein, which can guide the transposon to pre-designed loci. In addition, PB transposases obtained by mutagenesis have dramatically enhanced transposition activity and have generated a novel function that is excision-competent and integration-defective. Furthermore, the PB transposon system can carry large exogenous DNA fragments of up to 207 kb when combined with a bacterial artificial chromosome vector. So far, these modified transposon systems have been widely applied in genome studies, gene therapy and induced pluripotent stem cells (iPS cells). In this study, we review the latest studies on the piggyBac transposon system and its application prospects.

  15. Comparative Analysis of the Recently Discovered hAT Transposon TcBuster in Human Cells

    PubMed Central

    Woodard, Lauren E.; Li, Xianghong; Malani, Nirav; Kaja, Aparna; Hice, Robert H.; Atkinson, Peter W.; Bushman, Frederic D.; Craig, Nancy L.; Wilson, Matthew H.

    2012-01-01

    Background Transposons are useful tools for creating transgenic organisms, insertional mutagenesis, and genome engineering. TcBuster, a novel hAT-family transposon system derived from the red flour beetle Tribolium castaneum, was shown to be highly active in previous studies in insect embryos. Methodology/Principal Findings We tested TcBuster for its activity in human embryonic kidney 293 (HEK-293) cells. Excision footprints obtained from HEK-293 cells contained small insertions and deletions consistent with a hAT-type repair mechanism of hairpin formation and non-homologous end-joining. Genome-wide analysis of 23,417 piggyBac, 30,303 Sleeping Beauty, and 27,985 TcBuster integrations in HEK-293 cells revealed an integration pattern with regard to genomic elements that was uniquely different from those of the other transposon systems. TcBuster experimental conditions were optimized to assay TcBuster activity in HEK-293 cells by colony assay selection for a neomycin-containing transposon. Increasing transposon plasmid increased the number of colonies, whereas gene transfer activity dependent on codon-optimized transposase plasmid peaked at 100 ng, with decreased colonies at the highest doses of transposase DNA. Expression of the related human proteins Buster1, Buster3, and SCAND3 in HEK-293 cells did not result in genomic integration of the TcBuster transposon. TcBuster, Tol2, and piggyBac were compared directly at different ratios of transposon to transposase and found to be approximately comparable while having their own ratio preferences. Conclusions/Significance TcBuster was found to be highly active in mammalian HEK-293 cells and represents a promising tool for mammalian genome engineering. PMID:23166581

  16. Generalized transduction for genetic linkage analysis and transfer of transposon insertions in different Staphylococcus epidermidis strains.

    PubMed

    Nedelmann, M; Sabottke, A; Laufs, R; Mack, D

    1998-01-01

    Staphylococcus epidermidis phage 48 was used to efficiently transduce plasmid pTV1ts and a chromosomal Tn917 insertion M27 from S. epidermidis 13-1 to biofilm-producing clinical S. epidermidis isolates 1457, 9142, and 8400. The Tn917 insertion leading to the biofilm-negative phenotype of transposon mutant M10 was sequentially transduced to biofilm-producing S. epidermidis 1457 using S. epidermidis phage 48 and then, using the resulting biofilm-negative transductant 1457-M10 as a donor, into several unrelated biofilm-producing clinical S. epidermidis isolates using S. epidermidis phage 71. All resultant transductants displayed a completely biofilm-negative phenotype. In addition, S. epidermidis phage 71 was adapted to S. epidermidis 1457 and 8400, which allowed generalized transduction of transposon insertions in these wild-type strains. As Tn917 predominantly transposed into endogenous plasmids of all three strains used, an efficient system for chromosomal transposon mutagenesis was established by curing of S. epidermidis 1457 of a single endogenous plasmid p1457 by sodium dodecylsulfate treatment. After transduction of the resulting derivative, S. epidermidis 1457c with pTV1ts, insertion of transposon Tn917 to different sites of the chromosome of S. epidermidis 1457c was observed. Biofilm-producing S. epidermidis 1457c x pTV1ts was used to isolate a biofilm-negative transposon mutant (1457c-M3) with a chromosomal insertion apparently different from two previously isolated isogenic biofilm-negative transposon mutants, M10 and M11 (Mack, D., M. Nedelmann, A. Krokotsch, A. Schwarzkopf, J. Heesemann, and R. Laufs: Infect Immun 62 [1994] 3244-3253). S. epidermidis phage 71 was used to prove genetic linkage between transposon insertion and altered phenotype by generalized transduction. In combination with phage transduction, 1457c x pTV1ts will be a useful tool facilitating the study of bacterial determinants of the pathogenicity of S. epidermidis.

  17. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. 
In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
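A brute-force version of the AllNN primitive used in catalog cross-matching makes the quadratic cost concrete; multitree algorithms obtain their speedups by pruning whole pairs of spatial-tree nodes rather than visiting every point pair. The 2-D "catalogs" below are purely illustrative:

```python
def all_nearest_neighbors(query, reference):
    # Brute force: every query point scans every reference point,
    # O(|Q| * |R|) work; this is the cost multitree methods avoid
    result = []
    for qx, qy in query:
        best = min(reference, key=lambda p: (p[0] - qx) ** 2 + (p[1] - qy) ** 2)
        result.append(best)
    return result

# Two tiny hypothetical catalogs to cross-match
catalog_a = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
catalog_b = [(0.1, 0.1), (4.9, 5.2), (9.0, 9.0)]
print(all_nearest_neighbors(catalog_a, catalog_b))
```

At billions of objects, replacing this double loop with a dual-tree traversal is what brings the runtime down toward linear, as the chapter overview describes.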

  18. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  19. Tyrosine Recombinase Retrotransposons and Transposons.

    PubMed

    Poulter, Russell T M; Butler, Margi I

    2015-04-01

    Retrotransposons carrying tyrosine recombinases (YR) are widespread in eukaryotes. The first described tyrosine recombinase mobile element, DIRS1, is a retroelement from the slime mold Dictyostelium discoideum. The YR elements are bordered by terminal repeats related to their replication via free circular dsDNA intermediates. Site-specific recombination is believed to integrate the circle without creating duplications of the target sites. Recently a large number of YR retrotransposons have been described, including elements from fungi (mucorales and basidiomycetes), plants (green algae) and a wide range of animals including nematodes, insects, sea urchins, fish, amphibia and reptiles. YR retrotransposons can be divided into three major groups: the DIRS elements, PAT-like and the Ngaro elements. The three groups form distinct clades on phylogenetic trees based on alignments of reverse transcriptase/ribonuclease H (RT/RH) and YR sequences, and also having some structural distinctions. A group of eukaryote DNA transposons, cryptons, also carry tyrosine recombinases. These DNA transposons do not encode a reverse transcriptase. They have been detected in several pathogenic fungi and oomycetes. Sequence comparisons suggest that the crypton YRs are related to those of the YR retrotransposons. We suggest that the YR retrotransposons arose from the combination of a crypton-like YR DNA transposon and the RT/RH encoding sequence of a retrotransposon. This acquisition must have occurred at a very early point in the evolution of eukaryotes. PMID:26104693

  20. Identification of a Virulence-Associated Determinant, Dihydrolipoamide Dehydrogenase (lpd), in Mycoplasma gallisepticum through In Vivo Screening of Transposon Mutants

    PubMed Central

    Hudson, P.; Gorton, T. S.; Papazisi, L.; Cecchini, K.; Frasca, S.; Geary, S. J.

    2006-01-01

    To effectively analyze Mycoplasma gallisepticum for virulence-associated determinants, the ability to create stable genetic mutations is essential. Global M. gallisepticum mutagenesis is currently limited to the use of transposons. Using the gram-positive transposon Tn4001mod, a mutant library of 110 transformants was constructed and all insertion sites were mapped. To identify transposon insertion points, a unique primer directed outward from the end of Tn4001mod was used to sequence flanking genomic regions. By comparing sequences obtained in this manner to the annotated M. gallisepticum genome, the precise locations of transposon insertions were discerned. After determining the transposon insertion site for each mutant, unique reverse primers were synthesized based on the specific sequences, and PCR was performed. The resultant amplicons were used as unique Tn4001mod mutant identifiers. This procedure is referred to as signature sequence mutagenesis (SSM). SSM permits the comprehensive screening of the M. gallisepticum genome for the identification of novel virulence-associated determinants from a mixed mutant population. To this end, chickens were challenged with a pool of 27 unique Tn4001mod mutants. Two weeks postinfection, the birds were sacrificed, and organisms were recovered from respiratory tract tissues and screened for the presence or absence of various mutants. SSM is a negative-selection screening technique whereby those mutants possessing transposon insertions in genes essential for in vivo survival are not recovered from the host. We have identified a virulence-associated gene encoding dihydrolipoamide dehydrogenase (lpd). A transposon insertion in the middle of the coding sequence resulted in diminished biologic function and reduced virulence of the mutant designated Mg 7. PMID:16428737
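The negative-selection logic of SSM reduces to a set difference over mutant signatures: mutants present in the input pool but absent among recovered organisms point to genes needed for in vivo survival. Apart from lpd, which the abstract names, the gene identifiers below are hypothetical placeholders:

```python
# Each mutant is identified by the gene its transposon disrupted;
# the unique "signature" PCR amplicon stands in for this identifier.
input_pool = {"lpd", "geneA", "geneB", "geneC", "geneD"}  # hypothetical except lpd
recovered = {"geneA", "geneC", "geneD"}                   # re-isolated post-infection

# Negative selection: mutants missing after passage carry insertions in
# candidate genes required for in vivo survival
candidates = input_pool - recovered
print(sorted(candidates))   # ['geneB', 'lpd']
```

In practice each signature is scored by PCR presence/absence across the recovered population rather than by exact set membership, but the screening principle is the same.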

  1. Transposon tagging of disease resistance genes. Progress report, May 1, 1988--1992

    SciTech Connect

    Michelmore, R.

    1994-06-01

    Our goal is to clone genes in lettuce determining resistance to downy mildew. One approach involves the mobilization of transposons into resistance genes to mutate and tag the target gene. Because transposons have yet to be isolated and characterized from lettuce, the majority of our experiments have involved Ac from corn as this is increasingly the best characterized transposon. Over the past several years, various labs have contributed to a detailed understanding of the biology of Ac in corn and heterologous plant species. We have collaborated closely with several of these labs, exchanged materials and incorporated their advances into our analysis of transposition in lettuce. The original proposal described the development of a transposon mutagenesis system for lettuce and its subsequent use to tag disease resistance genes. The development phase involved characterization and manipulation of Ac transposition, identification of suitable whole plant selectable markers for the construction of chimeric non-autonomous elements, and investigation of the stability of resistance genes. Investigation of Ac transposition in lettuce has received the majority of our attention. Initially, we made a simple construct with wildtype Ac and introduced it into lettuce. No transposition was observed; although other labs demonstrated that the same construct was functional in tomato. We then focused on assaying for Ac transposition with constructs of increasing sophistication that had been demonstrated by others to be functional in other species. The latest constructs for transposon mutagenesis clearly demonstrated transposition in lettuce. This allowed us to generate seed stocks that we will start to screen for insertional inactivation of resistance genes this year.

  2. Regulation of the Mutator system of transposons in maize.

    PubMed

    Lisch, Damon

    2013-01-01

    The Mutator system has proved to be an invaluable tool for elucidating gene function via insertional mutagenesis. Its high copy number, high transposition frequency, relative lack of insertion specificity, and ease of use has made it the preferred method for gene tagging in maize. Recent advances in high throughput sequencing of insertion sites, combined with the availability of large numbers of pre-mutagenized and sequence-indexed stocks, ensure that this resource will only be more useful in the years ahead. Muk is a locus that can silence Mu-active lines, making it possible to ameliorate the phenotypic effects of high numbers of active Mu transposons and reduce the copy number of these elements during introgressions.

  3. Safeguards instruments for Large-Scale Reprocessing Plants

    SciTech Connect

    Hakkila, E.A.; Case, R.S.; Sonnier, C.

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  4. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  5. New derivatives of transposon Tn5 suitable for mobilization of replicons, generation of operon fusions and induction of genes in gram-negative bacteria.

    PubMed

    Simon, R; Quandt, J; Klipp, W

    1989-08-01

    Three types of new variants of the broad-host-range transposon Tn5 are described. (i) Tn5-mob derivatives with the new selective resistance (R) markers GmR, SpR and TcR facilitate the efficient mobilization of replicons within a wide range of Gram-negative bacteria. (ii) Promoter probe transposons carry the promoterless reporter genes lacZ, nptII, or luc, and NmR, GmR or TcR as selective markers. These transposons can be used to generate transcriptional fusions upon insertion, thus facilitating accurate determinations of gene expression. (iii) Tn5-P-out derivatives carry the npt- or tac-promoter reading out from the transposon, and TcR, NmR or GmR genes. These variants allow the constitutive expression of downstream genes. The new Tn5 variants are available on mobilizable Escherichia coli vectors suitable as suicidal carriers for transposon mutagenesis of non-E. coli recipients and some on a phage lambda mutant to be used for transposon mutagenesis in E. coli. PMID:2551782

  6. Chemical Mutagens, Transposons, and Transgenes to Interrogate Gene Function in Drosophila melanogaster

    PubMed Central

    Venken, Koen J.T.; Bellen, Hugo J.

    2014-01-01

    The study of genetics, genes, and chromosomal inheritance was initiated by Thomas Morgan in 1910, when the first visible mutations were identified in fruit flies. The field expanded upon the work initiated by Herman Muller in 1926 when he used X-rays to develop the first balancer chromosomes. Today, balancers are still invaluable to maintain mutations and transgenes, but the arsenal of tools has expanded vastly and numerous new methods have been developed, many relying on the availability of the genome sequence and transposable elements. Forward genetic screens based on chemical mutagenesis or transposable elements have resulted in the unbiased identification of many novel players involved in processes probed by specific phenotypic assays. Reverse genetic approaches have relied on the availability of a carefully selected set of transposon insertions spread throughout the genome to allow the manipulation of the region in the vicinity of each insertion. Lastly, the ability to transform Drosophila with single-copy transgenes using transposons or site-specific integration using the ΦC31 integrase has enabled numerous manipulations, including the creation and integration of genomic rescue constructs, the generation of duplications, RNAi knockdown technology, and binary expression systems such as the GAL4/UAS system, among other methods. Here, we will discuss the most useful methodologies to interrogate the fruit fly genome in vivo, focusing on chemical mutagenesis, transposons, and transgenes. Genome engineering approaches based on nucleases and RNAi technology are discussed in the following chapters. PMID:24583113

  7. Generation of an inducible and optimized piggyBac transposon system†

    PubMed Central

    Cadiñanos, Juan; Bradley, Allan

    2007-01-01

    Genomic studies in the mouse have been slowed by the lack of transposon-mediated mutagenesis. However, since the resurrection of Sleeping Beauty (SB), the possibility of performing forward genetics in mice has been reinforced. Recently, piggyBac (PB), a functional transposon from insects, was also described to work in mammals. As the activity of PB is higher than that of SB11 and SB12, two hyperactive SB transposases, we have characterized and improved the PB system in mouse ES cells. We have generated a mouse codon-optimized version of the PB transposase coding sequence (CDS) which provides transposition levels greater than the original. We have also found that the promoter sequence predicted in the 5′-terminal repeat of the PB transposon is active in the mammalian context. Finally, we have engineered inducible versions of the optimized piggyBac transposase fused with ERT2. One of them, when induced, provides higher levels of transposition than the native piggyBac CDS, whereas in the absence of induction its activity is indistinguishable from background. We expect that these tools, adaptable to perform mouse-germline mutagenesis, will facilitate the identification of genes involved in pathological and physiological processes, such as cancer or ES cell differentiation. PMID:17576687

  8. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Only a few scientific publications have addressed large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for large-scale landslide distribution is also derived. The equation is validated by applying it to another area, where the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699 and the predicted probability explains >65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
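    The derived equation is a logistic regression, so the mapped distribution probability has the form p = 1 / (1 + e^(-z)), where z is a linear combination of geomorphological/geological predictors. The sketch below illustrates that form only; the predictor set and coefficients are hypothetical placeholders, not the values fitted in the study.

```python
import math

def landslide_probability(slope_deg, relief_m, weak_lithology):
    """Logistic-regression distribution probability p = 1/(1 + e^-z).
    The predictors and coefficients below are illustrative placeholders,
    not the study's fitted values."""
    b0, b1, b2, b3 = -4.0, 0.06, 0.002, 0.9  # hypothetical coefficients
    z = b0 + b1 * slope_deg + b2 * relief_m + b3 * weak_lithology
    return 1.0 / (1.0 + math.exp(-z))

# Steeper slopes with higher local relief map to higher probabilities.
p_gentle = landslide_probability(15, 300, 0)
p_steep = landslide_probability(35, 900, 1)
```

    Validation then amounts to thresholding such probabilities over a new area and comparing against mapped landslides, for example via the area under the receiver operating characteristic curve.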

  9. Fungicide-induced transposon movement in Monilinia fructicola.

    PubMed

    Chen, Fengping; Everhart, Sydney E; Bryson, P Karen; Luo, Chaoxi; Song, Xi; Liu, Xili; Schnabel, Guido

    2015-12-01

    Repeated applications of fungicides with a single mode of action are believed to select for pre-existing resistant strains in a pathogen population, while the impact of sub-lethal doses of such fungicides on sensitive members of the population is unknown. In this study, in vitro evidence is presented that continuous exposure of Monilinia fructicola mycelium to some fungicides can induce genetic change in the form of transposon transposition. Three fungicide-sensitive M. fructicola isolates were exposed in 12 weekly transfers of mycelia to a dose gradient of the demethylation inhibitor fungicide (DMI) SYP-Z048 and the quinone outside inhibitor fungicide (QoI) azoxystrobin in solo or mixture treatments. Evidence of mutagenesis was assessed by monitoring Mftc1, a multicopy transposable element of M. fructicola, by PCR and Southern blot analysis. Movement of Mftc1 was observed following azoxystrobin and azoxystrobin plus SYP-Z048 treatments in two of the three isolates, but not in the non-fungicide-treated controls. Interestingly, the upstream promoter region of MfCYP51 was a prime target for Mftc1 transposition in these isolates. Transposition of Mftc1 was verified by Southern blot in two of three isolates from another, similar experiment following prolonged, sublethal azoxystrobin exposure, although in these isolates movement of Mftc1 into the upstream MfCYP51 promoter region was not observed. More research is warranted to determine whether fungicide-induced mutagenesis may also happen under field conditions. PMID:26537535

  10. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986

  11. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  13. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  14. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
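    The odd-even (parity) asymmetry in the power spectra can be quantified by comparing power in odd versus even low multipoles. Below is a minimal sketch under the common l(l+1)C_l/2π weighting; the exact statistic and multipole range vary between analyses, so treat this as illustrative.

```python
import math

def parity_ratio(cl, lmin=2, lmax=30):
    """Odd-to-even power ratio over low multipoles, with each C_l
    weighted by l(l+1)/(2*pi).  A ratio well above 1 indicates an
    odd-multipole excess."""
    def band(parity):
        return sum(l * (l + 1) * cl[l] / (2 * math.pi)
                   for l in range(lmin, lmax + 1) if l % 2 == parity)
    return band(1) / band(0)

# A spectrum C_l = 1/(l(l+1)) weights every multipole equally, so the
# ratio reduces to counting multipoles (14 odd vs 15 even in 2..30).
flat = {l: 1.0 / (l * (l + 1)) for l in range(2, 31)}
ratio = parity_ratio(flat)
```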

  15. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  16. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered in computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems, including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high-frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.

  17. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Latim, Jonathan; Bhatt, Shibani

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily focusing on canonical flows: zero-pressure-gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.
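    The scale decomposition described above can be illustrated with a simple moving-average filter: fluctuations smoother than roughly one integral length scale are assigned to the large-scale component and the residual to the small-scale component. A minimal sketch of the idea; the study's actual filter may differ (e.g., a sharp spectral cutoff at δ).

```python
import math

def decompose(u, window):
    """Split a fluctuating signal into a large-scale part (moving
    average over `window` samples, standing in for one integral
    length scale) and a small-scale residual."""
    n, half = len(u), window // 2
    large = []
    for i in range(n):
        seg = u[max(0, i - half):min(n, i + half + 1)]
        large.append(sum(seg) / len(seg))
    small = [ui - li for ui, li in zip(u, large)]
    return large, small

# Synthetic signal: a slow wave (large scale) plus a fast ripple.
u = [math.sin(2 * math.pi * i / 64) + 0.2 * math.sin(2 * math.pi * i / 4)
     for i in range(256)]
large, small = decompose(u, window=9)
```

    The two components sum back to the original signal, so their variances can be compared directly, as in the intensity decomposition above.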

  18. Natural mutagenesis of human genomes by endogenous retrotransposons

    PubMed Central

    Iskow, Rebecca C.; McCabe, Michael T.; Mills, Ryan E.; Torene, Spencer; Pittard, W. Stephen; Neuwald, Andrew F.; Van Meir, Erwin G.; Vertino, Paula M.; Devine, Scott E.

    2010-01-01

    Two abundant classes of mobile elements, namely Alu and L1 elements, continue to generate new retrotransposon insertions in human genomes. Estimates suggest that these elements have generated millions of new germline insertions in individual human genomes worldwide. Unfortunately, current technologies are not capable of detecting most of these young insertions, and the true extent of germline mutagenesis by endogenous human retrotransposons has been difficult to examine. Here, we describe new technologies for detecting these young retrotransposon insertions and demonstrate that such insertions indeed are abundant in human populations. We also found that new somatic L1 insertions occur at high frequencies in human lung cancer genomes. Genome-wide analysis suggests that altered DNA methylation may be responsible for the high levels of L1 mobilization observed in these tumors. Our data indicate that transposon-mediated mutagenesis is extensive in human genomes, and is likely to have a major impact on human biology and diseases. PMID:20603005

  19. Large-scale filament formation inhibits the activity of CTP synthetase

    PubMed Central

    Barry, Rachael M; Bitbol, Anne-Florence; Lorestani, Alexander; Charles, Emeric J; Habrian, Chris H; Hansen, Jesse M; Li, Hsin-Jung; Baldwin, Enoch P; Wingreen, Ned S; Kollman, Justin M; Gitai, Zemer

    2014-01-01

    CTP Synthetase (CtpS) is a universally conserved and essential metabolic enzyme. While many enzymes form small oligomers, CtpS forms large-scale filamentous structures of unknown function in prokaryotes and eukaryotes. By simultaneously monitoring CtpS polymerization and enzymatic activity, we show that polymerization inhibits activity, and CtpS's product, CTP, induces assembly. To understand how assembly inhibits activity, we used electron microscopy to define the structure of CtpS polymers. This structure suggests that polymerization sterically hinders a conformational change necessary for CtpS activity. Structure-guided mutagenesis and mathematical modeling further indicate that coupling activity to polymerization promotes cooperative catalytic regulation. This previously uncharacterized regulatory mechanism is important for cellular function since a mutant that disrupts CtpS polymerization disrupts E. coli growth and metabolic regulation without reducing CTP levels. We propose that regulation by large-scale polymerization enables ultrasensitive control of enzymatic activity while storing an enzyme subpopulation in a conformationally restricted form that is readily activatable. DOI: http://dx.doi.org/10.7554/eLife.03638.001 PMID:25030911

  20. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  1. Rapid quantification of mutant fitness in diverse bacteria by sequencing randomly bar-coded transposons

    SciTech Connect

    Wetmore, Kelly M.; Price, Morgan N.; Waters, Robert J.; Lamson, Jacob S.; He, Jennifer; Hoover, Cindi A.; Blow, Matthew J.; Bristow, James; Butland, Gareth; Arkin, Adam P.; Deutschbauer, Adam

    2015-05-12

    Transposon mutagenesis with next-generation sequencing (TnSeq) is a powerful approach to annotate gene function in bacteria, but existing protocols for TnSeq require laborious preparation of every sample before sequencing. Thus, the existing protocols are not amenable to the throughput necessary to identify phenotypes and functions for the majority of genes in diverse bacteria. Here, we present a method, random bar code transposon-site sequencing (RB-TnSeq), which increases the throughput of mutant fitness profiling by incorporating random DNA bar codes into Tn5 and mariner transposons and by using bar code sequencing (BarSeq) to assay mutant fitness. RB-TnSeq can be used with any transposon, and TnSeq is performed once per organism instead of once per sample. Each BarSeq assay requires only a simple PCR, and 48 to 96 samples can be sequenced on one lane of an Illumina HiSeq system. We demonstrate the reproducibility and biological significance of RB-TnSeq with Escherichia coli, Phaeobacter inhibens, Pseudomonas stutzeri, Shewanella amazonensis, and Shewanella oneidensis. To demonstrate the increased throughput of RB-TnSeq, we performed 387 successful genome-wide mutant fitness assays representing 130 different bacterium-carbon source combinations and identified 5,196 genes with significant phenotypes across the five bacteria. In P. inhibens, we used our mutant fitness data to identify genes important for the utilization of diverse carbon substrates, including a putative D-mannose isomerase that is required for mannitol catabolism. RB-TnSeq will enable the cost-effective functional annotation of diverse bacteria using mutant fitness profiling. A large challenge in microbiology is the functional assessment of the millions of uncharacterized genes identified by genome sequencing. Transposon mutagenesis coupled to next-generation sequencing (TnSeq) is a powerful approach to assign phenotypes and functions to genes.
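    At the heart of a BarSeq assay, strain fitness is a normalized log2 ratio of barcode counts between the end and start of a growth experiment. The sketch below illustrates that calculation with hypothetical barcodes and counts; the published RB-TnSeq pipeline adds per-gene aggregation and further normalization steps.

```python
import math

def strain_fitness(counts_start, counts_end, pseudo=0.5):
    """Per-strain fitness: log2((end + pseudo) / (start + pseudo)),
    median-centered so a typical neutral strain scores about 0.
    A simplified sketch of the BarSeq calculation."""
    raw = {bc: math.log2((counts_end.get(bc, 0) + pseudo)
                         / (counts_start.get(bc, 0) + pseudo))
           for bc in counts_start}
    median = sorted(raw.values())[len(raw) // 2]
    return {bc: r - median for bc, r in raw.items()}

start = {"AACG": 100, "TTGA": 100, "GCCT": 100}  # hypothetical barcodes
end = {"AACG": 200, "TTGA": 200, "GCCT": 25}     # GCCT strain drops out
fit = strain_fitness(start, end)  # GCCT scores strongly negative
```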

  3. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and for which time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  4. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  5. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  6. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  7. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is moist static energy, or gross moist stability. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud-resolving models, and these will in turn be related to assumptions used to parameterize convection in large-scale models.

  8. Human pescadillo induces large-scale chromatin unfolding.

    PubMed

    Zhang, Hao; Fang, Yan; Huang, Cuifen; Yang, Xiao; Ye, Qinong

    2005-06-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting the Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  9. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or are a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus as a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  10. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  11. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  12. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  13. Large-scale superfluid vortex rings at nonzero temperatures

    NASA Astrophysics Data System (ADS)

    Wacks, D. H.; Baggaley, A. W.; Barenghi, C. F.

    2014-12-01

    We numerically model experiments in which large-scale vortex rings—bundles of quantized vortex loops—are created in superfluid helium by a piston-cylinder arrangement. We show that the presence of a normal-fluid vortex ring together with the quantized vortices is essential to explain the coherence of these large-scale vortex structures at nonzero temperatures, as observed experimentally. Finally we argue that the interaction of superfluid and normal-fluid vortex bundles is relevant to recent investigations of superfluid turbulence.

  14. Genome-Wide Analysis of Transposon and Retroviral Insertions Reveals Preferential Integrations in Regions of DNA Flexibility

    PubMed Central

    Vrljicak, Pavle; Tao, Shijie; Varshney, Gaurav K.; Quach, Helen Ngoc Bao; Joshi, Adita; LaFave, Matthew C.; Burgess, Shawn M.; Sampath, Karuna

    2016-01-01

    DNA transposons and retroviruses are important transgenic tools for genome engineering. An important consideration affecting the choice of transgenic vector is their insertion site preferences. Previous large-scale analyses of Ds transposon integration sites in plants were done on the basis of reporter gene expression or germ-line transmission, making it difficult to discern vertebrate integration preferences. Here, we compare over 1300 Ds transposon integration sites in zebrafish with Tol2 transposon and retroviral integration sites. Genome-wide analysis shows that Ds integration sites in the presence or absence of marker selection are remarkably similar and distributed throughout the genome. No strict motif was found, but a preference for structural features in the target DNA associated with DNA flexibility (Twist, Tilt, Rise, Roll, Shift, and Slide) was observed. Remarkably, this feature is also found in transposon and retroviral integrations in maize and mouse cells. Our findings show that structural features influence the integration of heterologous DNA in genomes, and have implications for targeted genome engineering. PMID:26818075

  15. 2004 Mutagenesis Gordon Conference

    SciTech Connect

    Dr. Sue Jinks-Robertson

    2005-09-16

    Mutations are genetic alterations that drive biological evolution and cause many, if not all, human diseases. Mutation originates via two distinct mechanisms: "vertical" variation is de novo change of one or few bases, whereas "horizontal" variation occurs by genetic recombination, which creates new mosaics of pre-existing sequences. The Mutagenesis Conference has traditionally focused on the generation of mutagenic intermediates during normal DNA synthesis or in response to environmental insults, as well as the diverse repair mechanisms that prevent the fixation of such intermediates as permanent mutations. While the 2004 Conference will continue to focus on the molecular mechanisms of mutagenesis, there will be increased emphasis on the biological consequences of mutations, both in terms of evolutionary processes and in terms of human disease. The meeting will open with two historical accounts of mutation research that recapitulate the intellectual framework of this field and thereby place the current research paradigms into perspective. The two introductory keynote lectures will be followed by sessions on: (1) mutagenic systems, (2) hypermutable sequences, (3) mechanisms of mutation, (4) mutation avoidance systems, (5) mutation in human hereditary and infectious diseases, (6) mutation rates in evolution and genotype-phenotype relationships, (7) ecology, mutagenesis and the modeling of evolution and (8) genetic diversity of the human population and models for human mutagenesis. The Conference will end with a synthesis of the meeting as the keynote closing lecture.

  16. Computer Simulation of Mutagenesis.

    ERIC Educational Resources Information Center

    North, J. C.; Dent, M. T.

    1978-01-01

    A FORTRAN program is described which simulates point-substitution mutations in the DNA strands of typical organisms. Its objective is to help students to understand the significance and structure of the genetic code, and the mechanisms and effect of mutagenesis. (Author/BB)
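The FORTRAN program itself is not reproduced in this record. As an illustration of the underlying idea, a minimal Python sketch of a point-substitution mutation and its codon-level effect might look like the following; the codon table here is a small subset of the standard genetic code, chosen for brevity, and the function names are illustrative:

```python
# Minimal subset of the standard genetic code (DNA codons -> amino acids);
# a real simulation would use the full 64-codon table.
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "TTC": "Phe",
    "GAA": "Glu", "GAG": "Glu", "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def point_substitution(seq, pos, base):
    """Replace the nucleotide at `pos` with `base`, returning the mutant strand."""
    return seq[:pos] + base + seq[pos + 1:]

def classify(codon_before, codon_after):
    """Classify a codon change as silent, missense, or nonsense."""
    aa_before = CODON_TABLE.get(codon_before)
    aa_after = CODON_TABLE.get(codon_after)
    if aa_after == "Stop":
        return "nonsense"
    if aa_before == aa_after:
        return "silent"
    return "missense"

wild_type = "TTT"                                 # codes for Phe
mutant = point_substitution(wild_type, 2, "C")    # TTT -> TTC, still Phe
print(classify(wild_type, mutant))                # silent
```

A third-position change such as TTT to TTC leaves the amino acid unchanged, which is exactly the kind of structural insight into the genetic code the simulation is meant to convey.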

  17. The Mutagenesis Assistant Program.

    PubMed

    Verma, Rajni; Wong, Tuck Seng; Schwaneberg, Ulrich; Roccatano, Danilo

    2014-01-01

    Mutagenesis Assistant Program (MAP) is a web-based statistical tool to develop directed evolution strategies by investigating the consequences at the amino acid level of the mutational biases of random mutagenesis methods on any given gene. The latest development of the program, the MAP(2.0)3D server, correlates the generated amino acid substitution patterns of a specific random mutagenesis method to the sequence and structural information of the target protein. The combined information can be used to select an experimental strategy that improves the chances of obtaining functionally efficient and/or stable enzyme variants. Hence, the MAP(2.0)3D server facilitates the "in silico" prescreening of the target gene by predicting the amino acid diversity generated in a random mutagenesis library. Here, we describe the features of MAP(2.0)3D server by analyzing, as an example, the cytochrome P450BM3 monooxygenase (CYP102A1). The MAP(2.0)3D server is available publicly at http://map.jacobs-university.de/map3d.html.

  18. Systematic Mutagenesis of the Escherichia coli Genome†

    PubMed Central

    Kang, Yisheng; Durfee, Tim; Glasner, Jeremy D.; Qiu, Yu; Frisch, David; Winterberg, Kelly M.; Blattner, Frederick R.

    2004-01-01

    A high-throughput method has been developed for the systematic mutagenesis of the Escherichia coli genome. The system is based on in vitro transposition of a modified Tn5 element, the Sce-poson, into linear fragments of each open reading frame. The transposon introduces both positive (kanamycin resistance) and negative (I-SceI recognition site) selectable markers for isolation of mutants and subsequent allele replacement, respectively. Reaction products are then introduced into the genome by homologous recombination via the λRed proteins. The method has yielded insertion alleles for 1976 genes during a first pass through the genome including, unexpectedly, a number of known and putative essential genes. Sce-poson insertions can be easily replaced by markerless mutations by using the I-SceI homing endonuclease to select against retention of the transposon as demonstrated by the substitution of amber and/or in-frame deletions in six different genes. This allows a Sce-poson-containing gene to be specifically targeted for either designed or random modifications, as well as permitting the stepwise engineering of strains with multiple mutations. The promiscuous nature of Tn5 transposition also enables a targeted gene to be dissected by using randomly inserted Sce-posons as shown by a lacZ allelic series. Finally, assessment of the insertion sites by an iterative weighted matrix algorithm reveals that these hyperactive Tn5 complexes generally recognize a highly degenerate asymmetric motif on one end of the target site helping to explain the randomness of Tn5 transposition. PMID:15262929
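The iterative weighted matrix algorithm used to assess the Tn5 insertion sites is not detailed in this record. A hedged, non-iterative sketch of its basic ingredient, a position-frequency matrix over aligned target-site sequences, could look like this; the sequences are hypothetical toy data, not from the paper:

```python
from collections import Counter

def position_frequency_matrix(sites):
    """Column-wise base frequencies for a set of aligned target-site sequences."""
    length = len(sites[0])
    assert all(len(s) == length for s in sites)
    matrix = []
    for col in range(length):
        counts = Counter(site[col] for site in sites)
        total = sum(counts.values())
        matrix.append({base: counts.get(base, 0) / total for base in "ACGT"})
    return matrix

# Toy aligned 9-bp target sites (hypothetical data for illustration only).
sites = ["AGATCTGAT", "AGTTCAGAT", "ACATCTTAT", "AGATGTGAT"]
pfm = position_frequency_matrix(sites)
print(pfm[0])  # position 1 is always 'A' in this toy set -> {'A': 1.0, ...}
```

A strong preference at a position would show up as one base dominating its column, whereas a highly degenerate motif, as reported for Tn5, would show near-uniform frequencies.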

  19. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  20. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  1. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  2. Potential and issues in large scale flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Di Baldassarre, Giuliano; Brandimarte, Luigia; Dottori, Francesco; Mazzoleni, Maurizio; Yan, Kun

    2015-04-01

    Recent years have seen growing research interest in large-scale flood inundation modelling. Modelling tools and datasets now allow flooding processes to be analyzed at regional, continental and even global scales with an increasing level of detail. As a result, several research works have already addressed this topic using methodologies of varying complexity. The potential of these studies is enormous. Large-scale flood inundation modelling can provide valuable information in areas where little information and few studies were previously available. It can provide a consistent framework for a comprehensive assessment of flooding processes in the basins of the world's large rivers, as well as of the impacts of future climate scenarios. To make the most of this potential, we believe it is necessary, on the one hand, to understand the strengths and limitations of the existing methodologies, and on the other hand, to discuss the possibilities and implications of using large-scale flood models for operational flood risk assessment and management. Where should researchers put their effort in order to develop useful and reliable methodologies and outcomes? How can the information coming from large-scale flood inundation studies be used by stakeholders? And how should this information be used where previous higher-resolution studies exist, or where official studies are available?

  3. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.

  4. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  5. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  6. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  7. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  8. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney–Hasegawa–Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3|k_x|} and M := {k : |k_y| < √3|k_x|}, respectively, where k = (k_x, k_y) lies in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft's argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.
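As an illustrative sketch (not the authors' code), the sector definitions and the allowed triad types stated in the abstract can be written directly; the boundary case |k_y| = √3|k_x| is assigned to M here purely for simplicity:

```python
import math

SQRT3 = math.sqrt(3.0)

def sector(kx, ky):
    """Classify a wavevector as zonal (Z) or meridional (M): Z iff |k_y| > sqrt(3)|k_x|."""
    return "Z" if abs(ky) > SQRT3 * abs(kx) else "M"

def triad_allowed(s1, s2, s3):
    """Allowed resonant triad types per the abstract: M <-> M + Z and Z <-> Z + Z."""
    kinds = sorted([s1, s2, s3])
    return kinds in (["M", "M", "Z"], ["Z", "Z", "Z"])

print(sector(1.0, 0.1))   # mostly along k_x -> "M"
print(sector(0.1, 1.0))   # mostly along k_y -> "Z"
print(triad_allowed("M", "M", "M"))  # False: such a triad is forbidden
```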

  9. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are required. In recent years we have studied the concept of the Moon as an Earth-observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can acquire multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, ocean change, land-surface dynamic change, and solid-Earth dynamic change. To establish a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  11. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  12. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  13. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  14. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  15. Large-scale data analysis using the Wigner function

    NASA Astrophysics Data System (ADS)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.

  16. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  17. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable. PMID:25162863

  18. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.

  19. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  20. Simulation and Analysis of Large-Scale Compton Imaging Detectors

    SciTech Connect

    Manini, H A; Lange, D J; Wright, D M

    2006-12-27

    We perform simulations of two types of large-scale Compton imaging detectors. The first type uses silicon and germanium detector crystals, and the second type uses silicon and CdZnTe (CZT) detector crystals. The simulations use realistic detector geometry and parameters. We analyze the performance of each type of detector, and we present results using receiver operating characteristics (ROC) curves.

  1. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent) 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms and procedures) and the experience needed for large-scale city digital orthophoto creation is essential for the near-future national deployment of large-scale digital orthophotos and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  2. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  3. CACHE Guidelines for Large-Scale Computer Programs.

    ERIC Educational Resources Information Center

    National Academy of Engineering, Washington, DC. Commission on Education.

    The Computer Aids for Chemical Engineering Education (CACHE) guidelines identify desirable features of large-scale computer programs including running cost and running-time limit. Also discussed are programming standards, documentation, program installation, system requirements, program testing, and program distribution. Lists of types of…

  4. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.

  5. The Role of Plausible Values in Large-Scale Surveys

    ERIC Educational Resources Information Center

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…
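The abstract does not spell out the combining formulas. The standard multiple-imputation (Rubin) rules commonly applied to plausible values can be sketched as follows; the function name and example values are illustrative, not taken from the article:

```python
def combine_plausible_values(estimates, variances):
    """Combine per-plausible-value results via Rubin's multiple-imputation rules.

    estimates: the statistic of interest computed separately with each plausible value
    variances: the corresponding sampling variances
    """
    m = len(estimates)
    qbar = sum(estimates) / m                               # combined point estimate
    ubar = sum(variances) / m                               # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b                      # total variance
    return qbar, total_var

# Five plausible values for a mean achievement score (illustrative numbers).
est, var = combine_plausible_values([500.0, 502.0, 498.0, 501.0, 499.0],
                                    [4.0, 4.2, 3.9, 4.1, 4.0])
print(round(est, 1), round(var, 2))  # 500.0 7.04
```

The between-imputation term is what captures the measurement uncertainty in the latent achievement; analyzing only one plausible value, or the average of them, would understate the total variance.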

  6. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  7. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  8. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  9. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  10. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  11. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  12. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  13. Plant transposons: contributors to evolution?

    PubMed

    Lönnig, W E; Saedler, H

    1997-12-31

    A spectrum of different hypotheses has been presented by various authors, from plant transposable elements as major agents in evolution to the very opposite, transposons as mainly selfish DNA constituting a genetic burden for the organisms. The following review will focus on: (1) a short survey of the two main different assessments of transposable elements (TEs) concerning the origin of species (selfish vs useful DNA); (2) the significance of the hierarchy of gene functions and redundancies for TE activities (selfish in non-redundant parts of the genome, but as a source of variability in the rest); (3) the relevance of the results of TE research in Zea mays and Antirrhinum majus for species formation in the wild (contrast between artificial and natural selection); (4) three areas of research where a synthesis between the two different evaluations of TEs seems possible: regressive evolution, the origin of ecotypes and the origin of cultivated plants; and (5) some possible prospects regarding TE-induced species formation in the angiosperms in general, i.e., the basic difference between systematic and genetic species concepts and the conceivable origin of a large part of angiosperm morphospecies owing to loss of function and further mutations by TE activities. PMID:9461398

  14. Site-directed mutagenesis.

    PubMed

    Bachman, Julia

    2013-01-01

    Site-directed mutagenesis is a PCR-based method to mutate specified nucleotides of a sequence within a plasmid vector. This technique allows one to study the relative importance of a particular amino acid for protein structure and function. Typical mutations are designed to disrupt or map protein-protein interactions, mimic or block posttranslational modifications, or to silence enzymatic activity. Alternatively, noncoding changes are often used to generate rescue constructs that are resistant to knockdown via RNAi.

  15. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

Because real-time water conditions are complex, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  16. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
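The low-rank idea behind the prototypes can be illustrated with a Nyström-style approximation, where a small set of landmark points stands in for the full kernel matrix. This is a generic sketch of the approximation principle, not the PVM algorithm itself; the data, kernel width, and prototype count are all invented for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # full data set (n = 500)
P = X[rng.choice(500, size=20, replace=False)]   # m = 20 "prototype" points

K_np = rbf_kernel(X, P)   # n x m cross-kernel
K_pp = rbf_kernel(P, P)   # m x m prototype kernel
# Rank-m approximation of the full n x n kernel: K ~ K_np K_pp^+ K_np^T.
K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T
```

Storing and manipulating only K_np and K_pp costs O(nm) memory rather than the O(n^2) of the full kernel, which is the kind of saving that makes graph-based methods scale.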

  17. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  18. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed is a factor of two faster in solid phase xenon than in the liquid in a large scale solid xenon.
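The quoted speeds imply the following drift times across the 8.0 cm region, a quick back-of-the-envelope check using only numbers stated in the abstract:

```python
# Drift-time check from the reported drift speeds (abstract values).
v_liquid = 0.193    # cm/us, liquid xenon at 163 K
v_solid  = 0.397    # cm/us, solid xenon at 157 K
drift_length = 8.0  # cm of uniform 900 V/cm field

t_liquid = drift_length / v_liquid   # ~41.5 us to cross in the liquid
t_solid  = drift_length / v_solid    # ~20.2 us to cross in the solid
ratio = v_solid / v_liquid           # ~2.06, the quoted "factor two"
```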

  20. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  1. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  2. Lagrangian space consistency relation for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lh399@columbia.edu

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  3. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into the online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  4. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  5. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    The overall objective of the work that was conducted was to understand the present-day large-scale deformations of the crust throughout the western United States and in so doing to improve our ability to assess the potential for seismic hazards in this region. To address this problem, we used a large collection of Global Positioning System (GPS) networks which spans the region to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our results can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  6. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  7. Startup of large-scale projects casts spotlight on IGCC

    SciTech Connect

    Swanekamp, R.

    1996-06-01

With several large-scale plants cranking up this year, integrated coal gasification/combined cycle (IGCC) appears poised for growth. The technology may eventually help coal reclaim its former prominence in new plant construction, but developers worldwide are eyeing other feedstocks--such as petroleum coke or residual oil. Of the so-called advanced clean-coal technologies, IGCC appears to be having a defining year. Of three large-scale demonstration plants in the US, one is well into startup, a second is expected to begin operating in the fall, and a third should start up by the end of the year; worldwide, over a dozen more projects are in the works. In Italy, for example, several large projects using petroleum coke or refinery residues as feedstocks are proceeding, apparently on a project-finance basis.

  8. Considerations of large scale impact and the early Earth

    NASA Technical Reports Server (NTRS)

    Grieve, R. A. F.; Parmentier, E. M.

    1985-01-01

Bodies which have preserved portions of their earliest crust indicate that large scale impact cratering was an important process in early surface and upper crustal evolution. Large impact basins form the basic topographic, tectonic, and stratigraphic framework of the Moon and impact was responsible for the characteristics of the second order gravity field and upper crustal seismic properties. The Earth's crustal evolution during the first 800 my of its history is conjectural. The lack of a very early crust may indicate that thermal and mechanical instabilities resulting from intense mantle convection and/or bombardment inhibited crustal preservation. Whatever the case, the potential effects of large scale impact have to be considered in models of early Earth evolution. Preliminary models of the evolution of a large terrestrial impact basin were derived and are discussed in detail.

  9. Genetic Signature of Histiocytic Sarcoma Revealed by a Sleeping Beauty Transposon Genetic Screen in Mice

    PubMed Central

    Been, Raha A.; Linden, Michael A.; Hager, Courtney J.; DeCoursin, Krista J.; Abrahante, Juan E.; Landman, Sean R.; Steinbach, Michael; Sarver, Aaron L.; Largaespada, David A.; Starr, Timothy K.

    2014-01-01

    Histiocytic sarcoma is a rare, aggressive neoplasm that responds poorly to therapy. Histiocytic sarcoma is thought to arise from macrophage precursor cells via genetic changes that are largely undefined. To improve our understanding of the etiology of histiocytic sarcoma we conducted a forward genetic screen in mice using the Sleeping Beauty transposon as a mutagen to identify genetic drivers of histiocytic sarcoma. Sleeping Beauty mutagenesis was targeted to myeloid lineage cells using the Lysozyme2 promoter. Mice with activated Sleeping Beauty mutagenesis had significantly shortened lifespan and the majority of these mice developed tumors resembling human histiocytic sarcoma. Analysis of transposon insertions identified 27 common insertion sites containing 28 candidate cancer genes. Several of these genes are known drivers of hematological neoplasms, like Raf1, Fli1, and Mitf, while others are well-known cancer genes, including Nf1, Myc, Jak2, and Pten. Importantly, several new potential drivers of histiocytic sarcoma were identified and could serve as targets for therapy for histiocytic sarcoma patients. PMID:24827933

  10. The large-scale anisotropy with the PAMELA calorimeter

    NASA Astrophysics Data System (ADS)

    Karelin, A.; Adriani, O.; Barbarino, G.; Bazilevskaya, G.; Bellotti, R.; Boezio, M.; Bogomolov, E.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carbone, R.; Carlson, P.; Casolino, M.; Castellini, G.; De Donato, C.; De Santis, C.; De Simone, N.; Di Felice, V.; Formato, V.; Galper, A.; Koldashov, S.; Koldobskiy, S.; Krut'kov, S.; Kvashnin, A.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A.; Menn, W.; Mergé, M.; Mikhailov, V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Palma, F.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S.; Sarkar, R.; Simon, M.; Scotti, V.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S.; Yurkin, Y.; Zampa, G.; Zampa, N.

    2015-10-01

The large-scale anisotropy (or the so-called star-diurnal wave) has been studied using the calorimeter of the space-borne experiment PAMELA. The cosmic ray anisotropy has been obtained for the Southern and Northern hemispheres simultaneously in the equatorial coordinate system for the time period 2006-2014. The dipole amplitude and phase have been measured for energies of 1-20 TeV per nucleon.
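A dipole amplitude and phase of this kind are commonly extracted as the first harmonic of an event-rate distribution in right ascension. The following is a generic sketch of that standard harmonic fit, not the PAMELA analysis code; the bin count and injected dipole are invented for the example.

```python
import math

def first_harmonic(rates):
    """Fit rate(alpha) ~ r0 * (1 + A*cos(alpha - phase)) over len(rates)
    equal-width right-ascension bins; return (A, phase in radians)."""
    n = len(rates)
    r0 = sum(rates) / n
    a = 2 / n * sum(r * math.cos(2 * math.pi * k / n) for k, r in enumerate(rates))
    b = 2 / n * sum(r * math.sin(2 * math.pi * k / n) for k, r in enumerate(rates))
    return math.hypot(a, b) / r0, math.atan2(b, a)

# Synthetic 24-bin rate map with a 1% dipole at phase 1.0 rad:
rates = [100 * (1 + 0.01 * math.cos(2 * math.pi * k / 24 - 1.0)) for k in range(24)]
amp, phase = first_harmonic(rates)   # recovers amp ~ 0.01, phase ~ 1.0
```

The recovery is exact here because the discrete cosine and sine bases are orthogonal over a full period of equal-width bins; with real counts, Poisson fluctuations set the uncertainty on the amplitude and phase.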

  11. Report on large scale molten core/magnesia interaction test

    SciTech Connect

    Chu, T.Y.; Bentz, J.H.; Arellano, F.E.; Brockmann, J.E.; Field, M.E.; Fish, J.D.

    1984-08-01

A molten core/material interaction experiment was performed at the Large-Scale Melt Facility at Sandia National Laboratories. The experiment involved the release of 230 kg of core melt, heated to 2923 K, into a magnesia brick crucible. Descriptions of the facility, the melting technology, as well as results of the experiment, are presented. Preliminary evaluations of the results indicate that magnesia brick can be a suitable material for core ladle construction.

  12. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  13. Simulating Weak Lensing by Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vale, Chris; White, Martin

    2003-08-01

    We model weak gravitational lensing of light by large-scale structure using ray tracing through N-body simulations. The method is described with particular attention paid to numerical convergence. We investigate some of the key approximations in the multiplane ray-tracing algorithm. Our simulated shear and convergence maps are used to explore how well standard assumptions about weak lensing hold, especially near large peaks in the lensing signal.

  14. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards, of large LNG spills and fires.

  15. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  16. The Large-scale Structure of Scientific Method

    NASA Astrophysics Data System (ADS)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of scientific method can reveal the global interconnectedness of scientific knowledge that is an essential part of what makes science scientific.

  17. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  18. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  19. Multivariate Clustering of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
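The core step, grouping field variables whose sample vectors point in nearly the same direction, can be sketched as a greedy threshold clustering on cosine similarity. This is our minimal illustration of the idea, not the authors' implementation; the threshold value, data, and function names are invented.

```python
import numpy as np

def cosine_cluster(fields, threshold=0.95):
    """Greedily cluster the columns of `fields` (one column per field
    variable; spatial x, y, z and time variables already excluded): each
    unassigned variable seeds a cluster and absorbs every other unassigned
    variable whose cosine similarity to the seed exceeds `threshold`."""
    unit = fields / np.linalg.norm(fields, axis=0)  # unit-length columns
    n = unit.shape[1]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        sims = unit.T @ unit[:, i]  # cosine similarity of every column to seed i
        labels[(labels == -1) & (sims > threshold)] = cluster
        cluster += 1
    return labels

# Three proportional (hence cosine-identical) variables plus one unrelated:
rng = np.random.default_rng(1)
base = rng.random(100)
data = np.column_stack([base, 2.0 * base, 0.5 * base, rng.random(100)])
labels = cosine_cluster(data)   # first three share a label, last one differs
```

Note that cosine similarity ignores magnitude, so a variable and a scaled copy of it land in the same cluster, which matches the abstract's use of direction rather than amplitude to define "similar" field behavior.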

  20. Multivariate Clustering of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  1. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  2. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  3. Large-scale Alfvén vortices

    NASA Astrophysics Data System (ADS)

    Onishchenko, O. G.; Pokhotelov, O. A.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-01

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  4. Turbulent large-scale structure effects on wake meandering

    NASA Astrophysics Data System (ADS)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulation of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large (larger than the turbine diameter) turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This makes it possible to simulate large-scale turbulent structures (larger than the computational domain), leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While temporal resolution in experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low frequency content - or large-scale turbulent structures - is

  5. A Cloud Computing Platform for Large-Scale Forensic Computing

    NASA Astrophysics Data System (ADS)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.
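    The MapReduce processing model that MMR implements can be illustrated with a minimal serial sketch; MMR itself distributes the map and reduce phases over MPI ranks, and the names and the keyword-counting example below are assumptions, not MMR's API:

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, map_fn, reduce_fn):
    """Minimal serial illustration of the MapReduce processing model.

    map_fn(item) yields (key, value) pairs; reduce_fn(key, values)
    folds each group of values down to a single result.
    """
    # Map phase: apply map_fn to every input item.
    pairs = chain.from_iterable(map_fn(item) for item in inputs)
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: one reduced result per key.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Hypothetical use: keyword counting across forensic text fragments.
fragments = ["deleted file recovered", "file header intact", "deleted entry"]
counts = map_reduce(
    fragments,
    map_fn=lambda text: ((word, 1) for word in text.split()),
    reduce_fn=lambda word, ones: sum(ones),
)
```

    Indexing-style workloads fit this shape well, which is where the abstract reports super-linear scaling for the MPI-based implementation.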

  6. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  7. Ultra-high frequency ultrasound biomicroscopy and high throughput cardiovascular phenotyping in a large scale mouse mutagenesis screen

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqin; Francis, Richard; Tobita, Kimimasa; Kim, Andy; Leatherbury, Linda; Lo, Cecilia W.

    2013-02-01

    Ultrasound biomicroscopy (UBM) is ideally suited for phenotyping fetal mice for congenital heart disease (CHD), as imaging can be carried out noninvasively to provide both hemodynamic and structural information essential for CHD diagnosis. Using the UBM (Vevo 2100; 40 MHz) in conjunction with a clinical ultrasound system (Acuson Sequoia C512; 15 MHz), we developed a two-step screening protocol to scan thousands of fetuses derived from ENU-mutagenized pedigrees. A wide spectrum of CHD was detected by the UBM and subsequently confirmed with follow-up necropsy and histopathology examination with episcopic fluorescence image capture. CHD observed included outflow anomalies, left/right heart obstructive lesions, septal/valvular defects and cardiac situs anomalies. Meanwhile, various extracardiac defects were found, such as polydactyly, craniofacial defects, exencephaly and omphalocele-cleft palate, most of which were associated with cardiac defects. Our analyses showed that the UBM was better at assessing cardiac structure and blood flow profiles, while conventional ultrasound allowed higher-throughput, low-resolution screening. Our study showed that integrating conventional clinical ultrasound imaging with the UBM for fetal mouse cardiovascular phenotyping can maximize the detection and recovery of CHD mutants.

  8. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
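    A weighted multi-criteria score of the kind such a siting tool might compute can be sketched as follows; the min-max normalization, the function name, and the example criteria are illustrative assumptions, not the tool's actual algorithm:

```python
import numpy as np

def site_scores(criteria, weights):
    """Weighted-sum suitability score for candidate solar sites.

    criteria: (n_sites, n_criteria) array where larger raw values are
    better (e.g. solar resource, grid proximity); weights: the
    user-defined stakeholder priorities, normalized internally.
    """
    criteria = np.asarray(criteria, float)
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()      # priorities sum to 1
    lo = criteria.min(axis=0)
    span = np.ptp(criteria, axis=0)        # max - min per criterion
    span = np.where(span == 0, 1.0, span)
    norm = (criteria - lo) / span          # min-max scale to [0, 1]
    return norm @ weights                  # one score per site
```

    Making the weight vector a user input is what keeps such a tool transparent and interactive: stakeholders with different priorities simply rerun the scoring with their own weights.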

  9. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
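    The contrast between the two diffusion forms, and the shape of the homogenized large-scale equation, can be sketched as follows. The symbols are my own choices and the harmonically averaged motility is an assumption about the form of the result, not a reproduction of the authors' derivation:

```latex
% Ecological diffusion: motility \mu(x) sits inside the Laplacian,
% so individuals respond to local habitat, not to gradients.
\frac{\partial u}{\partial t} = \nabla^2 \bigl[\, \mu(x)\, u \,\bigr]

% Fickian diffusion, by contrast, moves mass down density gradients:
\frac{\partial u}{\partial t} = \nabla \cdot \bigl[\, \mu(x)\, \nabla u \,\bigr]

% Sketch of a homogenized large-scale equation, with the small-scale
% variability entering only through an averaged motility \bar{\mu}
% (here taken as the harmonic mean over the small-scale cell \Omega):
\frac{\partial \bar{u}}{\partial t} = \bar{\mu}\, \nabla^2 \bar{u},
\qquad
\bar{\mu} = \left( \frac{1}{|\Omega|} \int_{\Omega} \frac{dx}{\mu(x)} \right)^{-1}
```

    The point of the procedure is visible in the last line: the 10-100 m variation in \mu(x) survives only as a single effective coefficient in the 10-100 km equation.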

  10. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host.

  11. Large scale anisotropy of UHECRs for the Telescope Array

    SciTech Connect

    Kido, E.

    2011-09-22

    The origin of Ultra High Energy Cosmic Rays (UHECRs) is one of the most interesting questions in astroparticle physics. Despite the efforts of previous measurements, there is still no consensus on either the origin of UHECRs or the mechanism of their generation and propagation. In this context, the Telescope Array (TA) experiment is expected to play an important role as the largest detector in the northern hemisphere, consisting of an array of surface particle detectors (SDs), fluorescence detectors (FDs) and other important calibration devices. We searched for large scale anisotropy using SD data of TA. UHECRs are expected to be restricted to the GZK horizon when the composition of UHECRs is proton, so the observed arrival directions are expected to exhibit local large scale anisotropy if UHECR sources are astrophysical objects. We used the SD data set from 11 May 2008 to 7 September 2010 to search for large-scale anisotropy. The discrimination power between LSS and isotropy is not yet sufficient, but the statistics accumulated by TA are expected to discriminate between the two at about the 95% confidence level on average in the near future.
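    The simplest large-scale anisotropy statistic, a first-harmonic (dipole) estimate from arrival directions, can be sketched as follows. It assumes a flux of the form 1 + A cos(theta) and ignores the detector exposure correction that a real TA analysis requires; the function name is an assumption:

```python
import numpy as np

def dipole_amplitude(ra, dec):
    """Dipole amplitude estimate from arrival directions.

    ra, dec: arrays of equatorial coordinates in radians. For a flux
    proportional to 1 + A*cos(theta), the mean of the unit direction
    vectors has length A/3, so A = 3 * |<n>|. An isotropic sky gives
    a mean vector near zero (up to sampling fluctuations).
    """
    x = np.cos(dec) * np.cos(ra)
    y = np.cos(dec) * np.sin(ra)
    z = np.sin(dec)
    mean_vec = np.array([x.mean(), y.mean(), z.mean()])
    return 3.0 * np.linalg.norm(mean_vec)
```

    With only a few thousand events the sampling fluctuation of this estimator is substantial, which is one way to see why the abstract reports limited discrimination power at current statistics.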

  12. How Large Scales Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  13. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. PMID:25731989
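    A minimal sketch of robust regression in the Huber sense, via iteratively reweighted least squares; this is a generic illustration of the idea of downweighting outlying subjects, not the study's actual pipeline, and all names are assumptions:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber robust regression via iteratively reweighted least squares.

    Observations with large residuals (relative to a robust scale
    estimate) get weight < 1, so a few outlying subjects cannot
    dominate the fit the way they can in ordinary least squares.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust scale: median absolute deviation, scaled for normality.
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = r / (delta * s)
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))  # Huber weights
        sw = np.sqrt(w)
        # Weighted least squares step via sqrt-weighted design and response.
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < 1e-12:
            return beta_new
        beta = beta_new
    return beta
```

    On clean Gaussian data the Huber fit closely tracks OLS, which is why robust procedures can be adopted with little cost in the well-behaved case.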

  14. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, C.P.; Olden, J.D.; Lytle, D.A.; Melis, T.S.; Schmidt, J.C.; Bray, E.N.; Freeman, Mary C.; Gido, K.B.; Hemphill, N.P.; Kennard, M.J.; McMullen, L.E.; Mims, M.C.; Pyron, M.; Robinson, C.T.; Williams, J.G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems. © 2011 by American Institute of Biological Sciences. All rights reserved.

  15. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.

  16. A visualization framework for large-scale virtual astronomy

    NASA Astrophysics Data System (ADS)

    Fu, Chi-Wing

    Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale; the graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and rendering techniques that advance the state of the art and can be transferred to practice in digital planetarium systems.

  17. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
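    The core geometric step, computing the intersection line of two fitted planes, can be sketched as follows; the function name and the parallel-plane tolerance are assumptions, and this is only the geometry, not the paper's full LSHP extraction pipeline:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersection line of planes n1.x + d1 = 0 and n2.x + d2 = 0.

    Returns (point_on_line, unit_direction), or None when the planes
    are (near-)parallel and no unique intersection line exists.
    """
    n1 = np.asarray(n1, float)
    n2 = np.asarray(n2, float)
    direction = np.cross(n1, n2)  # the line lies in both planes
    if np.linalg.norm(direction) < 1e-12:
        return None  # parallel or coincident planes
    # One point on the line: satisfy both plane equations, and pin the
    # free component along the direction to zero for a unique solution.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```

    In a full pipeline the two normals would come from planes fitted to the 3D line-support region, and scan points near the computed line would then be attached to it as its support set.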

  18. Impact of Large-scale Geological Architectures On Recharge

    NASA Astrophysics Data System (ADS)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of differences in recharge zones and subsequent well protection zones emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on the regional scale in a non-deterministic way. Geostatistical modeling carried out in a transition probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.

  19. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  20. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that the planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the system components' status. This approach can be used to ensure secure operation of the system by its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
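    The exponential fault-tree building blocks the abstract refers to can be sketched as follows; the component failure rates and the series/parallel layout are hypothetical, purely for illustration:

```python
import math

def reliability(rate, t):
    """Component reliability at time t under a constant failure rate
    (exponential distribution): R(t) = exp(-rate * t)."""
    return math.exp(-rate * t)

def series(*rel):
    """Series system: fails if ANY component fails (OR gate on failure)."""
    out = 1.0
    for r in rel:
        out *= r
    return out

def parallel(*rel):
    """Redundant system: fails only if ALL components fail (AND gate)."""
    out = 1.0
    for r in rel:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical PV subsystem: two redundant inverters, in series with
# the PV array and the grid transformer (rates in failures/hour).
t = 8760.0  # one year of operation
r_array = reliability(2e-6, t)
r_inverters = parallel(reliability(1e-5, t), reliability(1e-5, t))
r_transformer = reliability(1e-6, t)
r_system = series(r_array, r_inverters, r_transformer)
```

    Composing the gates this way makes it easy to see which component's rate dominates the top event, which is the information the planned-maintenance prioritization in the abstract relies on.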

  1. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  2. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  3. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products alongside ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
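    Gauge-based verification of the kind described above typically reduces to a few summary scores. A minimal sketch of three common ones, bias, RMSE and Pearson correlation (the station values below are invented for illustration):

```python
import math

def verify(gauge, product):
    """Bias, RMSE and Pearson correlation of a gridded product vs gauges."""
    n = len(gauge)
    bias = sum(p - g for g, p in zip(gauge, product)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for g, p in zip(gauge, product)) / n)
    mg = sum(gauge) / n
    mp = sum(product) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(gauge, product))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gauge))
    sp = math.sqrt(sum((p - mp) ** 2 for p in product))
    return bias, rmse, cov / (sg * sp)

# Monthly precipitation totals (mm) at five hypothetical stations.
gauge = [42.0, 80.0, 15.0, 60.0, 33.0]
grid = [40.0, 95.0, 10.0, 55.0, 28.0]
bias, rmse, corr = verify(gauge, grid)
```

    Applied at multiple temporal aggregations (daily, monthly, annual), scores like these are what underlie the "verification criteria for various temporal and spatial scales" mentioned in the abstract.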

  4. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of "large-scale" will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that it will be backed by a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  5. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  6. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  7. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. 
The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
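    The rank-one Jacobian update at the heart of Broyden's method can be sketched compactly. The toy below (a hypothetical illustration, not Sandia's solver) applies the "good Broyden" update to a 2-variable system: the Jacobian is estimated once by finite differences, then corrected after each step instead of being re-evaluated, which is exactly the saving that matters when Jacobians are expensive or unavailable:

```python
def broyden_solve(f, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method for a 2-variable system f(x) = 0."""
    x = list(x0)
    fx = f(x)
    h = 1e-6
    # Initial Jacobian by forward finite differences.
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x)
        xp[j] += h
        fp = f(xp)
        for i in range(2):
            J[i][j] = (fp[i] - fx[i]) / h
    for _ in range(max_iter):
        # Solve J s = -f(x) by Cramer's rule (2x2 only).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        s0 = (-fx[0] * J[1][1] + J[0][1] * fx[1]) / det
        s1 = (-J[0][0] * fx[1] + J[1][0] * fx[0]) / det
        x = [x[0] + s0, x[1] + s1]
        fx_new = f(x)
        if max(abs(v) for v in fx_new) < tol:
            return x
        # Broyden rank-one update: J += (df - J s) s^T / (s . s)
        df = [fx_new[0] - fx[0], fx_new[1] - fx[1]]
        Js = [J[0][0] * s0 + J[0][1] * s1, J[1][0] * s0 + J[1][1] * s1]
        ss = s0 * s0 + s1 * s1
        for i in range(2):
            J[i][0] += (df[i] - Js[i]) * s0 / ss
            J[i][1] += (df[i] - Js[i]) * s1 / ss
        fx = fx_new
    return x

# Root of f(x, y) = (x^2 + y^2 - 4, x - y) near (1.5, 1.0): (sqrt(2), sqrt(2)).
root = broyden_solve(lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]],
                     [1.5, 1.0])
```

    The limited-memory variant discussed in the report stores only the recent update vectors rather than the dense matrix J, which is what makes the approach viable at large scale.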

  8. Rapid Quantification of Mutant Fitness in Diverse Bacteria by Sequencing Randomly Bar-Coded Transposons

    PubMed Central

    Wetmore, Kelly M.; Price, Morgan N.; Waters, Robert J.; Lamson, Jacob S.; He, Jennifer; Hoover, Cindi A.; Blow, Matthew J.; Bristow, James; Butland, Gareth

    2015-01-01

    ABSTRACT Transposon mutagenesis with next-generation sequencing (TnSeq) is a powerful approach to annotate gene function in bacteria, but existing protocols for TnSeq require laborious preparation of every sample before sequencing. Thus, the existing protocols are not amenable to the throughput necessary to identify phenotypes and functions for the majority of genes in diverse bacteria. Here, we present a method, random bar code transposon-site sequencing (RB-TnSeq), which increases the throughput of mutant fitness profiling by incorporating random DNA bar codes into Tn5 and mariner transposons and by using bar code sequencing (BarSeq) to assay mutant fitness. RB-TnSeq can be used with any transposon, and TnSeq is performed once per organism instead of once per sample. Each BarSeq assay requires only a simple PCR, and 48 to 96 samples can be sequenced on one lane of an Illumina HiSeq system. We demonstrate the reproducibility and biological significance of RB-TnSeq with Escherichia coli, Phaeobacter inhibens, Pseudomonas stutzeri, Shewanella amazonensis, and Shewanella oneidensis. To demonstrate the increased throughput of RB-TnSeq, we performed 387 successful genome-wide mutant fitness assays representing 130 different bacterium-carbon source combinations and identified 5,196 genes with significant phenotypes across the five bacteria. In P. inhibens, we used our mutant fitness data to identify genes important for the utilization of diverse carbon substrates, including a putative d-mannose isomerase that is required for mannitol catabolism. RB-TnSeq will enable the cost-effective functional annotation of diverse bacteria using mutant fitness profiling. PMID:25968644
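    The BarSeq fitness readout described above can be sketched in a few lines. This is a simplified illustration of the general idea, not the authors' pipeline: each strain's fitness is the log2 ratio of its bar-code abundance after growth in the condition versus the "time-zero" sample, with per-sample normalization so sequencing depth cancels and a pseudocount to stabilize low-abundance strains:

```python
import math

def strain_fitness(t0_counts, cond_counts, pseudo=1.0):
    """Per-strain fitness from bar-code read counts (simplified BarSeq-style)."""
    t0_total = sum(t0_counts.values())
    cond_total = sum(cond_counts.values())
    fitness = {}
    for barcode, n0 in t0_counts.items():
        n1 = cond_counts.get(barcode, 0)
        # Normalize by total reads per sample; pseudocount avoids log(0).
        f0 = (n0 + pseudo) / (t0_total + pseudo)
        f1 = (n1 + pseudo) / (cond_total + pseudo)
        fitness[barcode] = math.log2(f1 / f0)
    return fitness

# Strain 'bc2' drops out under the condition -> strongly negative fitness.
fit = strain_fitness({"bc1": 500, "bc2": 500}, {"bc1": 900, "bc2": 10})
```

    Gene-level fitness in an RB-TnSeq experiment would then aggregate such per-strain values across the multiple independently bar-coded insertions in each gene.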

  9. Optimization of Combinatorial Mutagenesis

    NASA Astrophysics Data System (ADS)

    Parker, Andrew S.; Griswold, Karl E.; Bailey-Kellogg, Chris

    Protein engineering by combinatorial site-directed mutagenesis evaluates a portion of the sequence space near a target protein, seeking variants with improved properties (stability, activity, immunogenicity, etc.). In order to improve the hit-rate of beneficial variants in such mutagenesis libraries, we develop methods to select optimal positions and corresponding sets of the mutations that will be used, in all combinations, in constructing a library for experimental evaluation. Our approach, OCoM (Optimization of Combinatorial Mutagenesis), encompasses both degenerate oligonucleotides and specified point mutations, and can be directed accordingly by requirements of experimental cost and library size. It evaluates the quality of the resulting library by one- and two-body sequence potentials, averaged over the variants. To ensure that it is not simply recapitulating extant sequences, it balances the quality of a library with an explicit evaluation of the novelty of its members. We show that, despite dealing with a combinatorial set of variants, in our approach the resulting library optimization problem is actually isomorphic to single-variant optimization. By the same token, this means that the two-body sequence potential results in an NP-hard optimization problem. We present an efficient dynamic programming algorithm for the one-body case and a practically-efficient integer programming approach for the general two-body case. We demonstrate the effectiveness of our approach in designing libraries for three different case study proteins targeted by previous combinatorial libraries - a green fluorescent protein, a cytochrome P450, and a beta lactamase. We found that OCoM worked quite efficiently in practice, requiring only 1 hour even for the massive design problem of selecting 18 mutations to generate 10⁷ variants of a 443-residue P450. 
We demonstrate the general ability of OCoM in enabling the protein engineer to explore and evaluate trade-offs between quality and

  10. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J. E-mail: matiasz@ias.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  11. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    The technical and economic feasibility of large-scale CO₂ transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO₂. In one case, CO₂ was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provide the best method for delivering liquid CO₂ to deep ocean floor depressions. For shorter distances, CO₂ delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO₂, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO₂ transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO₂ effluent plumes have been identified as areas that require further attention. Our planned activities in the next phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO₂ discharge and dispersion, and a conceptual economic and engineering evaluation of large-scale implementation.

  12. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  13. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled for demonstrations of hybrids, showing the safety of hybrid rockets to audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations. However, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has this been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great study in itself, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  14. Statistical analysis of large-scale neuronal recording data

    PubMed Central

    Reed, Jamie L.; Kaas, Jon H.

    2010-01-01

    Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
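    The core point above, that analyses ignoring dependencies between simultaneously recorded channels understate uncertainty, can be illustrated without any neural data. In this sketch (synthetic numbers, and a cluster-robust standard error rather than a full GEE fit), each "cluster" stands in for one recording session whose trials share a common offset:

```python
import math
import statistics

def naive_se(values):
    """Standard error assuming all observations are independent."""
    return statistics.stdev(values) / math.sqrt(len(values))

def cluster_se(clusters):
    """Cluster-robust standard error: treat each cluster (e.g. one
    recording session) as the independent unit, not each trial."""
    means = [statistics.mean(c) for c in clusters]
    return statistics.stdev(means) / math.sqrt(len(means))

# Five sessions, ten trials each; trials within a session share a
# common offset, so they are strongly correlated.
clusters = [[base + 0.01 * i for i in range(10)]
            for base in (1.0, 2.0, 3.0, 4.0, 5.0)]
pooled = [v for c in clusters for v in c]
se_naive = naive_se(pooled)
se_robust = cluster_se(clusters)
```

    The naive estimate treats 50 correlated trials as 50 independent observations and is far too optimistic; Generalized Estimating Equations formalize this correction while also allowing non-normal response distributions and nonlinear link functions.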

  15. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., the explosion of a star). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer size of the generated data sets makes efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the ''true'' (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called the univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
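    The univariate mean modeler admits a compact sketch (a toy illustration of the idea, not the AQSim code): systematic partitions store only a start index, a count, and a mean, and range queries are answered from those summaries rather than from the raw data:

```python
def build_model(data, partition_size):
    """Summarize data into fixed-size partitions of (start, count, mean)."""
    model = []
    for start in range(0, len(data), partition_size):
        chunk = data[start:start + partition_size]
        model.append((start, len(chunk), sum(chunk) / len(chunk)))
    return model

def query_mean(model, lo, hi):
    """Approximate mean over index range [lo, hi) using only the
    partition summaries that overlap the range (weighted by overlap)."""
    total, weight = 0.0, 0
    for start, count, mean in model:
        overlap = min(hi, start + count) - max(lo, start)
        if overlap > 0:
            total += mean * overlap
            weight += overlap
    return total / weight

data = [float(i % 10) for i in range(1000)]   # stand-in simulation output
model = build_model(data, 100)                # 10 summaries vs 1000 values
approx = query_mean(model, 0, 500)
```

    The modeling error is controlled by the partition size: smaller partitions give more accurate range queries at the cost of more stored summaries, which is the multiple-resolution trade-off AQSim exposes.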

  16. Infectious diseases in large-scale cat hoarding investigations.

    PubMed

    Polak, K C; Levy, J K; Crawford, P C; Leutenegger, C M; Moriello, K A

    2014-08-01

    Animal hoarders accumulate animals in over-crowded conditions without adequate nutrition, sanitation, and veterinary care. As a result, animals rescued from hoarding frequently have a variety of medical conditions including respiratory infections, gastrointestinal disease, parasitism, malnutrition, and other evidence of neglect. The purpose of this study was to characterize the infectious diseases carried by clinically affected cats and to determine the prevalence of retroviral infections among cats in large-scale cat hoarding investigations. Records were reviewed retrospectively from four large-scale seizures of cats from failed sanctuaries from November 2009 through March 2012. The number of cats seized in each case ranged from 387 to 697. Cats were screened for feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) in all four cases and for dermatophytosis in one case. A subset of cats exhibiting signs of upper respiratory disease or diarrhea had been tested for infections by PCR and fecal flotation for treatment planning. Mycoplasma felis (78%), calicivirus (78%), and Streptococcus equi subspecies zooepidemicus (55%) were the most common respiratory infections. Feline enteric coronavirus (88%), Giardia (56%), Clostridium perfringens (49%), and Tritrichomonas foetus (39%) were most common in cats with diarrhea. The seroprevalence of FeLV and FIV were 8% and 8%, respectively. In the one case in which cats with lesions suspicious for dermatophytosis were cultured for Microsporum canis, 69/76 lesional cats were culture-positive; of these, half were believed to be truly infected and half were believed to be fomite carriers. Cats from large-scale hoarding cases had high risk for enteric and respiratory infections, retroviruses, and dermatophytosis. Case responders should be prepared for mass treatment of infectious diseases and should implement protocols to prevent transmission of feline or zoonotic infections during the emergency response and when

  17. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. 
Together, these research efforts help to improve the efficiency
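    The entropy characterization underlying information-aware partitioning can be sketched as follows (a simplified illustration, not the dissertation's algorithm): the empirical Shannon entropy of the symbol stream crossing a candidate cut bounds how far that inter-chip traffic can be losslessly compressed, so cuts carrying low-entropy signals are preferred:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(stream):
    """Empirical Shannon entropy H = -sum(p * log2 p) of a symbol stream;
    a per-symbol lower bound on lossless compression of the stream."""
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two candidate cuts carrying the same data volume: a skewed, highly
# compressible signal vs. a near-uniform one.
skewed = [0] * 900 + [1] * 100    # mostly-idle control lines
uniform = list(range(8)) * 125    # dense 3-bit data bus
h_skewed = entropy_bits_per_symbol(skewed)
h_uniform = entropy_bits_per_symbol(uniform)
```

    A partitioner can fold scores like these into its cut cost so that edges with compressible traffic are cheaper to cut than edges carrying near-random data.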

  18. Large-scale molten core/material interaction experiments

    SciTech Connect

    Chu, T.Y.

    1984-01-01

    This paper describes the facility and melting technology for the large-scale molten core/material interaction experiments being carried out at Sandia National Laboratories. The facility is the largest of its kind anywhere. It is capable of producing core melts of up to 500 kg at a temperature of 3000 K. Results of a recent experiment involving the release of 230 kg of core melt into a magnesia brick crucible are discussed in detail. Data on the thermal and mechanical responses of the magnesia brick, heat flux partitioning, melt penetration, and gas and aerosol generation are presented.

  19. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    NASA Astrophysics Data System (ADS)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large-scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application: stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin-disc laser, and seam lengths of up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet, which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  20. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ, while others use surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile endpoint, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill.

  1. Locally Biased Galaxy Formation and Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Narayanan, Vijay K.; Berlind, Andreas A.; Weinberg, David H.

    2000-01-01

    We examine the influence of the morphology-density relation and a wide range of simple models for biased galaxy formation on statistical measures of large-scale structure. We contrast the behavior of local biasing models, in which the efficiency of galaxy formation is determined by the density, geometry, or velocity dispersion of the local mass distribution, with that of nonlocal biasing models, in which galaxy formation is modulated coherently over scales larger than the galaxy correlation length. If morphological segregation of galaxies is governed by a local morphology-density relation, then the correlation function of E/S0 galaxies should be steeper and stronger than that of spiral galaxies on small scales, as observed, while on large scales the E/S0 and spiral galaxies should have correlation functions with the same shape but different amplitudes. Similarly, all of our local bias models produce scale-independent amplification of the correlation function and power spectrum in the linear and mildly nonlinear regimes; only a nonlocal biasing mechanism can alter the shape of the power spectrum on large scales. Moments of the biased galaxy distribution retain the hierarchical pattern of the mass moments, but biasing alters the values and scale dependence of the hierarchical amplitudes S3 and S4. Pair-weighted moments of the galaxy velocity distribution are sensitive to the details of the bias prescription even if galaxies have the same local velocity distribution as the underlying dark matter. The nonlinearity of the relation between galaxy density and mass density depends on the biasing prescription and the smoothing scale, and the scatter in this relation is a useful diagnostic of the physical parameters that determine the bias. While the assumption that galaxy formation is governed by local physics leads to some important simplifications on large scales, even local biasing is a multifaceted phenomenon whose impact cannot be described by a single parameter or

  2. Why large-scale seasonal streamflow forecasts are feasible

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Candogan Yossef, N.; Van Beek, L. P.

    2011-12-01

    Seasonal forecasts of precipitation and temperature, using either statistical or dynamic prediction, have been around for almost two decades. The skill of these forecasts differs in both space and time, with the highest skill in areas heavily influenced by SST anomalies, such as El Niño, or areas where land surface properties have a major impact on, e.g., monsoon strength, such as the vegetation cover of the Sahel region or the snow cover of the Tibetan plateau. However, the skill of seasonal forecasts is limited in most regions, with anomaly correlation coefficients varying between 0.2 and 0.5 for 1-3 month precipitation totals. This raises the question whether seasonal hydrological forecasting is feasible. Here, we make the case that it is. Using the example of statistical forecasts of NAO strength and related precipitation anomalies over Europe, we show that the skill of large-scale streamflow forecasts is generally much higher than that of the precipitation forecasts themselves, provided that the initial state of the system is accurately estimated. In the latter case, even the precipitation climatology can produce skillful results. This is due to the inertia of the hydrological system rooted in the storage of soil moisture, groundwater and snow pack, as corroborated by a recent study using snow observations for seasonal streamflow forecasting in the Western US. These examples suggest that for accurate seasonal hydrological forecasting, correct state estimation is more important than accurate seasonal meteorological forecasts. However, large-scale estimation of hydrological states is difficult, and validation of large-scale hydrological models often reveals large biases in, e.g., streamflow estimates. Fortunately, as shown with a validation study of the global model PCR-GLOBWB, these biases are of less importance when seasonal forecasts are evaluated in terms of their ability to reproduce anomalous flows and extreme events, i.e. by anomaly correlations or categorical quantile
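
    The anomaly correlation coefficient quoted above is the correlation between forecast and observed departures from climatology. A minimal sketch; all numbers are invented for illustration, and real skill scores are computed over long hindcast series.

```python
import math

# Anomaly correlation coefficient (ACC): correlate forecast and observed
# departures from climatology rather than the raw values themselves.

def anomaly_correlation(forecast, observed, climatology):
    fa = [f - c for f, c in zip(forecast, climatology)]   # forecast anomalies
    oa = [o - c for o, c in zip(observed, climatology)]   # observed anomalies
    mf, mo = sum(fa) / len(fa), sum(oa) / len(oa)
    num = sum((x - mf) * (y - mo) for x, y in zip(fa, oa))
    den = math.sqrt(sum((x - mf) ** 2 for x in fa)
                    * sum((y - mo) ** 2 for y in oa))
    return num / den

clim = [50.0, 60.0, 80.0, 70.0, 55.0]    # monthly climatology (mm), toy values
obs = [55.0, 52.0, 90.0, 76.0, 50.0]     # observed monthly totals
fcst = [53.0, 57.0, 86.0, 74.0, 52.0]    # forecast monthly totals
acc = anomaly_correlation(fcst, obs, clim)
```

A forecast that merely reproduces climatology has zero anomaly and thus no ACC skill, which is why state estimation can still add skill on top of a climatological forecast.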

  3. Towards large scale production and separation of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Alvarez, Noe T.

    Since their discovery, carbon nanotubes (CNTs) have boosted the research and applications of nanotechnology; however, many applications of CNTs are inaccessible because they depend upon large-scale CNT production and separations. Type, chirality and diameter control of CNTs determine many of their physical properties, and such control is still not accessible. This thesis studies the fundamentals of scalable selective reactions of HiPCo CNTs as well as the early phase of routes to an inexpensive approach for large-scale CNT production. On the growth side, this thesis covers a complete wet-chemistry process of catalyst and catalyst-support deposition for the growth of vertically aligned (VA) CNTs. A wet-chemistry preparation process is of significant importance for CNT synthesis through chemical vapor deposition (CVD). CVD is by far the most suitable and inexpensive process for large-scale CNT production compared to other common processes such as laser ablation and arc discharge. However, its potential has been limited, and its competitiveness reduced, by low-yielding and difficult preparation processes for the catalyst and its support. The wet-chemistry process takes advantage of current nanoparticle technology to deposit the catalyst and the catalyst support as a thin film of nanoparticles, making the protocol simple compared to electron beam evaporation and sputtering processes. On the selective-reactions side, this thesis studies UV irradiation of individually dispersed HiPCo CNTs, which generates auto-selective reactions in the liquid phase with good control over diameter and chirality. This technique is well suited for large-scale, continuous separation of CNTs by diameter and type. Additionally, an innovative, simple catalyst deposition through abrasion is demonstrated: simple friction between the catalyst and the substrate deposits a high enough density of metal catalyst particles for successful CNT growth. This simple approach has

  4. Novel algorithm of large-scale simultaneous linear equations.

    PubMed

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-02-24

    We review our recently developed methods for solving large-scale simultaneous linear equations and their applications to electronic structure calculations in both one-electron and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
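
    The structural fact exploited by shifted Krylov solvers such as the one above is that the Krylov subspace built from A and b is identical for every shifted matrix A + σI. The sketch below solves a small family of shifted systems with a plain conjugate gradient loop to illustrate the problem setting; it is not the shifted COCG algorithm itself, and the SPD test matrix is an arbitrary stand-in.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=500):
    """Plain conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Family of shifted systems (A + sigma*I) x = b: the Krylov subspace
# K_k(A, b) is shift-invariant, which is what shifted solvers exploit to
# amortize all matrix-vector products across shifts. Here each system is
# simply solved independently for illustration.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)       # arbitrary SPD test matrix
b = rng.standard_normal(50)
shifts = [0.0, 1.0, 10.0]
solutions = {s: cg(A + s * np.eye(50), b) for s in shifts}
```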

  5. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that will classify images by computing the Hamming distance of these features. From the experimental results derived from the benchmark database Caltech 101, and an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.
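
    The classification step described above reduces, classically, to nearest-neighbor matching under the Hamming distance. A minimal classical sketch; the binary feature vectors stand in for the paper's Schmidt-decomposition features, and all labels and data are illustrative.

```python
import numpy as np

def hamming_distance(a, b):
    """Number of positions where two binary feature vectors differ."""
    return int(np.sum(a != b))

def classify(features, training_set):
    """Assign the label of the nearest stored feature vector.

    training_set: list of (label, binary feature vector) pairs.
    """
    return min(training_set, key=lambda lv: hamming_distance(features, lv[1]))[0]

# Toy training set of precomputed binary feature vectors.
train = [("cat", np.array([1, 0, 1, 1, 0, 0])),
         ("dog", np.array([0, 1, 0, 1, 1, 0]))]
query = np.array([1, 0, 1, 0, 0, 0])   # distance 1 to "cat", 5 to "dog"
label = classify(query, train)          # -> "cat"
```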

  6. Generation of Large-Scale Winds in Horizontally Anisotropic Convection.

    PubMed

    von Hardenberg, J; Goluskin, D; Provenzale, A; Spiegel, E A

    2015-09-25

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection, confined between free-slip horizontal plates and rotating about a distant horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind. PMID:26451558

  7. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large scale composite manufacturing is an important goal to produce lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges of producing lightweight composite structures such as fairings for heavy lift launch vehicles.

  8. Search for Large Scale Anisotropies with the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Bonino, R.; Pierre Auger Collaboration

    The Pierre Auger Observatory studies the nature and the origin of Ultra High Energy Cosmic Rays (>3×10^18 eV). Completed at the end of 2008, it has been continuously operating for more than six years. Using data collected from 1 January 2004 until 31 March 2009, we search for large scale anisotropies with two complementary analyses in different energy windows. No significant anisotropies are observed, resulting in bounds on the first harmonic amplitude at the 1% level at EeV energies.
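
    The first-harmonic amplitude bounded in this search is the classical Rayleigh measure of dipole-like anisotropy in right ascension. A minimal sketch on synthetic phases, ignoring the detector-exposure corrections a real analysis requires:

```python
import math
import random

def first_harmonic(phases):
    """Rayleigh first-harmonic amplitude of a set of phases (radians)."""
    n = len(phases)
    a = 2.0 / n * sum(math.cos(p) for p in phases)
    b = 2.0 / n * sum(math.sin(p) for p in phases)
    return math.hypot(a, b)

random.seed(1)
# Isotropic arrival directions: expected amplitude ~ sqrt(pi/N), i.e. small.
iso = [random.uniform(0.0, 2.0 * math.pi) for _ in range(100000)]
r_iso = first_harmonic(iso)

# Phases clustered around 0: a strong dipole-like first harmonic.
dipole = [random.gauss(0.0, 1.0) for _ in range(100000)]
r_dip = first_harmonic(dipole)
```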

  9. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 A, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.

  10. Evaluation of uncertainty in large-scale fusion metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Fumin; Qu, Xinghua; Wu, Hongyan; Ye, Shenghua

    2008-12-01

    The expression of uncertainty for conventional-scale measurement is well established; however, owing to the variety of error sources, it is still hard to obtain the uncertainty of large-scale instruments by common methods. In this paper, the uncertainty is evaluated by Monte Carlo simulation. The point clouds created by this method are shown through computer visualization, and a point-by-point analysis is made. Thus, in fusion measurement, apart from the uncertainty of every instrument being expressed directly, the contribution of each error source to the overall uncertainty becomes easy to calculate. Finally, an application of this method to measuring a tunnel component is given.
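
    The Monte Carlo approach described above can be sketched as follows: perturb each error source, recompute the measurement many times, and read the combined uncertainty off the spread of the resulting samples. The error model and magnitudes below are invented for illustration only.

```python
import random
import statistics

random.seed(42)

def measure_distance(true_d=25000.0, scale_err=2e-5, zero_err=0.5):
    """One simulated large-scale distance measurement in mm.

    Two hypothetical error sources: a multiplicative instrument scale
    error and an additive zero/offset error.
    """
    scale = random.gauss(1.0, scale_err)   # instrument scale error
    zero = random.gauss(0.0, zero_err)     # zero/offset error (mm)
    return true_d * scale + zero

# Draw many simulated measurements and summarize the point cloud.
samples = [measure_distance() for _ in range(20000)]
mean_d = statistics.fmean(samples)
std_d = statistics.pstdev(samples)         # combined standard uncertainty
```

With these toy magnitudes the two sources contribute 0.5 mm each, so the combined standard uncertainty comes out near sqrt(0.5² + 0.5²) ≈ 0.71 mm, and the per-source contribution is obtained by rerunning with one source switched off.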

  11. Large-scale sodium spray fire code validation (SOFICOV) test

    SciTech Connect

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850-m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes.

  12. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  13. Large-Scale Compton Imaging for Wide-Area Surveillance

    SciTech Connect

    Lange, D J; Manini, H A; Wright, D M

    2006-03-01

    We study the performance of a large-scale Compton imaging detector placed in a low-flying aircraft, used to search wide areas for rad/nuc threat sources. In this paper we investigate the performance potential of equipping aerial platforms with gamma-ray detectors that have photon sensitivity up to a few MeV. We simulate the detector performance, and present receiver operating characteristics (ROC) curves for a benchmark scenario using a ¹³⁷Cs source. The analysis uses a realistic environmental background energy spectrum and includes air attenuation.

  14. Frequency domain multiplexing for large-scale bolometer arrays

    SciTech Connect

    Spieler, Helmuth

    2002-05-31

    The development of planar fabrication techniques for superconducting transition-edge sensors has brought large-scale arrays of 1000 pixels or more to the realm of practicality. This raises the problem of reading out a large number of sensors with a tractable number of connections. A possible solution is frequency-domain multiplexing. I summarize basic principles, present various circuit topologies, and discuss design trade-offs, noise performance, cross-talk and dynamic range. The design of a practical device and its readout system is described with a discussion of fabrication issues, practical limits and future prospects.
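
    The principle behind frequency-domain multiplexing can be illustrated with a toy numerical model: each sensor amplitude-modulates its own carrier, the carriers share a single wire, and per-channel demodulation against the reference tone recovers each sensor value. The frequencies and amplitudes below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

fs = 100000.0                          # sampling rate (Hz)
t = np.arange(10000) / fs              # 0.1 s of samples
carriers = [3000.0, 5000.0, 7000.0]    # one carrier frequency per sensor
signals = [0.2, 0.7, 1.3]              # quasi-static sensor amplitudes

# All sensor carriers are summed onto a single wire.
wire = sum(a * np.sin(2.0 * np.pi * f * t)
           for a, f in zip(signals, carriers))

def demodulate(wire, f):
    """Lock-in style recovery of one channel from the summed signal."""
    ref = np.sin(2.0 * np.pi * f * t)
    return 2.0 * np.mean(wire * ref)   # orthogonality kills other carriers

recovered = [demodulate(wire, f) for f in carriers]
```

Because each carrier completes an integer number of cycles in the window, the cross terms average to zero and each channel is recovered cleanly; in hardware, channel spacing and filter bandwidths set the achievable cross-talk.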

  15. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-07-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that will classify images by computing the Hamming distance of these features. From the experimental results derived from the benchmark database Caltech 101, and an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.

  16. Radiative shocks on large scale lasers. Preliminary results

    NASA Astrophysics Data System (ADS)

    Leygnac, S.; Bouquet, S.; Stehle, C.; Barroso, P.; Batani, D.; Benuzzi, A.; Cathala, B.; Chièze, J.-P.; Fleury, X.; Grandjouan, N.; Grenier, J.; Hall, T.; Henry, E.; Koenig, M.; Lafon, J. P. J.; Malka, V.; Marchet, B.; Merdji, H.; Michaut, C.; Poles, L.; Thais, F.

    2001-05-01

    Radiative shocks, whose structure is strongly influenced by the radiation field, are present in various astrophysical objects (circumstellar envelopes of variable stars, supernovae ...). Their modeling is very difficult and will thus benefit from experimental information. This approach is now possible using large scale lasers. Preliminary experiments were performed with the nanosecond LULI laser at Ecole Polytechnique (France) in 2000. A radiative shock was obtained in a low-pressure xenon cell. The preparation of such experiments and their interpretation are performed using analytical calculations and numerical simulations.

  17. Solving Large-scale Eigenvalue Problems in SciDAC Applications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of recent developments in eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
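
    Krylov subspace eigensolvers of the kind surveyed above are built from repeated matrix-vector products. The simplest member of that family is power iteration for the dominant eigenpair, sketched below on a toy symmetric matrix; production solvers use Lanczos or Arnoldi iterations with restarts and preconditioning.

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Dominant eigenpair of a symmetric matrix via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v                       # one matrix-vector product per step
        v /= np.linalg.norm(v)
    return v @ A @ v, v                 # Rayleigh quotient, eigenvector

# Toy diagonal matrix: the dominant eigenvalue is 10.
A = np.diag([1.0, 2.0, 10.0])
lam, v = power_iteration(A)
```

Convergence is governed by the ratio of the two largest eigenvalue magnitudes, which is exactly the weakness (clustered or interior eigenvalues) that motivates the preconditioned and sub-structuring approaches discussed in the paper.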

  18. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  19. [National Strategic Promotion for Large-Scale Clinical Cancer Research].

    PubMed

    Toyama, Senya

    2016-04-01

    The number of clinical research studies by clinical cancer study groups has been decreasing this year in Japan. They say the reason is the abolition of donations to the groups from pharmaceutical companies after the Diovan scandal. But I suppose the fundamental problem is that a government-supported large-scale clinical cancer study system for evidence-based medicine (EBM) has not been fully established. An urgent establishment of such a system, based on a national strategy, is needed for cancer patients and for public health promotion.

  20. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-size data bring serious challenges to storage and computing technologies. Cloud computing is an alternative for cracking this nut because it gives concurrent consideration to enabling storage and high-performance computing on large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable. PMID:24288665

  1. Large-scale deformation associated with ridge subduction

    USGS Publications Warehouse

    Geist, E.L.; Fisher, M.A.; Scholl, D. W.

    1993-01-01

    Continuum models are used to investigate the large-scale deformation associated with the subduction of aseismic ridges. Formulated in the horizontal plane using thin viscous sheet theory, these models measure the horizontal transmission of stress through the arc lithosphere accompanying ridge subduction. Modelling was used to compare the Tonga arc and Louisville ridge collision with the New Hebrides arc and d'Entrecasteaux ridge collision, which have disparate arc-ridge intersection speeds but otherwise similar characteristics. Models of both systems indicate that diffuse deformation (low values of the effective stress-strain exponent n) is required to explain the observed deformation. -from Authors

  2. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  3. Large-Scale Purification of Peroxisomes for Preparative Applications.

    PubMed

    Cramer, Jana; Effelsberg, Daniel; Girzalsky, Wolfgang; Erdmann, Ralf

    2015-09-01

    This protocol is designed for large-scale isolation of highly purified peroxisomes from Saccharomyces cerevisiae using two consecutive density gradient centrifugations. Instructions are provided for harvesting up to 60 g of oleic acid-induced yeast cells for the preparation of spheroplasts and generation of organellar pellets (OPs) enriched in peroxisomes and mitochondria. The OPs are loaded onto eight continuous 36%-68% (w/v) sucrose gradients. After centrifugation, the peak peroxisomal fractions are determined by measurement of catalase activity. These fractions are subsequently pooled and subjected to a second density gradient centrifugation using 20%-40% (w/v) Nycodenz. PMID:26330621

  4. Large-scale Direct Targeting for Drug Repositioning and Discovery.

    PubMed

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of direct drug-target interactions is vital to drug repositioning and discovery. However, the biological means of doing so on a large scale remain challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify direct drug targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties in an ensemble framework; (2) determining a drug's affiliation with a target by evaluating the overall (ensemble) similarity rather than a single ligand judgment; and (3) integrating the standardized ensemble similarities (Z scores) by Bayesian network and multi-variate kernel approaches to make predictions. All these lead WES to predict direct drug targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery.
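
    The ensemble-similarity idea can be sketched in a few lines: score a candidate against the whole ligand set of a target, then standardize that ensemble score against a random background to obtain a Z score. The set-based fingerprints and Tanimoto similarity below are generic stand-ins for the paper's descriptors, not its actual features.

```python
import random
import statistics

random.seed(7)

def tanimoto(a, b):
    """Tanimoto similarity of two set-based fingerprints."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def ensemble_score(query, ligands):
    """Mean similarity of the query to the target's whole ligand ensemble."""
    return statistics.fmean(tanimoto(query, l) for l in ligands)

def z_score(score, background_scores):
    mu = statistics.fmean(background_scores)
    sigma = statistics.pstdev(background_scores)
    return (score - mu) / sigma

# Toy target: ten known ligands sharing a common scaffold {1, 2, 3, 4}.
target_ligands = [frozenset({1, 2, 3, 4, i}) for i in range(5, 15)]
query = frozenset({1, 2, 3, 4, 20})            # also carries the scaffold
background = [frozenset(random.sample(range(100), 5)) for _ in range(200)]

s = ensemble_score(query, target_ligands)
bg = [ensemble_score(q, target_ligands) for q in background]
z = z_score(s, bg)          # high Z -> query predicted to hit this target
```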

  5. Strong CP Violation in Large Scale Magnetic Fields

    SciTech Connect

    Faccioli, P.; Millo, R.

    2007-11-19

    We explore the possibility of improving on the present experimental bounds on strong CP violation by studying processes in which the smallness of θ is compensated by the presence of some other very large scale. In particular, we study the response of the θ vacuum to large-scale magnetic fields, whose correlation lengths can be as large as the size of galaxy clusters. We find that, if strong interactions break CP, an external magnetic field would induce an electric vacuum polarization along the same direction. As a consequence, u,d-bar and d,u-bar quarks would accumulate in opposite regions of space, giving rise to an electric dipole moment. We estimate the magnitude of this effect both at T = 0 and for 0

  6. Alignment of quasar polarizations with large-scale structures

    NASA Astrophysics Data System (ADS)

    Hutsemékers, D.; Braibant, L.; Pelgrims, V.; Sluse, D.

    2014-12-01

    We have measured the optical linear polarization of quasars belonging to Gpc scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is on the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 092.A-0221. Table 1 is available in electronic form at http://www.aanda.org

  7. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  8. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  9. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M⊙/h < Mhalo < 10¹³ M⊙/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.

  10. Large scale floodplain mapping using a hydrogeomorphic method

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Yan, K.; Di Baldassarre, G.; Grimaldi, S.

    2013-12-01

    Floodplain landforms are clearly distinguishable from adjacent hillslopes, being the trace of the severe floods that shaped the terrain. As a result, digital topography intrinsically contains the floodplain information; this work presents the results of the application of a DEM-based large-scale hydrogeomorphic floodplain delineation method. The proposed approach, based on the integration of terrain analysis algorithms in a GIS framework, automatically identifies the potentially frequently saturated zones of riparian areas by analysing the maximum flood flow heights associated with stream network nodes with respect to the surrounding uplands. Flow heights are estimated by imposing a Leopold-type law that scales with the contributing area. The presented case studies include floodplain maps of large river basins for the entire Italian territory, which are also used for calibrating the Leopold scaling parameters, as well as additional large international river basins with different climatic and geomorphic characteristics, laying the basis for the use of this approach for global floodplain mapping. The proposed tool could be useful for detecting hydrological change, since it can easily provide maps to verify the impact of floods on human activities and, vice versa, how human activities have changed in floodplain areas at large scale.
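
    The hydrogeomorphic criterion described above can be sketched as a simple rule: a DEM cell is flagged as floodplain if its elevation above the nearest stream node falls below a flood-flow height that scales with contributing area through a Leopold-type power law h = a·A^b. The coefficients below are illustrative placeholders, not the calibrated values from the paper.

```python
# Leopold-type scaling of flood-flow height with contributing area.
A_COEF, B_EXP = 0.1, 0.3        # calibrated in the paper; toy values here

def flow_height(contributing_area_km2):
    """Flood-flow height (m) at a stream node, h = a * A**b."""
    return A_COEF * contributing_area_km2 ** B_EXP

def is_floodplain(dz_above_stream, contributing_area_km2):
    """Flag a DEM cell whose height above the stream is below h."""
    return dz_above_stream <= flow_height(contributing_area_km2)

# cells: (height above nearest stream [m], drainage area of that stream [km^2])
cells = [(0.5, 1000.0),   # low-lying cell on a big river -> floodplain
         (5.0, 1000.0),   # high cell on a big river -> hillslope
         (0.5, 1.0)]      # low cell on a tiny headwater -> hillslope
flags = [is_floodplain(dz, a) for dz, a in cells]
```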

  11. Large-scale network-level processes during entrainment

    PubMed Central

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-01-01

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557
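
    The "global density" measure used in this connectivity analysis is simply the fraction of possible edges that survive a threshold on the (symmetric) connectivity matrix. A minimal sketch, with a random matrix standing in for MEG source-level connectivity:

```python
import numpy as np

def global_density(conn, threshold):
    """Fraction of possible edges whose weight exceeds the threshold."""
    n = conn.shape[0]
    iu = np.triu_indices(n, k=1)             # upper triangle, no self-loops
    edges = np.count_nonzero(conn[iu] > threshold)
    return edges / len(iu[0])

rng = np.random.default_rng(3)
c = rng.random((64, 64))
conn = (c + c.T) / 2                          # symmetrize toy connectivity
dens = global_density(conn, threshold=0.5)    # near 0.5 for uniform weights
```

A stimulation-induced drop in this number, restricted to alpha-band connections of visual sources, is the "disconnection" effect reported above.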

  12. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
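    The logarithmically mapped miniature space can be illustrated with a simple radial compression function (parameter names and ranges are hypothetical, not the paper's):

```python
import math

def log_miniature(r, r_min=1.0, r_max=1e26, out_max=1.0):
    """Map a radial distance r (e.g. metres, from human to cosmological
    scales) into a miniature space of radius out_max via logarithmic
    compression, so widely separated regions remain simultaneously
    visible in the overview mode."""
    r = min(max(r, r_min), r_max)          # clamp to the mapped range
    return out_max * math.log(r / r_min) / math.log(r_max / r_min)

x_mid = log_miniature(1e13)  # a mid-range scale lands mid-way in the miniature
```

    Equal ratios of distance map to equal steps in the miniature, which is what lets planetary and galactic landmarks coexist in one overview.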

  13. Large-scale network-level processes during entrainment.

    PubMed

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-03-15

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4-30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band "disconnecting" visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557

  14. Exploring Cloud Computing for Large-scale Scientific Applications

    SciTech Connect

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require just a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  15. Very sparse LSSVM reductions for large-scale data.

    PubMed

    Mall, Raghvendra; Suykens, Johan A K

    2015-05-01

    Least squares support vector machines (LSSVMs) have been widely applied for classification and regression with performance comparable to SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. The primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using a Nyström approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity. This is done by means of L0-norm-based reductions that iteratively sparsify LSSVM and PFS-LSSVM models. The exact choice of the cardinality of the initial PV set is then not important, as the final model is highly sparse. The proposed method overcomes the problem of memory constraints and high computational costs, resulting in highly sparse reductions of LSSVM models. The approximations of the two models allow them to scale to large-scale datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.
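    For context, a standard LSSVM classifier is trained by solving a single linear system instead of the SVM quadratic program; the following self-contained sketch (RBF kernel, illustrative hyperparameters) also shows why the model lacks sparsity: every training point receives a nonzero alpha.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LSSVM linear system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]
    with an RBF kernel. gamma is the regularization constant."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                                 # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, sigma, X_new):
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-sq / (2 * sigma ** 2)) @ alpha + b)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, b, alpha, 1.0, X)
```

    Unlike an SVM, no alpha is driven to zero here, which is exactly the non-sparsity the paper's reductions address.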

  16. Large-scale anisotropy in stably stratified rotating flows.

    PubMed

    Marino, R; Mininni, P D; Rosenberg, D L; Pouquet, A

    2014-08-01

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the kinetic energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with ~k_⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.
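    The power-law compatibility check mentioned in the abstract is typically a slope fit in log-log space; a minimal sketch:

```python
import numpy as np

def spectral_slope(k, E):
    """Least-squares slope of a spectrum in log-log space, the standard
    check for power-law behavior such as E(k_perp) ~ k_perp^(-5/3)."""
    return np.polyfit(np.log(k), np.log(E), 1)[0]

# A synthetic spectrum that follows the -5/3 law exactly:
k = np.array([2.0, 4.0, 8.0, 16.0])
slope = spectral_slope(k, k ** (-5.0 / 3.0))
```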

  17. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  18. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  19. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
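    Maestro's benchmarking goal, picking a VM type that balances performance and cost, can be illustrated with a toy selection rule (fields and numbers are invented, not Maestro's actual logic):

```python
def pick_vm(instances, budget_per_hour):
    """Among benchmarked VM types within the hourly budget, pick the one
    with the best simulated-events-per-dollar rate."""
    feasible = [i for i in instances if i["price"] <= budget_per_hour]
    return max(feasible, key=lambda i: i["events_per_hour"] / i["price"])

vms = [
    {"name": "small",  "price": 0.10, "events_per_hour": 1000},
    {"name": "medium", "price": 0.40, "events_per_hour": 6000},
    {"name": "large",  "price": 2.00, "events_per_hour": 12000},
]
best = pick_vm(vms, budget_per_hour=1.0)  # "large" exceeds the budget
```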

  20. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    PubMed

    Alexakis, Alexandros

    2013-02-22

    High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms^3/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the scaling ε ∝ U_rms^2 B_rms/ℓ, implying that at this point magnetic shear becomes more efficient than the velocity fluctuations at cascading the energy. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3) while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2) as observed in the solar wind.
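    The two dissipation scalings contrasted above can be written as a small helper (symbols in velocity units; a sketch, not the paper's code):

```python
def dissipation_rates(u_rms, b_rms, ell):
    """Return the two candidate dissipation-rate estimates from the abstract:
    the hydrodynamic scaling eps ~ U_rms^3 / l, and the magnetically
    dominated scaling eps ~ U_rms^2 * B_rms / l (B_rms in velocity units).
    The crossover between regimes occurs around B_rms ~ U_rms."""
    eps_hydro = u_rms ** 3 / ell
    eps_mag = u_rms ** 2 * b_rms / ell
    return eps_hydro, eps_mag

eps_u, eps_b = dissipation_rates(2.0, 4.0, 1.0)
```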

  1. The combustion behavior of large scale lithium titanate battery.

    PubMed

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems remain a major obstacle to the large-scale application of lithium batteries. However, knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries under different states of charge (SOC) were heated to fire. The flame size variation is depicted to analyze the combustion behavior directly. The mass loss rate, temperature and heat release rate are used to analyze the combustion behavior in greater depth. Based on the observed phenomena, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with a sudden ejection of smoke. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layer structure to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells. But the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. The internal short circuit and the Li+ distribution are analyzed to be the main causes of the difference. PMID:25586064

  2. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7; it will then survey ~1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  3. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeating the execution many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed scenarios in later executions while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on Tokyo traffic simulation, exact-differential simulation shows a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, over whole simulation.
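    The exact-differential idea, recomputing only what a scenario change affects while guaranteeing identical results, can be sketched with a cache keyed by per-region inputs (the region partition and `step` function are illustrative; the real algorithm handles inter-region coupling and event ordering):

```python
def run_scenarios(base_inputs, scenarios, step):
    """Run a base scenario plus variations; per-region results are cached,
    so a later scenario recomputes only the regions whose inputs changed,
    while results stay identical to a full re-run (here trivially, because
    `step` is deterministic per region)."""
    cache = {}
    results = []
    for scenario in scenarios:
        inputs = dict(base_inputs, **scenario)   # apply the scenario's changes
        out = {}
        for region, value in inputs.items():
            key = (region, value)
            if key not in cache:                 # recompute only changed regions
                cache[key] = step(region, value)
            out[region] = cache[key]
        results.append(out)
    return results

calls = []
def double(region, value):
    calls.append(region)                          # track how often we recompute
    return value * 2

# Second scenario changes only region "B", so only "B" is re-simulated:
res = run_scenarios({"A": 1, "B": 2}, [{}, {"B": 3}], double)
```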

  4. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  5. Systematic renormalization of the effective theory of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-05-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contributions of short distance perturbations to the large scale density contrast δ and momentum density π(k) scale as k^2 and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.
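    The conservation argument behind the second result can be stated compactly: denoting by δ_s and π_s the long-wavelength contributions of short-distance perturbations,

```latex
\delta_s(\mathbf{k}) \;\propto\; k^2 , \qquad
\pi_s(\mathbf{k}) \;\propto\; k \qquad (k \to 0),
```

    since mass conservation forbids k^0 and k^1 terms in the density contrast sourced by short modes, while momentum conservation forbids a k^0 term in the momentum density.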

  6. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values themselves carry information about peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
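    A toy illustration (not the SALPS model) of why missingness is informative: treating missing intensities as left-censored at a detection limit, rather than dropping them, shifts the abundance estimate downward:

```python
def detection_aware_mean(values, detection_limit):
    """Mean peptide abundance where missing intensities (None) are treated
    as left-censored at the detection limit instead of being dropped.
    A deliberately simple stand-in for a proper censored-likelihood model."""
    filled = [detection_limit if v is None else v for v in values]
    return sum(filled) / len(filled)

# Dropping the missing value overestimates abundance:
naive = sum(v for v in [10.0, None, 14.0] if v is not None) / 2
aware = detection_aware_mean([10.0, None, 14.0], detection_limit=2.0)
```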

  7. The combustion behavior of large scale lithium titanate battery

    PubMed Central

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems remain a major obstacle to the large-scale application of lithium batteries. However, knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries under different states of charge (SOC) were heated to fire. The flame size variation is depicted to analyze the combustion behavior directly. The mass loss rate, temperature and heat release rate are used to analyze the combustion behavior in greater depth. Based on the observed phenomena, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with a sudden ejection of smoke. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layer structure to a spinel structure. The critical ignition temperatures are 112–121 °C on the anode tab and 139–147 °C on the upper surface for all cells. But the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. The internal short circuit and the Li+ distribution are analyzed to be the main causes of the difference. PMID:25586064

  8. Large scale reconstruction of the solar coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Amari, T.; Aly, J.-J.; Chopin, P.; Canou, A.; Mikic, Z.

    2014-10-01

    It is now becoming necessary to access the global magnetic structure of the solar low corona at large scale in order to understand its physics, and more particularly the conditions of energization of the magnetic fields and the multiple connections between distant active regions (ARs) which may trigger eruptive events in an almost coordinated way. Various vector magnetographs, either on board spacecraft or ground-based, currently make it possible to obtain vector synoptic maps, composite magnetograms made of multiple interacting ARs, and full disk magnetograms. We present a method recently developed for reconstructing the global solar coronal magnetic field as a nonlinear force-free magnetic field in spherical geometry, generalizing our previous results in Cartesian geometry. This method is implemented in the new code XTRAPOLS, which thus appears as an extension of our active region scale code XTRAPOL. We apply our method by performing a reconstruction at a specific time for which we have a set of composite data consisting of a vector magnetogram provided by SDO/HMI, embedded in a larger full disk vector magnetogram provided by the same instrument, finally embedded in a synoptic map provided by SOLIS. It turns out to be possible to access the large scale structure of the corona and its energetic contents, and also the AR scale, at which we recover the presence of a twisted flux rope in equilibrium.
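    For reference, the nonlinear force-free field model that such codes solve satisfies

```latex
\nabla \times \mathbf{B} = \alpha(\mathbf{r})\,\mathbf{B}, \qquad
\mathbf{B} \cdot \nabla \alpha = 0, \qquad
\nabla \cdot \mathbf{B} = 0,
```

    i.e. the torsion parameter α is constant along each field line but varies between field lines, with the vector magnetogram data supplying the photospheric boundary conditions.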

  9. THE LARGE-SCALE MAGNETIC FIELDS OF THIN ACCRETION DISKS

    SciTech Connect

    Cao Xinwu; Spruit, Hendrik C. E-mail: henk@mpa-garching.mpg.de

    2013-03-10

    Large-scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large-scale field is the advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared with the inward accretion velocity in a geometrically thin accretion disk if the value of the Prandtl number P_m is around unity. In this work, we revisit this problem considering the angular momentum of the disk to be removed predominantly by the magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points, one at low field strengths corresponding to a midplane plasma-beta of order several hundred, and one for strong accreted fields, β ≈ 1. We surmise that the first is relevant for the accretion of weak, possibly external, fields through the outer parts of the disk, while the latter could explain the tendency, observed in full three-dimensional numerical simulations, of strong flux bundles at the centers of disks to stay confined in spite of strong magnetorotational instability turbulence surrounding them.

  10. Online education in a large scale rehabilitation institution.

    PubMed

    Mazzoleni, M Cristina; Rognoni, Carla; Pagani, Marco; Imbriani, Marcello

    2012-01-01

    Large-scale, multiple-venue institutions face problems when delivering education to their healthcare staff. The present study is aimed at evaluating the feasibility of relying on e-learning for at least part of the training of the Salvatore Maugeri Foundation healthcare staff. The paper reports the results of delivering e-learning courses to the personnel over a span of 7 months in order to assess the attitude towards online course attendance, the proportion between administered online education and administered traditional education, and the economic sustainability of the online education delivery process. 37% of the total healthcare staff attended online courses, and nurses, at 46%, proved to be the most active. The ratios between total number of credits and total number of courses for online and traditional education are 18268/5 and 20354/96, respectively. These results point out that e-learning is not at all a niche tool used (or usable) by a limited number of people. Economic sustainability, assessed via personnel work-hour savings, has been demonstrated. When distance learning is appropriate, online education is an effective, sustainable, and well-accepted means to support and promote healthcare staff education in a large-scale institution. PMID:22491113

  11. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of a single facility currently available at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results in the meeting.

  12. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual stream inlet intended to model potential flight hardware and a single stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, University of Illinois at Urbana-Champaign, and the University of Virginia.

  13. Large-scale Direct Targeting for Drug Repositioning and Discovery

    PubMed Central

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of drug-target direct interactions is vital to drug repositioning and discovery. However, biological identification on a large scale remains challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work, a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify drug direct targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties in an ensemble framework; (2) determining a drug’s affiliation with a target by evaluating the overall (ensemble) similarity rather than judging by a single ligand; and (3) integrating the standardized ensemble similarities (Z scores) by a Bayesian network and a multi-variate kernel approach to make predictions. All these lead WES to predict drug direct targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery. PMID:26155766
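    Step (3) of the abstract standardizes ensemble similarities; a minimal sketch of a Z score for a drug's ensemble of ligand similarities against a background distribution (background parameters and the aggregation rule are illustrative, not WES itself):

```python
import numpy as np

def ensemble_z(similarities, background_mean, background_std):
    """Z score of the mean ensemble similarity against a background of
    random ligand-target similarities: the target is scored by the whole
    ensemble rather than by the single best ligand."""
    s = np.asarray(similarities, dtype=float)
    return (s.mean() - background_mean) / (background_std / np.sqrt(s.size))

# Three ligand similarities against a background N(0.5, 0.1):
z = ensemble_z([0.6, 0.7, 0.8], background_mean=0.5, background_std=0.1)
```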

  14. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    PubMed Central

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-01-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15 nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing. PMID:27072067

  15. Extending large-scale forest inventories to assess urban forests.

    PubMed

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.

  16. The combustion behavior of large scale lithium titanate battery.

    PubMed

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-14

    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(Ni(x)Co(y)Mn(z))O2/Li(4)Ti(5)O(12) batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is used to characterize the combustion behavior directly, while the mass loss rate, temperature, and heat release rate are used to analyze the underlying reactions in depth. Based on these observations, the combustion process is divided into three basic stages; at higher SOC it becomes more complex, with sudden ejections of smoke. The reason is a phase change in the Li(Ni(x)Co(y)Mn(z))O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li(+) distribution are identified as the main causes of these differences.

  18. The effective field theory of cosmological large scale structures

    SciTech Connect

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.

  19. [Privacy and public benefit in using large scale health databases].

    PubMed

    Yamamoto, Ryuichi

    2014-01-01

    In Japan, large-scale health databases, such as the national insurance claims and health checkup database (NDB) and the Japanese Sentinel project, were constructed within a few years. However, legal issues remain in striking an adequate balance between privacy and the public benefit of using such databases. The NDB operates under the act on healthcare for elderly persons, but this act says nothing about using the database for the general public benefit. Researchers who use it are therefore forced to devote great attention to anonymization and information security, which may hamper the research itself. The Japanese Sentinel project is a national project for detecting adverse drug reactions using large-scale distributed clinical databases of large hospitals. Although patients give broad consent in advance for such use for the public good, the use of insufficiently anonymized data is still under discussion. Generally speaking, research for the public benefit does not infringe patients' privacy, but vague and complex requirements in personal-data-protection legislation may obstruct such research. Medical science does not progress without the use of clinical information, so legislation that is simple and clear for both researchers and patients is strongly required. In Japan, a specific act balancing privacy and public benefit is now under discussion. The author recommends that researchers, including those in the field of pharmacology, pay attention to, participate in the discussion of, and make suggestions regarding such acts and regulations.

  20. Power suppression at large scales in string inflation

    SciTech Connect

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar E-mail: sddownes@physics.tamu.edu

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  1. NIC-based Reduction Algorithms for Large-scale Clusters

    SciTech Connect

    Petrini, F; Moody, A T; Fernandez, J; Frachtenberg, E; Panda, D K

    2004-07-30

    Efficient algorithms for reduction operations across a group of processes are crucial for good performance in many large-scale, parallel scientific applications. While previous algorithms limit processing to the host CPU, we utilize the programmable processors and local memory available on modern cluster network interface cards (NICs) to explore a new dimension in the design of reduction algorithms. In this paper, we present the benefits and challenges, design issues and solutions, analytical models, and experimental evaluations of a family of NIC-based reduction algorithms. Performance and scalability evaluations were conducted on the ASCI Linux Cluster (ALC), a 960-node, 1920-processor machine at Lawrence Livermore National Laboratory, which uses the Quadrics QsNet interconnect. We find NIC-based reductions on modern interconnects to be more efficient than host-based implementations in both scalability and consistency. In particular, at large scale (1812 processes), NIC-based reductions of small integer and floating-point arrays provided respective speedups of 121% and 39% over the host-based, production-level MPI implementation.
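    The communication pattern of such a reduction can be sketched as a binomial tree. The following is an illustrative, sequential simulation of that schedule (hypothetical helper; not the paper's NIC firmware or MPI code):

```python
def tree_reduce(values, op):
    """Binomial-tree reduction over per-process values, simulated
    sequentially: at each step, rank i combines with rank i + step."""
    vals = list(values)
    step = 1
    while step < len(vals):
        for i in range(0, len(vals) - step, 2 * step):
            vals[i] = op(vals[i], vals[i + step])  # partner exchange
        step *= 2
    return vals[0]

# Sum across 8 simulated "processes" in log2(8) = 3 combining steps.
total = tree_reduce(range(8), lambda a, b: a + b)
```

    On real hardware each level of the tree runs in parallel, so latency grows logarithmically with the process count; offloading the combining operation to the NIC frees the host CPU during these steps.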

  2. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    Large-scale mapping of limited areas, especially cultural heritage sites, poses particular challenges. Optical and non-optical sensors, such as LiDAR units, are now developed to sizes and weights that can be lifted by such platforms. At the same time there is increasing emphasis on solutions that give users access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase in available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of UAS technologies today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is also conducted and presented. As a test data set, we use a rich optical and thermal data set from both fixed-wing and multi-rotor platforms over an archaeological excavation with adverse height variations, using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  3. Halo detection via large-scale Bayesian inference

    NASA Astrophysics Data System (ADS)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.
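    The final step described above, summing over density-field realisations to build a detection-probability map, amounts to a Monte Carlo marginalisation. A minimal sketch with entirely hypothetical inputs (not the HADES pipeline):

```python
def detection_probability(amplitudes, p_detect_given_amplitude):
    """Average a conditional halo-detection probability over posterior
    density-field realisations (Blackwell-Rao-style marginalisation;
    hypothetical helper)."""
    return sum(p_detect_given_amplitude(a) for a in amplitudes) / len(amplitudes)

# Toy conditional: a halo above the mass threshold is "detected"
# whenever the local density amplitude exceeds 0.5.
step = lambda a: 1.0 if a > 0.5 else 0.0
p = detection_probability([0.2, 0.6, 0.8, 0.4], step)
```

    In practice the conditional probability would be calibrated from a cosmological simulation, as the abstract describes, and the average taken per map voxel.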

  4. Large scale CMB anomalies from thawing cosmic strings

    NASA Astrophysics Data System (ADS)

    Ringeval, Christophe; Yamauchi, Daisuke; Yokoyama, Jun'ichi; Bouchet, François R.

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation, in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi-static and evade almost all of the previously derived constraints on their tension while being able to source large-scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all-sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension GU = O(1) × 10^-6 match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large-scale anomalies.

  5. Very large-scale motions in a turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Lee, Jae Hwa; Jang, Seong Jae; Sung, Hyung Jin

    2011-11-01

    Direct numerical simulation of a turbulent pipe flow with ReD=35000 was performed to investigate the spatially coherent structures associated with very large-scale motions. The corresponding friction Reynolds number, based on pipe radius R, is R+=934, and the computational domain length is 30R. The computed mean flow statistics agree well with previous DNS data at ReD=44000 and 24000. Inspection of the instantaneous fields and the two-point correlation of the streamwise velocity fluctuations showed that very long meandering motions exceeding 25R exist in the logarithmic and wake regions, and that their streamwise length scale increases almost linearly up to y/R ~ 0.3, while the structures in the turbulent boundary layer only reach up to the edge of the log layer. Time-resolved instantaneous fields revealed that the hairpin packet-like structures grow with continuous stretching along the streamwise direction and create the very large-scale structures with meandering in the spanwise direction, consistent with the previous conceptual model of Kim & Adrian (1999). This work was supported by the Creative Research Initiatives of NRF/MEST of Korea (No. 2011-0000423).

  6. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss challenges in such IP-over-optical multicasting for video delivery.

  7. A study of synthetic large scales in turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Duvvuri, Subrahmanyam; Luhar, Mitul; Barnard, Casey; Sheplak, Mark; McKeon, Beverley

    2013-11-01

    Synthetic spanwise-constant spatio-temporal disturbances are excited in a turbulent boundary layer through a spatially impulsive patch of dynamic wall-roughness. The downstream flow response is studied through hot wire anemometry, pressure measurements at the wall and direct measurements of wall-shear-stress made using a novel micro-machined capacitive floating element sensor. These measurements are phase-locked to the input perturbation to recover the synthetic large-scale motion and characterize its structure and wall signature. The phase relationship between the synthetic large scale and small scale activity provides further insights into the apparent amplitude modulation effect between them, and the dynamics of wall-bounded turbulent flows in general. Results from these experiments will be discussed in the context of the critical-layer behavior revealed by the resolvent analysis of McKeon & Sharma (J Fluid Mech, 2010), and compared with similar earlier work by Jacobi & McKeon (J Fluid Mech, 2011). Model predictions are shown to be in broad agreement with experiments. The support of AFOSR grant #FA 9550-12-1-0469, Resnick Institute Graduate Research Fellowship (S.D.) and Sandia Graduate Fellowship (C.B.) are gratefully acknowledged.

  8. Large-scale climatic control on European precipitation

    NASA Astrophysics Data System (ADS)

    Lavers, David; Prudhomme, Christel; Hannah, David

    2010-05-01

    Precipitation variability has a significant impact on society. Sectors such as agriculture and water resources management are reliant on predictable and reliable precipitation supply, with extreme variability having potentially adverse socio-economic impacts. Therefore, understanding the climate drivers of precipitation is of human relevance. This research examines the strength, location and seasonality of links between precipitation and large-scale Mean Sea Level Pressure (MSLP) fields across Europe. In particular, we aim to evaluate whether European precipitation is correlated with the same atmospheric circulation patterns or if there is a strong spatial and/or seasonal variation in the strength and location of centres of correlations. The work exploits time series of gridded ERA-40 MSLP on a 2.5°×2.5° grid (0°N-90°N and 90°W-90°E) and gridded European precipitation from the Ensemble project on a 0.5°×0.5° grid (36.25°N-74.25°N and 10.25°W-24.75°E). Monthly Spearman rank correlation analysis was performed between MSLP and precipitation. During winter, a significant MSLP-precipitation correlation dipole pattern exists across Europe. Strong negative (positive) correlations near the Icelandic Low and positive (negative) correlations near the Azores High pressure centres are found in northern (southern) Europe. These correlation dipoles resemble the structure of the North Atlantic Oscillation (NAO). The reversal in the correlation dipole patterns occurs at the latitude of central France, with regions to the north (British Isles, northern France, Scandinavia) having a positive relationship with the NAO, and regions to the south (Italy, Portugal, southern France, Spain) exhibiting a negative relationship with the NAO. In the lee of mountain ranges of eastern Britain and central Sweden, correlation with North Atlantic MSLP is reduced, reflecting a reduced influence of westerly flow on precipitation generation as the mountains act as a barrier to moist
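    The monthly Spearman rank correlation pairs, for each grid cell, an MSLP series with a precipitation series. A self-contained sketch of the statistic itself, using illustrative toy data rather than the study's grids or code:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1  # extend over a tie group
            avg = (i + j) / 2 + 1  # average rank for the group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy monthly series: falling pressure with rising precipitation
# gives a perfect negative rank correlation.
rho = spearman_rho([1010, 1005, 998, 990], [0.2, 0.9, 1.4, 3.1])
```

    The negative rho in the toy example mirrors the southern-Europe case described above, where higher pressure accompanies reduced precipitation.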

  9. Nearly incompressible fluids: hydrodynamics and large scale inhomogeneity.

    PubMed

    Hunana, P; Zank, G P; Shaikh, D

    2006-08-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high-plasma-beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as "nearly incompressible hydrodynamics," is obtained. The nearly incompressible theory was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind, and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest-order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case a time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small length scales), this system of equations converges to the usual incompressible equations, and we therefore use the term "locally incompressible" to describe the equations. This term should be distinguished from the term "nearly incompressible," which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale linearly with Mach number, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of the Mach number.

  10. Climatological context for large-scale coral bleaching

    NASA Astrophysics Data System (ADS)

    Barton, A. D.; Casey, K. S.

    2005-12-01

    Large-scale coral bleaching was first observed in 1979 and has occurred throughout virtually all of the tropics since that time. Severe bleaching may result in the loss of live coral and in a decline of the integrity of the impacted coral reef ecosystem. Despite the extensive scientific research and increased public awareness of coral bleaching, uncertainties remain about the past and future of large-scale coral bleaching. In order to reduce these uncertainties and place large-scale coral bleaching in the longer-term climatological context, specific criteria and methods for using historical sea surface temperature (SST) data to examine coral bleaching-related thermal conditions are proposed by analyzing three 132-year SST reconstructions: ERSST, HadISST1, and GISST2.3b. These methodologies are applied to case studies at Discovery Bay, Jamaica (77.27°W, 18.45°N), Sombrero Reef, Florida, USA (81.11°W, 24.63°N), Academy Bay, Galápagos, Ecuador (90.31°W, 0.74°S), Pearl and Hermes Reef, Northwest Hawaiian Islands, USA (175.83°W, 27.83°N), Midway Island, Northwest Hawaiian Islands, USA (177.37°W, 28.25°N), Davies Reef, Australia (147.68°E, 18.83°S), and North Male Atoll, Maldives (73.35°E, 4.70°N). The results of this study show that (1) the historical SST data provide a useful long-term record of thermal conditions in reef ecosystems, giving important insight into the thermal history of coral reefs, and (2) while coral bleaching and anomalously warm SSTs have occurred over much of the world in recent decades, case studies in the Caribbean, Northwest Hawaiian Islands, and parts of other regions such as the Great Barrier Reef exhibited SST conditions and cumulative thermal stress prior to 1979 that were comparable to those conditions observed during the strong, frequent coral bleaching events since 1979. This climatological context and knowledge of past environmental conditions in reef ecosystems may foster a better understanding of how coral reefs will

  11. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    NASA Astrophysics Data System (ADS)

    Al-shurman, Khaled

    Since 1958, the integrated circuit (IC) has driven great technological development and helped shrink electronic devices. Nowadays, an IC consists of more than a million densely packed transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built into a microchip can double every two years. However, silicon device manufacturing is reaching its physical limits: as circuitry shrinks toward seven nanometers, quantum effects such as tunneling can no longer be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these techniques, chemical vapor deposition (CVD) offers a very convenient method to fabricate large-scale graphene films. Though the CVD method is suitable for large-area growth of graphene, the resulting films must be transferred to silicon-based substrates. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperature contaminates both the substrate that holds the Si CMOS circuitry and the CVD chamber. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on silicon-based platforms (i.e., SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using a 200 nm thick nickel film. Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film.

  12. Penetration of Large Scale Electric Field to Inner Magnetosphere

    NASA Astrophysics Data System (ADS)

    Chen, S. H.; Fok, M. C. H.; Sibeck, D. G.; Wygant, J. R.; Spence, H. E.; Larsen, B.; Reeves, G. D.; Funsten, H. O.

    2015-12-01

    The direct penetration of large-scale global electric fields into the inner magnetosphere is a critical element in controlling how the background thermal plasma populates the radiation belts. These plasma populations provide the source of particles and free energy needed for the generation and growth of various plasma waves that, at critical points of resonance in time and phase space, can scatter or energize radiation belt particles to regulate the flux level of the relativistic electrons in the system. At high geomagnetic activity levels, the distribution of large-scale electric fields serves as an important indicator of how the prevalence of strong wave-particle interactions extends over local times and radial distances. To understand the complex relationship between the global electric fields and thermal plasmas, particularly the effects of the ionospheric dynamo and magnetospheric convection and their relation to geomagnetic activity, we analyze the electric field and cold plasma measurements from the Van Allen Probes over a period of more than two years and simulate a geomagnetic storm event using the Coupled Inner Magnetosphere-Ionosphere Model (CIMI). Our statistical analysis of the Van Allen Probes measurements and the CIMI simulation of the March 17, 2013 storm event indicate that: (1) the global dawn-dusk electric field can penetrate the inner magnetosphere inside the inner belt below L~2; (2) stronger convection occurred in the dusk and midnight sectors than in the noon and dawn sectors; (3) strong convection at multiple locations exists at all activity levels but becomes more complex at higher activity levels; (4) at high activity levels, the strongest convection occurs in the midnight sector at larger distances from the Earth and in the dusk sector at closer distances; (5) two plasma populations with distinct ion temperature isotropies are divided at L-shell ~2, indicating distinct heating mechanisms between the inner and outer radiation belts. (6) CIMI

  13. Large-scale dimension densities for heart rate variability analysis

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen

    2006-04-01

    In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF group (ρ^μ_ls = 0.97 ± 0.02) from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65 ± 0.13; EH, 0.54 ± 0.05; YH, 0.57 ± 0.05; p < 0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in ρ^μ_ls (day, 0.65 ± 0.13; night, 0.66 ± 0.12; n.s.), in contrast to healthy controls (day, 0.54 ± 0.05; night, 0.61 ± 0.05; p = 0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
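    The Grassberger-Procaccia correlation sum underlying the LASDID estimate can be sketched in a few lines. This is a plain illustration without the authors' normalization or boundary correction; function names and test data are hypothetical:

```python
import math

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    point pairs closer than r (Euclidean distance between
    delay-embedded vectors)."""
    n = len(points)
    close = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if math.dist(points[i], points[j]) < r
    )
    return 2.0 * close / (n * (n - 1))

def dimension_estimate(points, r1, r2):
    """Local slope of log C(r) versus log r between scales r1 < r2."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Points on a line: the estimated dimension comes out near 1,
# biased slightly by finite-sample boundary effects.
line = [(i / 100, 0.0) for i in range(100)]
d = dimension_estimate(line, 0.055, 0.105)
```

    At the large scales targeted by the paper, such boundary effects dominate the raw slope, which is exactly what the normalized variant of the algorithm corrects for.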

  14. Modelling large-scale halo bias using the bispectrum

    NASA Astrophysics Data System (ADS)

    Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano

    2012-03-01

We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b1 and b2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b1. We measure the configuration dependence of the halo bispectra Bhhh and reduced bispectra Qhhh for very large-scale k-space triangles. From these data, we constrain b1 and b2, taking into account the full bispectrum covariance matrix. Using lowest-order perturbation theory, we find that for Bhhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Qhhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k1 ~ 0.04 h Mpc^-1. If robust inferences concerning bias are to be drawn
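The quadratic local bias model relates halo and matter overdensities as δ_h ≈ b1 δ + (b2/2) δ². As a toy illustration of recovering b1 and b2 by least squares from smoothed mock fields (not the paper's power-spectrum or bispectrum estimators), one might write:

```python
import numpy as np

def fit_local_bias(delta_m, delta_h):
    """Least-squares fit of delta_h = b1*delta_m + (b2/2)*delta_m**2."""
    A = np.column_stack([delta_m, 0.5 * delta_m**2])
    (b1, b2), *_ = np.linalg.lstsq(A, delta_h, rcond=None)
    return b1, b2

# Mock smoothed fields with known bias parameters plus small noise.
rng = np.random.default_rng(1)
delta_m = rng.normal(0.0, 0.3, 10000)
delta_h = 1.5 * delta_m + 0.5 * 0.6 * delta_m**2 + rng.normal(0.0, 0.01, 10000)
b1, b2 = fit_local_bias(delta_m, delta_h)
```

The paper's point is precisely that, in real space, the recovered (b1, b2) from such fits drift with the smoothing scale used to define the fields.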

  15. Optimization of combinatorial mutagenesis.

    PubMed

    Parker, Andrew S; Griswold, Karl E; Bailey-Kellogg, Chris

    2011-11-01

Protein engineering by combinatorial site-directed mutagenesis evaluates a portion of the sequence space near a target protein, seeking variants with improved properties (e.g., stability, activity, immunogenicity). In order to improve the hit-rate of beneficial variants in such mutagenesis libraries, we develop methods to select optimal positions and the corresponding sets of mutations to be used, in all combinations, in constructing a library for experimental evaluation. Our approach, OCoM (Optimization of Combinatorial Mutagenesis), encompasses both degenerate oligonucleotides and specified point mutations, and can be directed accordingly by requirements of experimental cost and library size. It evaluates the quality of the resulting library by one- and two-body sequence potentials, averaged over the variants. To ensure that it is not simply recapitulating extant sequences, it balances the quality of a library with an explicit evaluation of the novelty of its members. We show that, despite dealing with a combinatorial set of variants, in our approach the resulting library optimization problem is actually isomorphic to single-variant optimization. By the same token, this means that the two-body sequence potential results in an NP-hard optimization problem. We present an efficient dynamic programming algorithm for the one-body case and a practically efficient integer programming approach for the general two-body case. We demonstrate the effectiveness of our approach in designing libraries for three different case-study proteins targeted by previous combinatorial libraries: a green fluorescent protein, a cytochrome P450, and a beta-lactamase. We found that OCoM worked quite efficiently in practice, requiring only 1 hour even for the massive design problem of selecting 18 mutations to generate 10⁷ variants of a 443-residue P450. We demonstrate the general ability of OCoM in enabling the protein engineer to explore and evaluate trade-offs between quality and
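The abstract describes the one-body dynamic program only abstractly. As a hedged sketch of the flavor of such a program (the objective, option names, and constraint below are illustrative, not OCoM's actual formulation), one can maximize a summed average one-body score over per-position mutation sets, subject to a cap on library size (the product of set sizes):

```python
def optimize_one_body(position_options, max_library_size):
    """position_options: per position, a list of (label, set_size, avg_score)
    choices, each position including a wild-type-only choice like ("wt", 1, 0.0).
    Maximizes total score subject to product(set_size) <= max_library_size."""
    states = {1: (0.0, [])}                     # library size -> (best score, choices)
    for options in position_options:
        new_states = {}
        for size, (score, choices) in states.items():
            for label, k, s in options:
                ns = size * k
                if ns > max_library_size:
                    continue                    # would exceed the library-size budget
                cand = (score + s, choices + [label])
                if ns not in new_states or cand[0] > new_states[ns][0]:
                    new_states[ns] = cand
        states = new_states
    return max(states.values())                 # (score, per-position labels)

score, picks = optimize_one_body(
    [[("wt", 1, 0.0), ("AV", 2, 1.0)],          # position 1: keep wt or allow {A,V}
     [("wt", 1, 0.0), ("DEK", 3, 1.2)]],        # position 2: keep wt or allow {D,E,K}
    max_library_size=4)
```

Here allowing both mutation sets would give a library of 6 variants, over the budget of 4, so the program keeps wild type at position 1 and takes the higher-scoring set at position 2.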

  16. Applications of large-scale density functional theory in biology.

    PubMed

    Cole, Daniel J; Hine, Nicholas D M

    2016-10-01

Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality. PMID:27494095

  17. Large scale Hugoniot material properties for Danby Marble

    SciTech Connect

    Rinehart, E.J.

    1993-11-01

This paper presents the results of experiments simulating underground nuclear testing, carried out using the HYDROPLUS methodology for yield verification of non-standard tests. The objective of this test series was to demonstrate the accuracy of stress and velocity measurements in hard, low-porosity rock, to obtain comparisons of large-scale material properties with those obtained from laboratory testing of the same material, and to address the problems posed by a material having a clear precursor wave preceding the main shock wave. The test series consisted of three individual experimental tests. The first established material properties of the Danby marble selected for use in the experiments. The second and third tests looked at stress and velocity gage errors obtained when gages were placed in boreholes and grouted into place.

  18. Large-scale structure in f(T) gravity

    SciTech Connect

    Li Baojiu; Sotiriou, Thomas P.; Barrow, John D.

    2011-05-15

In this work we study the cosmology of the general f(T) gravity theory. We express the modified Einstein equations using covariant quantities, and derive the gauge-invariant perturbation equations in covariant form. We consider a specific choice of f(T), designed to explain the observed late-time accelerating cosmic expansion without including an exotic dark energy component. Our numerical solution shows that the extra degree of freedom of such f(T) gravity models generally decays as one goes to smaller scales, and consequently its effects on scales such as galaxies and galaxy clusters are small. But on large scales, this degree of freedom can produce large deviations from the standard {Lambda}CDM scenario, leading to severe constraints on f(T) gravity models as an explanation of the cosmic acceleration.

  19. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect

Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
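The sampling idea can be illustrated on the smallest stochastic program with recourse, the newsvendor problem: replace the demand distribution by Monte Carlo samples and solve the resulting sample-average problem, here via its critical-fractile solution. This is a toy illustration of sampling-based stochastic programming, not the paper's decomposition and importance-sampling scheme:

```python
def saa_newsvendor(demand_samples, price, cost):
    """Order quantity maximizing sampled expected profit: the critical
    fractile (price - cost) / price of the empirical demand distribution."""
    fractile = (price - cost) / price
    s = sorted(demand_samples)
    return s[min(len(s) - 1, int(fractile * len(s)))]

# With demands sampled uniformly over 0..99 and a 50% margin, order the median.
q = saa_newsvendor(list(range(100)), price=2.0, cost=1.0)
```

The full method handles general two-stage recourse problems by decomposition, with importance sampling reducing the number of scenario subproblems that must be solved.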

  20. Shock waves in the large scale structure of the universe

    NASA Astrophysics Data System (ADS)

    Ryu, Dongsu

    Cosmological shock waves result from the supersonic flow motions induced by hierarchical formation of nonlinear structures in the universe. Like most astrophysical shocks, they are collisionless shocks which form in the tenuous intergalactic plasma via collective electromagnetic interactions between particles and electromagnetic fields. The gravitational energy released during the structure formation is transferred by these shocks to the intergalactic gas in several different forms. In addition to the gas entropy, cosmic rays are produced via diffusive shock acceleration, magnetic fields are generated via the Biermann battery mechanism and Weibel instability as well as the Bell-Lucek mechanism, and vorticity is generated at curved shocks. Here we review the properties, roles, and consequences of the shock waves in the context of the large scale structure of the universe.

  1. Shock Waves in the Large Scale Structure of the Universe

    NASA Astrophysics Data System (ADS)

    Ryu, Dongsu

    2008-04-01

    Cosmological shock waves result from the supersonic flow motions induced by hierarchical formation of nonlinear structures in the universe. Like most astrophysical shocks, they are collisionless shocks which form in the tenuous intergalactic plasma via collective electromagnetic interactions between particles and electromagnetic fields. The gravitational energy released during the structure formation is transferred by these shocks to the intergalactic gas in several different forms: in addition to the gas entropy, cosmic rays are produced via diffusive shock acceleration, magnetic fields are generated via the Biermann battery mechanism and Weibel instability, and vorticity is generated at curved shocks. Here I review the properties, roles, and consequences of the shock waves in the context of the large scale structure of the universe.

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TB of data, and the 3-D post-processing was done in parallel.

  3. Performance evaluation of large-scale photovoltaic systems

    SciTech Connect

    Fuentes, M.K.; Fernandez, J.P.

    1984-05-01

Over the past several years, the US Department of Energy has fielded a number of large-scale photovoltaic (PV) systems as initial experiments for assessing the performance of various PV designs. The array power and power conditioning subsystem (PCS) data have been analyzed from the following six sites: Sky Harbor Airport, Dallas-Fort Worth Airport, Newman Power Station, Lovington Shopping Center, Beverly High School, and the Oklahoma Center for Science and Arts. For all these systems, the peak power was determined to be within 67% of the rated peak. The differences between the actual peak power and rated peak power have been attributed to a number of factors, including module failures and array degradation. The peak PCS efficiencies range from 88% to 93%.

  4. Identification of Extremely Large Scale Structures in SDSS-III

    NASA Astrophysics Data System (ADS)

    Sankhyayan, Shishir; Bagchi, J.; Sarkar, P.; Sahni, V.; Jacob, J.

    2016-10-01

We have initiated a search for and detailed study of large scale structures present in the universe using galaxy redshift surveys. Taking a volume-limited sample of galaxies from Sloan Digital Sky Survey III, we find very large structures even beyond a redshift of 0.2. One of the structures extends over more than 600 Mpc, which raises questions about the homogeneity scale of the universe. The shapes of adjacent voids and structures appear correlated, which supports the physical existence of the observed structures. Further observational support comes from the correlation of galaxy clusters and the QSO distribution with the density peaks of the volume-limited galaxy sample.

  5. Large-scale coherent structures as drivers of combustion instability

    SciTech Connect

    Schadow, K.C.; Gutmark, E.; Parr, T.P.; Parr, D.M.; Wilson, K.J.

    1987-06-01

The role of flow coherent structures as drivers of combustion instabilities in a dump combustor was studied. Results of nonreacting tests in air and water flows as well as combustion experiments in a diffusion flame and dump combustor are discussed to provide insight into the generation process of large-scale structures in the combustor flow and their interaction with the combustion process. It is shown that the flow structures, or vortices, are formed by interaction between the flow instabilities and the chamber acoustic resonance. When these vortices dominate the reacting flow, the combustion is confined to their cores, leading to periodic heat release, which may drive high-amplitude pressure oscillations. Such oscillations are characteristic of combustion instabilities under certain operating conditions. The basic understanding of the interaction between flow dynamics and the combustion process opens up the possibility for rational control of combustion-induced pressure oscillations. 42 references.

  6. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    PubMed

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime. PMID:25383258

  7. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  8. Possible implications of large scale radiation processing of food

    NASA Astrophysics Data System (ADS)

    Zagórski, Z. P.

Large scale irradiation has been discussed in terms of the share of processing cost in the final value of the improved product. A second factor is the saturation of the market with the new product. In successful projects, the share of irradiation cost is low and the demand for the better product is met. The limited availability of sources makes even modest market saturation with correctly irradiated food difficult. Implementing food preservation requires a deliberate selection of those kinds of food that satisfy all conditions, i.e., acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours foods that can be processed with low-energy electron beams. These conditions for successful processing are best fulfilled by dry foods, expensive spices in particular.

  9. Large scale protein separations: engineering aspects of chromatography.

    PubMed

    Chisti, Y; Moo-Young, M

    1990-01-01

The engineering considerations common to large scale chromatographic purification of proteins are reviewed. A discussion of industrial chromatography fundamentals is followed by the aspects which affect the scale of separation. The separation column geometry, the effect of the main operational parameters on separation performance, and the physical characteristics of column packing are treated. Throughout, the emphasis is on ion exchange and size exclusion techniques, which together constitute the major portion of commercial chromatographic protein purifications. In all cases, the state of current technology is examined and areas in need of further development are noted. The physico-chemical advances now underway in chromatographic separation of biopolymers should ensure a substantially enhanced role for these techniques in the industrial production of products of new biotechnology.

  10. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods are accurate only under a priori assumptions about network properties and with predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
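The abstract does not reproduce the authors' quality function, so as a hedged sketch of greedy local expansion in general (using a simple internal-edge-fraction quality as an illustrative stand-in for their function):

```python
def local_expansion(graph, seed):
    """Greedily grow a community from `seed`, adding the neighbor that most
    improves the fraction of edge endpoints staying inside the community."""
    community = {seed}

    def quality(c):
        internal = sum(1 for u in c for v in graph[u] if v in c) / 2
        boundary = sum(1 for u in c for v in graph[u] if v not in c)
        return internal / (internal + boundary) if internal + boundary else 0.0

    while True:
        frontier = {v for u in community for v in graph[u]} - community
        best, best_q = None, quality(community)
        for v in sorted(frontier):               # sorted for determinism
            q = quality(community | {v})
            if q > best_q:
                best, best_q = v, q
        if best is None:
            return community                     # no neighbor improves quality
        community.add(best)

# Two triangles joined by the single edge 2-3: expansion from 0 stops at one triangle.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
found = local_expansion(graph, 0)
```

The stopping rule is what makes the method local: only the community and its frontier are ever touched, so no global view of the network is required.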

  11. Atypical Behavior Identification in Large Scale Network Traffic

    SciTech Connect

    Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.

    2011-10-23

    Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.

  12. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect

    Rajamony, Ram

    2014-11-20

This report details the progress made on the ASCR funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  13. Computational solutions to large-scale data management and analysis.

    PubMed

    Schadt, Eric E; Linderman, Michael D; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P

    2010-09-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist - such as cloud and heterogeneous computing - to successfully tackle our big data problems.

  14. Scalable parallel distance field construction for large-scale applications

    SciTech Connect

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
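The parallel distance tree itself is beyond the scope of an abstract. As a minimal serial illustration of the underlying task only, a multi-source BFS computes a (Manhattan-metric) distance field from a set of "surface" cells on a 2D grid:

```python
from collections import deque

def distance_field(shape, sources):
    """Multi-source BFS distance on a 4-connected 2D grid (Manhattan metric)."""
    h, w = shape
    dist = [[-1] * w for _ in range(h)]          # -1 marks "not yet reached"
    queue = deque()
    for i, j in sources:
        dist[i][j] = 0
        queue.append((i, j))
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] == -1:
                dist[ni][nj] = dist[i][j] + 1
                queue.append((ni, nj))
    return dist

# Distance from the single surface cell (0, 0) on a 3x3 grid.
dist = distance_field((3, 3), [(0, 0)])
```

The paper's contribution is doing the 3D, Euclidean, time-varying version of this at scale across distributed memory, which this sketch does not attempt.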

  15. Large-scale testing of structural clay tile infilled frames

    SciTech Connect

    Flanagan, R.D.; Bennett, R.M.

    1993-03-18

    A summary of large-scale cyclic static tests of structural clay tile infilled frames is given. In-plane racking tests examined the effects of varying frame stiffness, varying infill size, infill offset from frame centerline, and single and double wythe infill construction. Out-of-plane tests examined infilled frame response to inertial loadings and inter-story drift loadings. Sequential in-plane and out-of-plane loadings were performed to determine the effects of orthogonal damage and degradation on both strength and stiffness. A combined out-of-plane inertial and in-plane racking test was conducted to investigate the interaction of multi-directional loading. To determine constitutive properties of the infills, prism compression, mortar compression and various unit tile tests were performed.

  16. Hierarchical features of large-scale cortical connectivity

    NASA Astrophysics Data System (ADS)

    da F. Costa, L.; Sporns, O.

    2005-12-01

    The analysis of complex networks has revealed patterns of organization in a variety of natural and artificial systems, including neuronal networks of the brain at multiple scales. In this paper, we describe a novel analysis of the large-scale connectivity between regions of the mammalian cerebral cortex, utilizing a set of hierarchical measurements proposed recently. We examine previously identified functional clusters of brain regions in macaque visual cortex and cat cortex and find significant differences between such clusters in terms of several hierarchical measures, revealing differences in how these clusters are embedded in the overall cortical architecture. For example, the ventral cluster of visual cortex maintains structurally more segregated, less divergent connections than the dorsal cluster, which may point to functionally different roles of their constituent brain regions.

  17. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications. PMID:26357251

  18. Large scale anisotropy studies with the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Bonino, R.

    2012-11-01

Completed at the end of 2008, the Pierre Auger Observatory has been continuously operating for more than seven years. We present here the analysis techniques and the results of the search for large scale anisotropies in the sky distribution of cosmic rays, reporting both the phase and the amplitude measurements of the first harmonic modulation in right ascension in different energy ranges above 2.5×10^17 eV. Thanks to the collected statistics, a sensitivity of 1% at EeV energies can be reached. No significant anisotropies have been observed; upper limits on the amplitudes have been derived and are compared here with the results of previous experiments and with some theoretical expectations.
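The first-harmonic analysis in right ascension referenced here is a standard Rayleigh-type estimate. A minimal sketch of the amplitude and phase of the modulation from a list of event right ascensions (in radians; weighting and exposure corrections used in the real analysis are omitted):

```python
import math

def first_harmonic(right_ascensions):
    """First-harmonic amplitude and phase of an event-rate modulation in RA."""
    n = len(right_ascensions)
    a = 2.0 / n * sum(math.cos(x) for x in right_ascensions)
    b = 2.0 / n * sum(math.sin(x) for x in right_ascensions)
    return math.hypot(a, b), math.atan2(b, a)   # (amplitude r, phase phi)

# Two events at opposite right ascensions cancel: zero first-harmonic amplitude.
amp, phase = first_harmonic([0.0, math.pi])
```

For an isotropic sky the amplitude fluctuates around zero, which is why the analysis reports upper limits when no significant modulation is found.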

  19. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    SciTech Connect

    Kundu, Soumya; Anghel, Marian

    2015-11-08

We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  20. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  1. Automated Sequence Preprocessing in a Large-Scale Sequencing Environment

    PubMed Central

    Wendl, Michael C.; Dear, Simon; Hodgson, Dave; Hillier, LaDeana

    1998-01-01

A software system for transforming fragments from four-color fluorescence-based gel electrophoresis experiments into assembled sequence is described. It has been developed for large-scale processing of all trace data, including shotgun and finishing reads, regardless of clone origin. Design considerations are discussed in detail, as are programming implementation and graphic tools. The importance of input validation, record tracking, and use of base quality values is emphasized. Several quality analysis metrics are proposed and applied to sample results from recently sequenced clones. Such quantities prove to be a valuable aid in evaluating modifications of sequencing protocol. The system is in full production use at both the Genome Sequencing Center and the Sanger Centre, whose combined production is ∼100,000 sequencing reads per week. PMID:9750196

  2. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank, on which the system is intended to work as seamlessly as possible. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  3. A large-scale crop protection bioassay data set.

    PubMed

    Gaulton, Anna; Kale, Namrata; van Westen, Gerard J P; Bellis, Louisa J; Bento, A Patrícia; Davies, Mark; Hersey, Anne; Papadatos, George; Forster, Mark; Wege, Philip; Overington, John P

    2015-01-01

    ChEMBL is a large-scale drug discovery database containing bioactivity information primarily extracted from scientific literature. Due to the medicinal chemistry focus of the journals from which data are extracted, the data are currently of most direct value in the field of human health research. However, many of the scientific use-cases for the current data set are equally applicable in other fields, such as crop protection research: for example, identification of chemical scaffolds active against a particular target or endpoint, the de-convolution of the potential targets of a phenotypic assay, or the potential targets/pathways for safety liabilities. In order to broaden the applicability of the ChEMBL database and allow more widespread use in crop protection research, an extensive data set of bioactivity data of insecticidal, fungicidal and herbicidal compounds and assays was collated and added to the database.

  4. U-shaped Vortex Structures in Large Scale Cloud Cavitation

    NASA Astrophysics Data System (ADS)

    Cao, Yantao; Peng, Xiaoxing; Xu, Lianghao; Hong, Fangwen

    2015-12-01

    The control of cloud cavitation, especially large scale cloud cavitation (LSCC), is a persistent focus of cavitation research. However, little is known about the evolution of cloud cavitation, since it is associated with turbulence and vortex flow. In this article, the structure of cloud cavitation shed by sheet cavitation around different hydrofoils and a wedge was observed in detail with a high-speed camera (HSC). It was found that U-shaped vortex structures were always present in the development of LSCC. The results indicated that LSCC evolution was related to this kind of vortex structure, which may be a universal characteristic of LSCC. The vortex strength of the U-shaped structures over one cycle was then analyzed using numerical results.

  5. Statistics of Caustics in Large-Scale Structure Formation

    NASA Astrophysics Data System (ADS)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
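
    In the Zel'dovich approximation underlying this analysis, particles move ballistically along their initial displacement field, and caustics form where the Lagrangian-to-Eulerian map folds, i.e. where its Jacobian vanishes. A schematic statement (sign conventions for the displacement potential vary between references):

```latex
% Zel'dovich map: Lagrangian position q -> Eulerian position x at growth factor D(t)
x(q, t) = q - D(t)\, \nabla_q \Psi(q)

% Caustic condition: the Jacobian of the map vanishes along the eigenvalues
% \lambda_i of the deformation tensor \psi_{ij} = \partial^2 \Psi / \partial q_i \partial q_j:
\det\!\left[\delta_{ij} - D(t)\, \psi_{ij}(q)\right] = 0
\quad\Longleftrightarrow\quad
1 - D(t)\,\lambda_i(q) = 0 \ \text{for some } i
```

    The caustic statistics described in the abstract then follow from the statistics of the eigenvalue fields λ_i(q) for Gaussian initial conditions.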

  6. Large-scale structure non-Gaussianities with modal methods

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel

    2016-10-01

    Relying on a separable modal expansion of the bispectrum, we present the implementation of a fast estimator for the full bispectrum of a 3D particle distribution. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3D structure formation. Our measured bispectra are determined by ~50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
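
    As a toy illustration of what a bispectrum estimator measures (this is the direct estimator, not the modal method of the paper), the bispectrum of a 1D periodic field correlates three Fourier modes whose wavenumbers sum to zero; phase-coupled modes give a large result, while uncoupled modes average to zero. A minimal self-contained sketch:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for small demo arrays)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def bispectrum_1d(field, k1, k2):
    """Direct estimator B(k1, k2) = Re[d(k1) d(k2) d*(k1+k2)] / n for a periodic 1D field."""
    n = len(field)
    fk = dft(field)
    b = fk[k1] * fk[k2] * fk[(k1 + k2) % n].conjugate()
    return b.real / n
```

    With a field containing cosines at wavenumbers 1, 2 and 3 (all with zero phase), the triplet is phase-coupled and B(1, 2) is large and positive; dropping the wavenumber-3 mode sends B(1, 2) to zero.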

  7. Large scale animal cell cultivation for production of cellular biologicals.

    PubMed

    van Wezel, A L; van der Velden-de Groot, C A; de Haan, H H; van den Heuvel, N; Schasfoort, R

    1985-01-01

    Through developments in molecular biology, interest in large-scale animal cell cultivation has sharply increased during the last 5 years. At our laboratory, four different cultivation systems were studied, all of which are homogeneous culture systems, as they lend themselves best to scaling up and to the control of culture conditions. The four systems compared are: batch culture, continuous chemostat, continuous recycling and continuous perfusion culture systems, both for cells growing in suspension and for anchorage-dependent cells in microcarrier culture. Our results indicate that for the production of virus vaccines and cells the batch and recycling culture systems are most suitable. Disadvantages of the continuous chemostat culture system are that it is only applicable to cells growing in suspension and that relatively low concentrations of cells and cellular products are obtained. The continuous perfusion system appears to be very suitable for the production of cellular components and also for the production of viruses which do not cause cell lysis.

  8. Nonzero Density-Velocity Consistency Relations for Large Scale Structures.

    PubMed

    Rizzo, Luca Alberto; Mota, David F; Valageas, Patrick

    2016-08-19

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias. PMID:27588842

  9. Large-scale dynamic compaction of natural salt

    SciTech Connect

    Hansen, F.D.; Ahrens, E.H.

    1996-05-01

    A large-scale dynamic compaction demonstration with natural salt was successfully completed. About 40 m³ of salt were compacted in three 2-m lifts by dropping a 9,000-kg weight from a height of 15 m in a systematic pattern to achieve the desired compaction energy. To enhance compaction, 1 wt% water was added to the relatively dry mine-run salt. The average compacted mass fractional density was 0.90 of natural intact salt, and in situ nitrogen permeabilities averaged 9×10⁻¹⁴ m². This established the viability of dynamic compaction for placing salt shaft seal components. The demonstration also provided compacted salt parameters needed for shaft seal system design and performance assessments of the Waste Isolation Pilot Plant.
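
    The per-drop compaction energy implied by the figures above is easy to check: a weight of mass m dropped from height h delivers E = m·g·h of potential energy. A quick sketch (g = 9.81 m/s² is an assumed value; the report itself specifies only mass and height):

```python
def drop_energy_joules(mass_kg, height_m, g=9.81):
    """Potential energy released by one weight drop, E = m * g * h."""
    return mass_kg * g * height_m

# One drop in the demonstration: a 9,000-kg weight from 15 m
energy = drop_energy_joules(9000, 15)
```

    This works out to roughly 1.32 MJ per drop; the total applied compaction energy then scales with the number of drops in the systematic pattern.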

  10. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  11. Hydrokinetic approach to large-scale cardiovascular blood flow

    NASA Astrophysics Data System (ADS)

    Melchionna, Simone; Bernaschi, Massimo; Succi, Sauro; Kaxiras, Efthimios; Rybicki, Frank J.; Mitsouras, Dimitris; Coskun, Ahmet U.; Feldman, Charles L.

    2010-03-01

    We present a computational method for commodity-hardware-based clinical cardiovascular diagnosis based on accurate simulation of cardiovascular blood flow. Our approach leverages the flexibility of the Lattice Boltzmann method for implementation on high-performance commodity hardware, such as Graphical Processing Units. We developed the procedure for the analysis of real-life cardiovascular blood flow case studies, namely, anatomic data acquisition, geometry and mesh generation, flow simulation, and data analysis and visualization. We demonstrate the usefulness of our computational tool through a set of large-scale simulations of the flow patterns associated with the arterial tree of a patient, involving two hundred million computational cells. The simulations show evidence of a very rich and heterogeneous endothelial shear stress (ESS) pattern, a quantity of recognized key relevance to the localization and progression of major cardiovascular diseases, such as atherosclerosis, and set the stage for future studies involving pulsatile flows.

  12. Applications of large-scale density functional theory in biology.

    PubMed

    Cole, Daniel J; Hine, Nicholas D M

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality.

  13. Large-scale volcano-ground ice interactions on Mars

    NASA Technical Reports Server (NTRS)

    Squyres, Steven W.; Wilhelms, Don E.; Moosman, Ann C.

    1987-01-01

    In the present investigation of Martian volcano-ground ice interaction processes, a numerical model is developed that encompasses conductive heat transport, surface radiation, heat transfer to the atmosphere, and H2O phase-changes in an ice-rich permafrost over which lava flows erupt, followed by the intrusion of sills. An examination is made of the two large scale interaction regions formed near Aeolis Mensae and near the volcano Hadriaca Patera, northeast of Hellas. The inferred channel discharges are compared to discharge rates calculated for lava-ground ice interactions, showing that meltwater (probably accumulated under the surface) was rapidly released at a discharge rate that was limited by soil permeability. The volcano-ground ice interactions have been an important Martian geologic process, and could account for the palagonites constituting the Martian dust.

  14. Large-Scale All-Dielectric Metamaterial Perfect Reflectors

    SciTech Connect

    Moitra, Parikshit; Slovick, Brian A.; li, Wei; Kravchencko, Ivan I.; Briggs, Dayrl P.; Krishnamurthy, S.; Valentine, Jason

    2015-05-08

    All-dielectric metamaterials offer a potential low-loss alternative to plasmonic metamaterials at optical frequencies. In this paper, we take advantage of the low absorption loss as well as the simple unit cell geometry to demonstrate large-scale (centimeter-sized) all-dielectric metamaterial perfect reflectors made from silicon cylinder resonators. These perfect reflectors, operating in the telecommunications band, were fabricated using self-assembly based nanosphere lithography. In spite of the disorder originating from the self-assembly process, the average reflectance of the metamaterial perfect reflectors is 99.7% at 1530 nm, surpassing the reflectance of metallic mirrors. Moreover, the spectral separation of the electric and magnetic resonances can be chosen to achieve the required reflection bandwidth while maintaining a high tolerance to disorder. Finally, the scalability of this design could lead to new avenues of manipulating light for low-loss and large-area photonic applications.

  16. Scoring Large Scale Affinity Purification Mass Spectrometry Datasets with MIST

    PubMed Central

    Verschueren, Erik; Von Dollen, John; Cimermancic, Peter; Gulbahce, Natali; Sali, Andrej; Krogan, Nevan

    2015-01-01

    High-throughput Affinity Purification Mass Spectrometry (AP-MS) experiments can identify a large number of protein interactions, but only a fraction of these interactions are biologically relevant. Here, we describe a comprehensive computational strategy to process raw AP-MS data, perform quality controls and prioritize biologically relevant bait-prey pairs in a set of replicated AP-MS experiments with Mass spectrometry interaction STatistics (MiST). The MiST score is a linear combination of prey quantity (abundance), abundance invariability across repeated experiments (reproducibility), and prey uniqueness relative to other baits (specificity). We describe how to run the full MiST analysis pipeline in an R environment and discuss a number of configurable options that allow the lay user to convert any large-scale AP-MS data into an interpretable, biologically relevant protein-protein interaction network. PMID:25754993
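
    The MiST score described above is a weighted linear combination of three per-bait-prey quantities. A simplified sketch of that idea (the weights and the exact normalizations here are illustrative assumptions, not the published MiST parameters; abundances are assumed pre-normalized to [0, 1]):

```python
def mist_like_score(replicate_abundances, other_bait_totals,
                    w_abundance=0.3, w_reproducibility=0.3, w_specificity=0.4):
    """Toy bait-prey score: weighted sum of abundance, reproducibility, specificity.

    replicate_abundances: prey abundance in each replicate pull-down of one bait.
    other_bait_totals: the same prey's abundance in every other bait's pull-downs.
    """
    n = len(replicate_abundances)
    # Abundance: mean normalized prey quantity across this bait's replicates.
    abundance = sum(replicate_abundances) / n
    # Reproducibility: fraction of replicates in which the prey was detected at all.
    reproducibility = sum(1 for a in replicate_abundances if a > 0) / n
    # Specificity: this bait's share of the prey's total abundance across all baits.
    total = abundance + sum(other_bait_totals)
    specificity = abundance / total if total > 0 else 0.0
    return (w_abundance * abundance
            + w_reproducibility * reproducibility
            + w_specificity * specificity)
```

    A prey seen reproducibly with one bait and nowhere else scores higher than an equally abundant prey that also appears in every other bait's pull-down, which is exactly the behavior the specificity term is meant to enforce.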

  18. Computational solutions to large-scale data management and analysis

    PubMed Central

    Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

    2011-01-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

  19. Complex modular structure of large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
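
    The modularity analyzed in such studies compares the fraction of intra-module edges with the fraction expected at random given node degrees. A self-contained sketch of Newman's Q for an undirected graph (illustrative of the quantity itself, not the random-walk-based method the paper uses):

```python
def modularity(edges, partition):
    """Newman modularity Q = sum_c [ L_c/m - (d_c / 2m)^2 ] over communities c.

    edges: list of undirected (u, v) pairs; partition: dict node -> community label.
    L_c is the number of edges inside community c, d_c the summed degree of its nodes.
    """
    m = len(edges)
    intra = {}   # edges fully inside each community
    degree = {}  # summed node degree per community
    for u, v in edges:
        degree[partition[u]] = degree.get(partition[u], 0) + 1
        degree[partition[v]] = degree.get(partition[v], 0) + 1
        if partition[u] == partition[v]:
            intra[partition[u]] = intra.get(partition[u], 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree.items())
```

    For two triangles joined by a single bridge edge, splitting the graph into the two triangles gives Q = 5/14 ≈ 0.36, while lumping all nodes into one community gives Q = 0.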

  20. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors

    PubMed Central

    Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver

    2015-01-01

    Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878

  1. Recovery Act - Large Scale SWNT Purification and Solubilization

    SciTech Connect

    Michael Gemano; Dr. Linda B. McGown

    2010-10-07

    The goal of this Phase I project was to establish a quantitative foundation for development of binary G-gels for large-scale, commercial processing of SWNTs and to develop scientific insight into the underlying mechanisms of solubilization, selectivity and alignment. To accomplish this, we performed systematic studies of the effects of G-gel composition and experimental conditions, with goals that include (1) preparation of ultra-high purity SWNTs from low-quality, commercial SWNT starting materials, (2) separation of MWNTs from SWNTs, (3) bulk, non-destructive solubilization of individual SWNTs in aqueous solution at high concentrations (10-100 mg/mL) without sonication or centrifugation, (4) tunable enrichment of subpopulations of the SWNTs based on metallic vs. semiconductor properties, diameter, or chirality and (5) alignment of individual SWNTs.

  2. Large-scale electric fields in the earth's magnetosphere

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1977-01-01

    Studies of the earth's magnetosphere have indicated that a large-scale electric field E plays a central role in its electrodynamics and in the flow and acceleration of charged particles there; while many observations relevant to E have accumulated, quite a few basic problems involving the origin and structure of this field remain unsolved. The ultimate source of E is presumably the flow of the solar wind past the earth, but the mechanism by which E arises is still unclear, and several independent sources may contribute to it, some of them being of a rather transient nature. This review attempts to sum up the main observed facts and theoretical concepts related to E.

  3. Battery technologies for large-scale stationary energy storage.

    PubMed

    Soloveichik, Grigorii L

    2011-01-01

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical. PMID:22432629

  4. Black hole jets without large-scale net magnetic flux

    NASA Astrophysics Data System (ADS)

    Parfrey, Kyle; Giannios, Dimitrios; Beloborodov, Andrei M.

    2015-01-01

    We propose a scenario for launching relativistic jets from rotating black holes, in which small-scale magnetic flux loops, sustained by disc turbulence, are forced to inflate and open by differential rotation between the black hole and the accretion flow. This mechanism does not require a large-scale net magnetic flux in the accreting plasma. Estimates suggest that the process could operate effectively in many systems, and particularly naturally and efficiently when the accretion flow is retrograde. We present the results of general-relativistic force-free electrodynamic simulations demonstrating the time evolution of the black hole's magnetosphere, the cyclic formation of jets, and the effect of magnetic reconnection. The jets are highly variable on time-scales of ~10-10³ rg/c, where rg is the black hole's gravitational radius. The reconnecting current sheets observed in the simulations may be responsible for the hard X-ray emission from accreting black holes.

  5. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of challenges which affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate methods of expanding transmission line capacity to ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much transmission line capacity" to add, and "Which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve transmission system congestion, create

  6. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    Over the past couple of years, with support from NASA, we used a large collection of data from GPS, VLBI, SLR, and DORIS networks which span the Western U.S. Cordillera (WUSC) to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work was roughly divided into an analysis of these space geodetic observations to infer the deformation field across and within the entire plate boundary zone, and an investigation of the implications of this deformation field regarding plate boundary dynamics. Following the determination of the first generation WUSC velocity solution, we placed high priority on the dissemination of the velocity estimates. With in-kind support from the Smithsonian Astrophysical Observatory, we constructed a web-site which allows anyone to access the data, and to determine their own velocity reference frame.

  8. Large-scale quantum networks based on graphs

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.

  9. Innovation cycle for small- and large-scale change.

    PubMed

    Scott, Kathy; Steinbinder, Amy

    2009-01-01

    In today's complex healthcare systems, transformation requires 2 major efforts: (1) a fundamental change in the underlying beliefs and assumptions that perpetuate the current system and (2) a fundamental redesign of the multiplicity of diverse and complex subsystems that result in unpredictable aggregate behavior and outcomes. Through an Intelligent Complex Adaptive System framework combined with an innovation process, a transformation process and cycle was created for a large healthcare system, resulting in both small- and large-scale changes. This process not only challenges the underlying beliefs and assumptions but also creates new possibilities and prototypes for care delivery through a change-management process that is inclusive and honors the contributions of the entire team.

  10. Large-scale structure of time evolving citation networks

    NASA Astrophysics Data System (ADS)

    Leicht, E. A.; Clarkson, G.; Shedden, K.; Newman, M. E. J.

    2007-09-01

    In this paper we examine a number of methods for probing and understanding the large-scale structure of networks that evolve over time. We focus in particular on citation networks, networks of references between documents such as papers, patents, or court cases. We describe three different methods of analysis, one based on an expectation-maximization algorithm, one based on modularity optimization, and one based on eigenvector centrality. Using the network of citations between opinions of the United States Supreme Court as an example, we demonstrate how each of these methods can reveal significant structural divisions in the network and how, ultimately, the combination of all three can help us develop a coherent overall picture of the network's shape.
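
    Of the three methods mentioned, eigenvector-style centrality is the easiest to sketch. On an acyclic citation graph the plain eigenvector iteration degenerates (the adjacency matrix is nilpotent), so a Katz-like variant with an attenuation factor and a constant source term is a common fix; the iteration below and its α value are an illustrative sketch, not the paper's exact formulation:

```python
def katz_centrality(edges, n, alpha=0.5, iters=50):
    """Katz-style centrality on a directed citation graph.

    edges: (citing, cited) pairs; score flows from citing to cited documents.
    Iterates x_v = 1 + alpha * sum of x_u over documents u that cite v.
    """
    x = [1.0] * n
    for _ in range(iters):
        new = [1.0] * n
        for u, v in edges:
            new[v] += alpha * x[u]
        x = new
    total = sum(x)
    return [s / total for s in x]  # normalize so scores sum to 1
```

    On a citation chain where document 0 cites 1 and 1 cites 2, the score accumulates downstream: the most-cited-through document 2 ranks highest, matching the intuition that influence flows to foundational opinions.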

  11. Measuring Large-Scale Social Networks with High Resolution

    PubMed Central

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune

    2014-01-01

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years—the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1 000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection. PMID:24770359

  12. Large-scale coastal evolution of Louisiana's barrier islands

    USGS Publications Warehouse

    List, Jeffrey H.; Jaffe, Bruce E.; Sallenger, Asbury H., Jr.

    1991-01-01

    The prediction of large-scale coastal change is an extremely important, but distant goal. Here we describe some of our initial efforts in this direction, using historical bathymetric information along a 150 km reach of the rapidly evolving barrier island coast of Louisiana. Preliminary results suggest that the relative sea level rise rate, though extremely high in the area, has played a secondary role in coastal erosion over the last 100 years, with longshore transport of sand-sized sediment being the primary cause. Prediction of future conditions is hampered by a general lack of understanding of erosion processes; however, an examination of the changing volumes of sand stored in a large ebb-tidal delta system suggests a continued high rate of shoreline retreat driven by the longshore re-distribution of sand.

  13. Large-Scale Quantitative Analysis of Painting Arts

    NASA Astrophysics Data System (ADS)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.
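    The abstract names its three measures but not their definitions. As one hedged illustration, a "variety of colors" measure can be sketched as the Shannon entropy of a coarsely binned RGB histogram; the binning scheme and pixel format below are assumptions for illustration, not the authors' actual method.

    ```python
    import math
    from collections import Counter

    def color_variety(pixels, bins=4):
        """Shannon entropy (in bits) of a coarsely binned RGB histogram —
        one plausible proxy for 'variety of colors' in an image."""
        step = 256 // bins
        hist = Counter((r // step, g // step, b // step) for r, g, b in pixels)
        n = len(pixels)
        return -sum((c / n) * math.log2(c / n) for c in hist.values())

    # A monochrome image has zero variety; four equally frequent, clearly
    # distinct colors give log2(4) = 2 bits.
    mono = color_variety([(10, 10, 10)] * 100)
    four = color_variety([(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)])
    ```

    With real images, the pixel list would come from an image library and the bin count would need tuning against the palette sizes being compared.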

  14. Are Critical Phenomena Relevant to Large-Scale Evolution?

    NASA Astrophysics Data System (ADS)

    Sole, Ricard V.; Bascompte, Jordi

    1996-02-01

    Recent theoretical studies, based on the theory of self-organized critical systems, seem to suggest that the dynamical patterns of macroevolution could belong to such a class of critical phenomena. Two basic approaches have been proposed: the Kauffman-Johnsen model (based on the use of coupled fitness landscapes) and the Bak-Sneppen model. Both are reviewed here. These models are oversimplified pictures of biological evolution, but their (possible) validity is based on the concept of universality, i.e. that apparently very different systems sharing a few common properties should also behave in a very similar way. In this paper we explore the current evidence from the fossil record, showing that some properties that are suggestive of critical dynamics could also be the result of random phenomena. Some general properties of the large-scale pattern of evolution, which should be reproduced by these models, are discussed.

  15. Disaster triage systems for large-scale catastrophic events.

    PubMed

    Bostick, Nathan A; Subbarao, Italo; Burkle, Frederick M; Hsu, Edbert B; Armstrong, John H; James, James J

    2008-09-01

    Large-scale catastrophic events typically result in a scarcity of essential medical resources and accordingly necessitate the implementation of triage management policies to minimize preventable morbidity and mortality. Accomplishing this goal requires a reconceptualization of triage as a population-based systemic process that integrates care at all points of interaction between patients and the health care system. This system identifies at minimum 4 orders of contact: first order, the community; second order, prehospital; third order, facility; and fourth order, regional level. Adopting this approach will ensure that disaster response activities will occur in a comprehensive fashion that minimizes the patient care burden at each subsequent order of intervention and reduces the overall need to ration care. The seamless integration of all orders of intervention within this systems-based model of disaster-specific triage, coordinated through health emergency operations centers, can ensure that disaster response measures are undertaken in a manner that is effective, just, and equitable. PMID:18769264

  16. The effect of large-scale eddies on climatic change.

    NASA Technical Reports Server (NTRS)

    Stone, P. H.

    1973-01-01

    A parameterization for the fluxes of sensible heat by large-scale eddies developed in an earlier paper is incorporated into a model for the mean temperature structure of an atmosphere including only these fluxes and the radiative fluxes. The climatic changes in this simple model are then studied in order to assess the strength of the dynamical feedback and to gain insight into how dynamical parameters may change in more sophisticated climatic models. The model shows the following qualitative changes: (1) an increase in the solar constant leads to increased static stability, decreased dynamic stability, and stronger horizontal and vertical winds; (2) an increase in the amount of atmospheric absorption leads to decreased static and dynamic stability, and stronger horizontal and vertical winds; and (3) an increase in rotation rate leads to greater static and dynamic stability, weaker horizontal winds, and stronger vertical winds.

  17. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems

    PubMed Central

    Demchak, Barry; Krüger, Ingolf

    2014-01-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime. PMID:25383258

  18. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877

  19. Successful Physician Training Program for Large Scale EMR Implementation

    PubMed Central

    Stevens, L.A.; Mailes, E.S.; Goad, B.A.; Longhurst, C.A.

    2015-01-01

    Summary End-user training is an essential element of electronic medical record (EMR) implementation and frequently suffers from minimal institutional investment. In addition, discussion of successful EMR training programs for physicians is limited in the literature. The authors describe a successful physician-training program at Stanford Children’s Health as part of a large scale EMR implementation. Evaluations of classroom training, obtained at the conclusion of each class, revealed high physician satisfaction with the program. Free-text comments from learners focused on duration and timing of training, the learning environment, quality of the instructors, and specificity of training to their role or department. Based upon participant feedback and institutional experience, best practice recommendations, including physician engagement, curricular design, and assessment of proficiency and recognition, are suggested for future provider EMR training programs. The authors strongly recommend the creation of coursework to group providers by common workflow. PMID:25848415

  20. Battery technologies for large-scale stationary energy storage.

    PubMed

    Soloveichik, Grigorii L

    2011-01-01

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  1. Large-scale asymmetric synthesis of a cathepsin S inhibitor.

    PubMed

    Lorenz, Jon C; Busacca, Carl A; Feng, XuWu; Grinberg, Nelu; Haddad, Nizar; Johnson, Joe; Kapadia, Suresh; Lee, Heewon; Saha, Anjan; Sarvestani, Max; Spinelli, Earl M; Varsolona, Rich; Wei, Xudong; Zeng, Xingzhong; Senanayake, Chris H

    2010-02-19

    A potent reversible inhibitor of the cysteine protease cathepsin-S was prepared on large scale using a convergent synthetic route, free of chromatography and cryogenics. Late-stage peptide coupling of a chiral urea acid fragment with a functionalized aminonitrile was employed to prepare the target, using 2-hydroxypyridine as a robust, nonexplosive replacement for HOBT. The two key intermediates were prepared using a modified Strecker reaction for the aminonitrile and a phosphonation-olefination-rhodium-catalyzed asymmetric hydrogenation sequence for the urea. A palladium-catalyzed vinyl transfer coupled with a Claisen reaction was used to produce the aldehyde required for the side chain. Key scale up issues, safety calorimetry, and optimization of all steps for multikilogram production are discussed. PMID:20102230

  2. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships is approaching reality.

  3. Thermophoretically induced large-scale deformations around microscopic heat centers

    NASA Astrophysics Data System (ADS)

    Puljiz, Mate; Orlishausen, Michael; Köhler, Werner; Menzel, Andreas M.

    2016-05-01

    Selectively heating a microscopic colloidal particle embedded in a soft elastic matrix is a situation of high practical relevance. For instance, during hyperthermic cancer treatment, cell tissue surrounding heated magnetic colloidal particles is destroyed. Experiments on soft elastic polymeric matrices suggest a very long-ranged, non-decaying radial component of the thermophoretically induced displacement fields around the microscopic heat centers. We theoretically confirm this conjecture using a macroscopic hydrodynamic two-fluid description. Both thermophoretic and elastic effects are included in this theory. Indeed, we find that the elasticity of the environment can cause the experimentally observed large-scale radial displacements in the embedding matrix. Additional experiments confirm the central role of elasticity. Finally, a linearly decaying radial component of the displacement field in the experiments is attributed to the finite size of the experimental sample. Similar results are obtained from our theoretical analysis under modified boundary conditions.

  4. Large scale land use cartography of special areas

    SciTech Connect

    Amico, F.D.; Maccarone, D.; Pandiscia, G.V.

    1996-11-01

    On 6 October 1993, an aerial remote sensing mission was flown over the 'Mounts of the Sila' area using a DAEDALUS ATM multispectral scanner, in the framework of the TELAER project, supported by I.A.S.M. (Istituto per l'Assistenza e lo Sviluppo del Mezzogiorno). The study area is inside the National Park of Calabria, well known for its coniferous forests. The collected imagery was used to produce large-scale land use cartography, at a scale of 1:5000, extracting information on natural and anthropic vegetation from the multispectral images with the aid of stereo photos acquired simultaneously. 5 refs., 1 fig., 1 tab.

  5. Large-scale identification of yeast integral membrane protein interactions

    PubMed Central

    Miller, John P.; Lo, Russell S.; Ben-Hur, Asa; Desmarais, Cynthia; Stagljar, Igor; Noble, William Stafford; Fields, Stanley

    2005-01-01

    We carried out a large-scale screen to identify interactions between integral membrane proteins of Saccharomyces cerevisiae by using a modified split-ubiquitin technique. Among 705 proteins annotated as integral membrane, we identified 1,985 putative interactions involving 536 proteins. To ascribe confidence levels to the interactions, we used a support vector machine algorithm to classify interactions based on the assay results and protein data derived from the literature. Previously identified and computationally supported interactions were used to train the support vector machine, which identified 131 interactions of highest confidence, 209 of the next highest confidence, 468 of the next highest, and the remaining 1,085 of low confidence. This study provides numerous putative interactions among a class of proteins that have been difficult to analyze on a high-throughput basis by other approaches. The results identify potential previously undescribed components of established biological processes and roles for integral membrane proteins of ascribed functions. PMID:16093310
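    The confidence-classification step can be illustrated with a minimal linear SVM trained by hinge-loss subgradient descent. This is a toy sketch: the two z-scored "assay" and "literature" features and all data points are invented, and the study's actual classifier used richer, literature-derived features.

    ```python
    def train_linear_svm(X, y, lr=0.05, lam=0.01, epochs=300):
        """Hinge-loss subgradient descent; labels y are in {-1, +1}."""
        w, b = [0.0] * len(X[0]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
                if margin < 1:  # inside the margin: hinge gradient step
                    w = [wj + lr * (yi * xj - lam * wj)
                         for wj, xj in zip(w, xi)]
                    b += lr * yi
                else:           # correctly classified: regulariser only
                    w = [wj * (1 - lr * lam) for wj in w]
        return w, b

    def predict(w, b, x):
        return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

    # Invented toy data: z-scored (assay score, literature support) pairs,
    # labelled +1 for high-confidence interactions and -1 for low.
    X = [[1.0, 0.9], [0.9, 1.1], [-1.0, -0.8], [-0.9, -1.1]]
    y = [1, 1, -1, -1]
    w, b = train_linear_svm(X, y)
    ```

    In practice a library implementation with a nonlinear kernel and cross-validated hyperparameters would replace this hand-rolled loop.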

  6. Large-Scale Advanced Prop-Fan (LAP) blade design

    NASA Technical Reports Server (NTRS)

    Violette, John A.; Sullivan, William E.; Turnberg, Jay E.

    1984-01-01

    This report covers the design analysis of a very thin, highly swept, propeller blade to be used in the Large-Scale Advanced Prop-Fan (LAP) test program. The report includes: design requirements and goals, a description of the blade configuration which meets requirements, a description of the analytical methods utilized/developed to demonstrate compliance with the requirements, and the results of these analyses. The methods described include: finite element modeling, predicted aerodynamic loads and their application to the blade, steady state and vibratory response analyses, blade resonant frequencies and mode shapes, bird impact analysis, and predictions of stalled and unstalled flutter phenomena. Summarized results include deflections, retention loads, stress/strength comparisons, foreign object damage resistance, resonant frequencies and critical speed margins, resonant vibratory mode shapes, calculated boundaries of stalled and unstalled flutter, and aerodynamic and acoustic performance calculations.

  7. Nonzero Density-Velocity Consistency Relations for Large Scale Structures

    NASA Astrophysics Data System (ADS)

    Rizzo, Luca Alberto; Mota, David F.; Valageas, Patrick

    2016-08-01

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias.

  8. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
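    The parallel distance tree itself is beyond a short sketch, but the quantity it accelerates is easy to state: the distance from every grid cell to the nearest point of the surface of interest. A brute-force serial reference in Python (2D here for brevity; the paper's method is 3D, distributed, and incrementally updated over time):

    ```python
    import math

    def distance_field(shape, surface):
        """Euclidean distance from every grid cell to the nearest surface
        cell — brute force, O(cells * surface points)."""
        nx, ny = shape
        return [[min(math.dist((i, j), s) for s in surface)
                 for j in range(ny)]
                for i in range(nx)]

    # A 6x6 grid with a single surface point at (3, 4).
    field = distance_field((6, 6), [(3, 4)])
    ```

    Scalable implementations avoid this quadratic cost with spatial data structures and propagate distances outward from the surface instead of querying every cell independently.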

  9. In situ vitrification large-scale operational acceptance test analysis

    SciTech Connect

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack.

  10. Sheltering in buildings from large-scale outdoor releases

    SciTech Connect

    Chan, W.R.; Price, P.N.; Gadgil, A.J.

    2004-06-01

    Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we will focus on the role buildings play in providing protection to occupants. The conclusions of this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for releases of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of the building envelope and improved filtration can enhance the protection offered by buildings. The common mechanical ventilation systems present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place to minimize exposure to the toxic materials.
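    The protective effect of a tight building envelope can be illustrated with a one-zone box model, dC_in/dt = ACH · (C_out − C_in), where ACH is the air-exchange rate in 1/h. This is a hedged sketch: the ACH values, plume shape, and Euler step are illustrative assumptions, not figures from the document.

    ```python
    def indoor_concentration(c_out, ach, dt=0.01):
        """Forward-Euler integration of dC_in/dt = ach * (C_out - C_in);
        dt and the time constant 1/ach are in hours."""
        c_in, series = 0.0, []
        for c in c_out:
            c_in += ach * (c - c_in) * dt
            series.append(c_in)
        return series

    # A 2-hour outdoor plume at unit concentration, then clean air.
    steps = int(2 / 0.01)
    plume = [1.0] * steps + [0.0] * steps
    tight = indoor_concentration(plume, ach=0.3)  # tightened envelope
    leaky = indoor_concentration(plume, ach=2.0)  # leaky building
    ```

    The tight building's peak indoor level stays near 1 − exp(−0.3·2) ≈ 0.45 of the outdoor peak, while the leaky one nearly equilibrates with the plume; the model also shows why sheltering must end after the plume passes, since the indoor residual then decays only at the same slow rate.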

  11. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which fail to scale (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  12. The Large-scale Component of Mantle Convection

    NASA Astrophysics Data System (ADS)

    Cserepes, L.

    Circulation in the Earth's mantle occurs on multiple spatial scales: this review discusses the character of its large-scale or global components. Direct and strong evidence concerning the global flow comes, first of all, from the pattern of plate motion. Further indirect observational data which can be transformed into flow velocities by the equation of motion are the internal density heterogeneities revealed by seismic tomography, and the geoid can also be used as an observational constraint. Due to their limited spatial resolution, global tomographic data automatically filter out the small-scale features and are therefore relevant to the global flow pattern. Flow solutions obtained from tomographic models, using the plate motion as boundary condition, reveal that subduction is the downwelling of the global mantle circulation and that the deep-rooted upwellings are concentrated in 2-3 superplumes. Spectral analysis of the tomographic heterogeneities shows that the power of global flow appears dominantly in the lowest spherical harmonic orders 2-5. Theoretical convection calculations contribute substantially to the understanding of global flow. If basal heating of the mantle is significant, numerical models can reproduce the basic 2 to 5 cell pattern of convection even without the inclusion of surface plates. If plates are superimposed on the solution with their present arrangement and motion, the dominance of these low spherical harmonic orders is more pronounced. The cells are not necessarily closed, rather they show chaotic time-dependence, but they are normally bordered by long downwelling features, and they have usually a single superplume in the cell interior. Swarms of small plumes can develop in the large cells, especially when convection is partially layered due to an internal boundary such as the 670 km discontinuity (source of small plumes). These small plumes are usually tilted by the background large-scale flow which shows that they are

  13. Nanomaterials processing toward large-scale flexible/stretchable electronics

    NASA Astrophysics Data System (ADS)

    Takahashi, Toshitake

    In recent years, there has been tremendous progress in large-scale mechanically flexible electronics, where electrical components are fabricated on non-crystalline substrates such as plastics and glass. These devices are currently serving as the basis for various applications such as flat-panel displays, smart cards, and wearable electronics. In this thesis, a promising approach using chemically synthesized nanomaterials is explored to overcome various obstacles current technology faces in this field. Here, we use chemically synthesized semiconducting nanowires (NWs) including group IV (Si, Ge), III-V (InAs) and II-VI (CdS, CdSe) NWs, and semiconductor-enriched SWNTs (99 % purity), and developed reliable, controllable, and more importantly uniform assembly methods on 4-inch wafer-scale flexible substrates in the form of either parallel NW arrays or SWNT random networks, which act as the active components in thin film transistors (TFTs). The TFTs thus obtained show respectable electrical and optical properties such as 1) cut-off frequency, ft ~ 1 GHz and maximum frequency of oscillation, fmax ~ 1.8 GHz from InAs parallel NW array TFTs with channel length of ~ 1.5 μm, 2) photodetectors covering visible wavelengths (500-700 nm) using compositionally graded CdSxSe1-x (0 < x < 1) parallel NW arrays, and 3) carrier mobility of ~ 20 cm2/Vs, which is an order of magnitude larger than conventional TFT materials such as a-Si and organic semiconductors, without sacrificing current on/off ratio (Ion/Ioff ~ 104) from SWNT network TFTs. The capability to uniformly assemble nanomaterials over large-scale flexible substrates enables us to use them for more sophisticated applications. Artificial electronic skin (e-skin) is demonstrated by laminating pressure sensitive rubber on top of nanomaterial-based active matrix backplanes. Furthermore, an x-ray imaging device is also achieved by combining organic photodiodes with this backplane technology.

  14. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect for the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to cleanup and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of its longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, and many different contamination mechanisms, substrates and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination. (authors)

  15. Tools for Large-Scale Mobile Malware Analysis

    SciTech Connect

    Bierma, Michael

    2014-01-01

Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  16. Large-scale mass distribution in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10^8 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.
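The phase breakdown quoted at the end of the abstract (condensed gas, cold diffuse gas, warm-hot intergalactic medium) follows from cuts in the density-temperature plane. A minimal sketch of such a classification; the threshold values below are common literature choices assumed for illustration, not the exact cuts used in the Illustris analysis:

```python
# Classify a gas element by its overdensity and temperature into the
# phases discussed above. Thresholds are illustrative assumptions.
def classify_gas(overdensity, temperature_K):
    if temperature_K < 1e5:
        # cold gas: "condensed" if dense enough to sit inside galaxies
        return "condensed" if overdensity > 1e3 else "diffuse"
    if temperature_K < 1e7:
        return "WHIM"  # warm-hot intergalactic medium, 10^5 - 10^7 K
    return "hot"       # intracluster-like gas

print(classify_gas(1e4, 1e4))   # condensed
print(classify_gas(0.5, 3e4))   # diffuse
print(classify_gas(10.0, 1e6))  # WHIM
```

Summing the mass of all elements in each class then yields phase fractions like the percentages reported above.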

  17. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from about 30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards

  18. Making Large-Scale Networks from fMRI Data

    PubMed Central

    Schmittmann, Verena D.; Jahfari, Sara; Borsboom, Denny; Savi, Alexander O.; Waldorp, Lourens J.

    2015-01-01

    Pairwise correlations are currently a popular way to estimate a large-scale network (> 1000 nodes) from functional magnetic resonance imaging data. However, this approach generally results in a poor representation of the true underlying network. The reason is that pairwise correlations cannot distinguish between direct and indirect connectivity. As a result, pairwise correlation networks can lead to fallacious conclusions; for example, one may conclude that a network is a small-world when it is not. In a simulation study and an application to resting-state fMRI data, we compare the performance of pairwise correlations in large-scale networks (2000 nodes) against three other methods that are designed to filter out indirect connections. Recovery methods are evaluated in four simulated network topologies (small world or not, scale-free or not) in scenarios where the number of observations is very small compared to the number of nodes. Simulations clearly show that pairwise correlation networks are fragmented into separate unconnected components with excessive connectedness within components. This often leads to erroneous estimates of network metrics, like small-world structures or low betweenness centrality, and produces too many low-degree nodes. We conclude that using partial correlations, informed by a sparseness penalty, results in more accurate networks and corresponding metrics than pairwise correlation networks. However, even with these methods, the presence of hubs in the generating network can be problematic if the number of observations is too small. Additionally, we show for resting-state fMRI that partial correlations are more robust than correlations to different parcellation sets and to different lengths of time-series. PMID:26325185
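The distinction the abstract draws between pairwise and partial correlations can be sketched in a few lines: a causal chain x → y → z produces a strong pairwise x-z correlation that partial correlation removes. The partial correlations here are computed from the unpenalized precision (inverse covariance) matrix, which is only valid because this toy example has far more observations than nodes; the fMRI setting described above, with few observations and thousands of nodes, is exactly where the sparseness penalty becomes necessary.

```python
import numpy as np

def partial_correlations(X):
    """Partial correlations from the precision (inverse covariance) matrix.

    Assumes n_observations >> n_nodes so the sample covariance is
    invertible; with n << p one would use a penalized estimator instead.
    """
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy chain x -> y -> z: x and z are only indirectly connected.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
X = np.column_stack([x, y, z])

corr = np.corrcoef(X, rowvar=False)
pcorr = partial_correlations(X)
print(corr[0, 2], pcorr[0, 2])  # pairwise x-z is large; partial x-z is near zero
```

A pairwise-correlation network would draw an x-z edge here; the partial-correlation network correctly omits it.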

  19. Modeling temporal relationships in large scale clinical associations

    PubMed Central

    Hanauer, David A; Ramakrishnan, Naren

    2013-01-01

    Objective We describe an approach for modeling temporal relationships in a large scale association analysis of electronic health record data. The addition of temporal information can inform hypothesis generation and help to explain the relationships. We applied this approach on a dataset containing 41.2 million time-stamped International Classification of Diseases, Ninth Revision (ICD-9) codes from 1.6 million patients. Methods We performed two independent analyses including a pairwise association analysis using a χ2 test and a temporal analysis using a binomial test. Data were visualized using network diagrams and reviewed for clinical significance. Results We found nearly 400 000 highly associated pairs of ICD-9 codes with varying numbers of strong temporal associations ranging from ≥1 day to ≥10 years apart. Most of the findings were not considered clinically novel, although some, such as an association between Helicobacter pylori infection and diabetes, have recently been reported in the literature. The temporal analysis in our large cohort, however, revealed that diabetes usually preceded the diagnoses of H pylori, raising questions about possible cause and effect. Discussion Such analyses have significant limitations, some of which are due to known problems with ICD-9 codes and others to potentially incomplete data even at a health system level. Nevertheless, large scale association analyses with temporal modeling can help provide a mechanism for novel discovery in support of hypothesis generation. Conclusions Temporal relationships can provide an additional layer of meaning in identifying and interpreting clinical associations. PMID:23019240
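The temporal step described above can be sketched as follows: for a pair of associated codes, count how often one precedes the other across patients, and apply a two-sided binomial test against the null hypothesis that either ordering is equally likely. The function names and this exact formulation are illustrative assumptions, not the paper's implementation.

```python
import math

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def temporal_direction_test(n_a_first, n_b_first):
    """Two-sided binomial test of whether code A systematically precedes code B.

    Under the null either ordering is equally likely (p = 0.5); a small
    p-value suggests a consistent temporal direction, as in the diabetes /
    H. pylori example above.
    """
    n = n_a_first + n_b_first
    k = max(n_a_first, n_b_first)
    return min(1.0, 2.0 * binom_sf(k, n))

# Among 100 patients with both codes, 80 received code A before code B:
print(temporal_direction_test(80, 20))  # p << 0.05: strong temporal ordering
```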

  20. Sleeping Beauty transposon system harboring HRAS, c-Myc and shp53 induces sarcomatoid carcinomas in mouse skin.

    PubMed

    Jung, Sunyoung; Ro, Simon Weonsang; Jung, Geunyoung; Ju, Hye-Lim; Yu, Eun-Sil; Son, Woo-Chan

    2013-04-01

The Sleeping Beauty transposon system is used as a tool for insertional mutagenesis and oncogenesis. However, little is known about the exact histological phenotype of the tumors induced. Thus, we used immunohistochemical markers to enable histological identification of the type of tumor induced by subcutaneous injection of the HRAS, c-Myc and shp53 oncogenes in female C57BL/6 mice. The tumor was removed when it reached 100 mm³ in volume. Subsequently, we used 13 immunohistochemical markers to histologically identify the tumor type. The results suggested that the morphology of the tumor was similar to that of sarcomatoid carcinoma. PMID:23380875

  1. Knockout of an outer membrane protein operon of anaplasma marginale by transposon mutagenesis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large amounts of data generated by genomics, transcriptomics and proteomics technologies have increased our understanding of the biology of Anaplasma marginale. However, these data have also led to new assumptions that require testing, ideally through classic genetic mutation. One example is the def...

  2. piggyBac-based insertional mutagenesis and enhancer detection as a tool for functional insect genomics.

    PubMed Central

    Horn, Carsten; Offen, Nils; Nystedt, Sverker; Häcker, Udo; Wimmer, Ernst A

    2003-01-01

Transposon mutagenesis provides a fundamental tool for functional genomics. Here we present a non-species-specific, combined enhancer detection and binary expression system based on the transposable element piggyBac. For the different components of this insertional mutagenesis system, we used widely applicable transposons and distinguishable broad-range transformation markers, which should enable this system to be operational in nonmodel arthropods. In a pilot screen in Drosophila melanogaster, piggyBac mutator elements on the X chromosome were mobilized in males by a Hermes-based jumpstarter element providing piggyBac transposase activity under control of the alpha1-tubulin promoter. As primary reporters in the piggyBac mutator elements, we employed the heterologous transactivators GAL4delta or tTA. To identify larval and adult enhancer detectors, strains carrying UASp-EYFP or TRE-EYFP as secondary reporter elements were used. Tissue-specific enhancer activities were readily observed in the GAL4delta/UASp-based systems, but only rarely in the tTA/TRE system. Novel autosomal insertions were recovered with an average jumping rate of 80%. Of these novel insertions, 3.8% showed homozygous lethality, which was reversible by piggyBac excision. Insertions were found in both coding and noncoding regions of characterized genes and also in noncharacterized and non-P-targeted CG-number genes. This indicates that piggyBac will greatly facilitate the intended saturation mutagenesis in Drosophila. PMID:12618403

  3. Kinematics and Dynamics in Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    dell'Antonio, Ian Pietro

    1995-01-01

We study a sample of x-ray observed groups of galaxies to examine the relation between group velocity dispersions and x-ray luminosities. For the rich groups, L_x ∝ σ^{4.0±0.6}, but poorer systems follow a flatter relation. This L_x-σ relation probably arises from a combination of extended gas and individual galaxy emission. We then concentrate on six poor clusters of galaxies with higher-quality x-ray data, and we measure the virial mass, gas mass, and x-ray temperature. From the x-ray surface brightness distribution, we construct models of the mass distribution. We use a modified V/V_max test to test whether the galaxies trace the potential marked by the gas. The galaxy distribution is consistent with the density distribution inferred from the x-rays. The mass in galaxies is ~3h^{-1}% of the total mass of the systems. Galaxies contribute significantly to the baryonic mass total: M_gas/M_gal ~ 1.4h^{-1/2}, similar to the value for rich clusters. The baryon fraction in rich groups is ~0.08 (for H_0 = 100), about half that in rich clusters. This result has significant implications for the origin of large-scale structure. In a study of structure on a larger scale, we use the Tully-Fisher (TF) relation to examine the kinematics of the Great Wall of Galaxies. First, we examine the relation between rotation profiles of galaxies and HI linewidths, and investigate the effects on the TF relation. The rotation curve profile shapes and magnitudes of galaxies are correlated, implying that a galaxy yields different distance estimates with a linewidth measured at a different fraction of peak emission. Indiscriminately combining data based on different measures of the "rotation velocity" into a single TF relation leads to systematic errors and biases in the velocity field. We evaluate these effects using optical rotation curves and HI linewidth data. The TF relation can be improved by adding shape parameters to characterize the HI profiles. We construct the I

  4. Planar Doppler Velocimetry for Large-Scale Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    McKenzie, Robert L.

    1997-01-01

    Recently, Planar Doppler Velocimetry (PDV) has been shown by several laboratories to offer an attractive means for measuring three-dimensional velocity vectors everywhere in a light sheet placed in a flow. Unlike other optical means of measuring flow velocities, PDV is particularly attractive for use in large wind tunnels where distances to the sample region may be several meters, because it does not require the spatial resolution and tracking of individual scattering particles or the alignment of crossed beams at large distances. To date, demonstrations of PDV have been made either in low speed flows without quantitative comparison to other measurements, or in supersonic flows where the Doppler shift is large and its measurement is relatively insensitive to instrumental errors. Moreover, most reported applications have relied on the use of continuous-wave lasers, which limit the measurement to time-averaged velocity fields. This work summarizes the results of two previous studies of PDV in which the use of pulsed lasers to obtain instantaneous velocity vector fields is evaluated. The objective has been to quantitatively define and demonstrate PDV capabilities for applications in large-scale wind tunnels that are intended primarily for the production testing of subsonic aircraft. For such applications, the adequate resolution of low-speed flow fields requires accurate measurements of small Doppler shifts that are obtained at distances of several meters from the sample region. The use of pulsed lasers provides the unique capability to obtain not only time-averaged fields, but also their statistical fluctuation amplitudes and the spatial excursions of unsteady flow regions such as wakes and separations. To accomplish the objectives indicated, the PDV measurement process is first modeled and its performance evaluated computationally. 
The noise sources considered include those related to the optical and electronic properties of Charge-Coupled Device (CCD) arrays and to
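The small Doppler shifts at subsonic speeds mentioned above can be made concrete with the standard PDV relation Δf = (ô − l̂)·v / λ, where ô is the observation direction and l̂ the laser propagation direction. A sketch with illustrative parameters (a 532 nm laser and an idealized back-scatter geometry are assumptions, not the setup used in the studies summarized above):

```python
import numpy as np

WAVELENGTH = 532e-9  # metres; frequency-doubled Nd:YAG, assumed for illustration

def doppler_shift(v, obs_dir, laser_dir, wavelength=WAVELENGTH):
    """Frequency shift (Hz) of light scattered from particles moving with
    velocity v (m/s): df = (o_hat - l_hat) . v / wavelength."""
    o = np.asarray(obs_dir, float)
    o /= np.linalg.norm(o)
    l = np.asarray(laser_dir, float)
    l /= np.linalg.norm(l)
    return float(np.dot(o - l, np.asarray(v, float)) / wavelength)

# 10 m/s flow along +x, laser propagating along -x, observed along +x:
# the geometry factor is 2, so df = 2 * 10 / 532e-9, about 37.6 MHz.
print(doppler_shift([10.0, 0, 0], [1, 0, 0], [-1, 0, 0]) / 1e6, "MHz")
```

A shift of tens of MHz on a ~5.6 × 10^14 Hz carrier illustrates why low-speed PDV demands instruments insensitive to small calibration errors.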

  5. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [1], which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) [2] in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg2Al-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  6. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    NASA Astrophysics Data System (ADS)

    Szapudi, Istvan

We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyse data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
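The logarithmic transformation at the heart of the proposal can be illustrated on a toy field: if the overdensity is δ = exp(g) − 1 with g Gaussian, then ln(1 + δ) recovers g exactly and removes the skewness that pushes information into higher-order statistics. The lognormal stand-in for the non-linear density field is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
# Lognormal toy model of the non-linear overdensity field:
g = rng.normal(0.0, 1.0, 100_000)
delta = np.exp(g) - 1.0          # heavily skewed, like the evolved field
log_field = np.log(1.0 + delta)  # the logarithmic mapping recovers g

def skewness(x):
    """Standardized third moment, as a simple non-Gaussianity diagnostic."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

print(skewness(delta), skewness(log_field))  # large vs. near zero
```

For a genuinely lognormal field the mapping is exact; for the real density field it is only approximately Gaussianizing, which is why the proposal focuses on residual higher-order information.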

  7. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  8. Local and Regional Impacts of Large Scale Wind Energy Deployment

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.

    2010-12-01

The U.S. is currently on a path to produce 20% of its electricity from wind energy by 2030, almost a 10-fold increase over present levels of electricity generated from wind. Such high-penetration wind energy deployment will entail extracting elevated energy levels from the planetary boundary layer, and preliminary studies indicate that this will have significant but uncertain impacts on the local and regional environment. State and federal regulators have raised serious concerns regarding potential agricultural impacts from large wind farms deployed throughout the Midwest, where agriculture is the basis of the local economy. The effects of large wind farms have been proposed to be both beneficial (drying crops to reduce occurrences of fungal diseases, avoiding late spring freezes, enhancing pollen viability, reducing dew duration) and detrimental (accelerating moisture loss during drought), with no conclusive investigations thus far. As both wind and solar technologies are deployed at the scales required to replace conventional technologies, there must be reasonable certainty that the potential environmental impacts at the micro, macro, regional and global scale do not exceed those anticipated from carbon emissions. Largely because of computational limits, the role of large wind farms in affecting regional-scale weather patterns has only been investigated in coarse simulations, and modeling tools do not yet exist that are capable of assessing the downwind effects large wind farms may have on microclimatology. In this presentation, we will outline the vision for, and discuss technical and scientific challenges in, developing a multi-model high-performance simulation capability covering the range of mesoscale to sub-millimeter scales appropriate for assessing local, regional, and ultimately global environmental impacts and quantifying uncertainties of large scale wind energy deployment scenarios. Such a system will allow continuous downscaling of atmospheric processes on wind

  9. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  10. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent execution via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
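The traversal-time estimate described above, the mode of the historical distribution found via Kernel Density Estimation, can be sketched as follows. The Gaussian kernel, the bandwidth value, and the grid search for the maximum are illustrative assumptions:

```python
import numpy as np

def kde_mode(samples, bandwidth, grid_size=512):
    """Estimate the mode of a 1-D sample via a Gaussian kernel density
    estimate evaluated on a regular grid (illustrative sketch; bandwidth
    selection is an assumption, not the paper's choice)."""
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    # Unnormalized Gaussian KDE on the grid (normalization does not
    # change the location of the maximum).
    dens = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / bandwidth) ** 2).sum(axis=1)
    return grid[np.argmax(dens)]

# Hypothetical historical traversal times (minutes) for one flight route:
rng = np.random.default_rng(1)
times = rng.normal(120, 10, 2000)
mode_est = kde_mode(times, bandwidth=3.0)
print(mode_est)
```

The mode, rather than the mean, keeps the route estimate robust against the long delay tail typical of flight-record data.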

  11. Food security through large scale investments in agriculture

    NASA Astrophysics Data System (ADS)

    Rulli, M.; D'Odorico, P.

    2013-12-01

Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. Modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely, the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing the use of more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large scale land acquisitions. We

  12. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for the neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes, providing an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma; this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap; this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducers. Emphasis is on the use of high performance ferroelectrics with the objective of developing an opening switch that would be suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of

  13. Punishment sustains large-scale cooperation in prestate warfare.

    PubMed

    Mathew, Sarah; Boyd, Robert

    2011-07-12

    Understanding cooperation and punishment in small-scale societies is crucial for explaining the origins of human cooperation. We studied warfare among the Turkana, a politically uncentralized, egalitarian, nomadic pastoral society in East Africa. Based on a representative sample of 88 recent raids, we show that the Turkana sustain costly cooperation in combat at a remarkably large scale, at least in part, through punishment of free-riders. Raiding parties comprise several hundred warriors, and participants are not kin or day-to-day interactants. Warriors incur substantial risk of death and produce collective benefits. Cowardice and desertion occur and are punished by community-imposed sanctions, including collective corporal punishment and fines. Furthermore, Turkana norms governing warfare benefit the ethnolinguistic group, a population of a half-million people, at the expense of smaller social groupings. These results challenge current views that punishment is unimportant in small-scale societies and that human cooperation evolved in small groups of kin and familiar individuals. Instead, these results suggest that cooperation at the larger scale of ethnolinguistic units enforced by third-party sanctions could have a deep evolutionary history in the human species.

  14. A novel methodology for large-scale phylogeny partition.

    PubMed

    Prosperi, Mattia C F; Ciccozzi, Massimo; Fanti, Iuri; Saladini, Francesco; Pecorari, Monica; Borghi, Vanni; Di Giambenedetto, Simona; Bruzzone, Bianca; Capetti, Amedeo; Vivarelli, Angela; Rusconi, Stefano; Re, Maria Carla; Gismondo, Maria Rita; Sighinolfi, Laura; Gray, Rebecca R; Salemi, Marco; Zazzi, Maurizio; De Luca, Andrea

    2011-01-01

    Understanding the determinants of virus transmission is a fundamental step for effective design of screening and intervention strategies to control viral epidemics. Phylogenetic analysis can be a valid approach for the identification of transmission chains, and very large data sets can be analysed through parallel computation. Here we propose and validate a new methodology for the partition of large-scale phylogenies and the inference of transmission clusters. This approach, based on a depth-first search algorithm, combines the evaluation of node reliability, tree topology and patristic distance analysis. The method has been applied to identify transmission clusters of a phylogeny of 11,541 human immunodeficiency virus-1 subtype B pol gene sequences from a large Italian cohort. Molecular transmission chains were characterized by means of different clinical/demographic factors, such as the interaction between male homosexuals and male heterosexuals. Our method takes advantage of a flexible notion of transmission cluster and can become a general framework for analysing other epidemics.
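
    The general idea of a depth-first partition guided by node support and patristic distance can be sketched as follows. This is our own minimal illustration, not the authors' algorithm; the tree encoding, thresholds, and the simplified distance bound are all assumptions:

```python
# Hypothetical sketch of depth-first transmission-cluster partitioning.
# A node is a tuple (label, branch_length, children): for internal nodes the
# label is a support value, for leaves it is the taxon name.
from itertools import combinations

def leaves(node, depth=0.0):
    """Return (name, distance-from-subtree-root) pairs for all tips under node."""
    label, blen, children = node
    depth += blen
    if not children:
        return [(label, depth)]
    out = []
    for child in children:
        out.extend(leaves(child, depth))
    return out

def partition(node, min_support=0.9, max_dist=0.05):
    """Depth-first search: emit a clade as a cluster when its support is high
    and all pairwise tip distances fall below max_dist; otherwise recurse."""
    support, blen, children = node
    if children and support >= min_support:
        tips = [t for c in children for t in leaves(c)]
        # d1 + d2 equals the patristic distance when the tips' MRCA is this
        # node, and is an upper bound otherwise -- a deliberate simplification.
        if all(d1 + d2 <= max_dist for (_, d1), (_, d2) in combinations(tips, 2)):
            return [[name for name, _ in tips]]
    clusters = []
    for child in children:
        clusters.extend(partition(child, min_support, max_dist))
    return clusters
```

For example, a well-supported cherry of two nearby tips is reported as one cluster, while a poorly supported or deep clade is split and searched further down.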

  15. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect

    Jin, Leisheng; Li, Lijie

    2014-12-07

    Nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The drawbacks of the NMT are the complexity of its fabrication process and of its signal-processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate-shift method by reading the change of electrical current from the transistor, which offers much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
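
    The eigenstate-shift idea can be illustrated with a lumped two-oscillator model of the coupled cantilevers (our simplification, not the paper's device model; all parameter values are assumptions). Adding a small mass to one oscillator perturbs the mode shapes proportionally more than the eigenfrequencies:

```python
import math

def eigenmodes(m1, m2, k, kc):
    """Eigenvalues omega^2 and mode shapes (v2/v1) of two spring-coupled
    lumped oscillators:
        m1*x1'' = -k*x1 - kc*(x1 - x2)
        m2*x2'' = -k*x2 - kc*(x2 - x1)
    Solves det(K - lam*M) = 0 with K = [[k+kc, -kc], [-kc, k+kc]],
    M = diag(m1, m2)."""
    a = m1 * m2
    b = -(m1 + m2) * (k + kc)
    c = (k + kc) ** 2 - kc ** 2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lams = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    # First row of (K - lam*M) v = 0 gives the amplitude ratio v2/v1.
    return [(lam, (k + kc - lam * m1) / kc) for lam in lams]

# Identical cantilevers: symmetric mode (1, 1) at omega^2 = k/m.
print(eigenmodes(1.0, 1.0, 1.0, 0.5)[0])
# A 1% added mass on cantilever 2 shifts the mode-shape ratio by ~1%,
# while the eigenfrequency shifts by only ~0.25%.
print(eigenmodes(1.0, 1.01, 1.0, 0.5)[0])
```

In this toy model the relative eigenstate shift is several times the relative frequency shift, which is the qualitative point the abstract makes about sensitivity.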

  16. Development of a Large Scale, High Speed Wheel Test Facility

    NASA Technical Reports Server (NTRS)

    Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc

    1996-01-01

    Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large scale, high speed wheel test facility. This facility was developed to perform experiments and carry out evaluations on levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel which could operate with peripheral speeds of greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor, mounted to the base of the station, provides a signal of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests of a novel electrodynamic (EDS) suspension system, developed by MIT as part of this joint effort, are described and presented. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.
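
    As a back-of-the-envelope check (ours, not from the paper, and assuming the quoted 2 meters is the wheel diameter), the rotation rate required for a 100 m/s peripheral speed follows from v = ωr:

```python
import math

def rpm_for_peripheral_speed(diameter_m, speed_m_s):
    """Rotation rate (rev/min) giving the requested rim speed, via v = omega*r."""
    omega = speed_m_s / (diameter_m / 2.0)   # angular velocity, rad/s
    return omega * 60.0 / (2.0 * math.pi)    # convert rad/s -> rev/min

print(round(rpm_for_peripheral_speed(2.0, 100.0)))  # ~955 rpm
```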

  17. Bio-inspired wooden actuators for large scale applications.

    PubMed

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular, the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small-scale applications down to micrometer size have been developed, but up-scaling remains challenging due to limitations either in the mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and good machinability to reach large-scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for autonomous, solar-powered movement of a tracker for solar modules. PMID:25835386
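
    The bilayer principle mentioned above is conventionally quantified by the classical Timoshenko bimetal formula, which relates curvature to the differential strain between the two layers. The sketch below implements that textbook formula; the material values in the example are illustrative, not measurements from the paper:

```python
def bilayer_curvature(eps, h1, h2, e1, e2):
    """Timoshenko curvature (1/m) of a two-layer strip.

    eps    -- differential free strain between the layers (e.g. hygroexpansion)
    h1, h2 -- layer thicknesses (m); e1, e2 -- elastic moduli (Pa)
    """
    m = h1 / h2          # thickness ratio
    n = e1 / e2          # modulus ratio
    h = h1 + h2          # total thickness
    return (6.0 * eps * (1.0 + m) ** 2) / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    )

# Equal layers reduce to the well-known special case kappa = 3*eps/(2*h):
# 1% differential strain across two 1 mm layers gives 7.5 1/m.
print(bilayer_curvature(0.01, 1e-3, 1e-3, 10e9, 10e9))
```

The formula makes the design levers in the abstract explicit: amplitude scales with differential strain and inversely with total thickness, and is tuned through the thickness and stiffness ratios.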

  18. Satellite measurements of large-scale air pollution - Methods

    NASA Technical Reports Server (NTRS)

    Kaufman, Yoram J.; Ferrare, Richard A.; Fraser, Robert S.

    1990-01-01

    A technique for deriving large-scale pollution parameters from NIR and visible satellite remote-sensing images obtained over land or water is described and demonstrated on AVHRR images. The method is based on comparison of the upward radiances on clear and hazy days and permits simultaneous determination of aerosol optical thickness with error Δτ_a = 0.08-0.15, particle size with error ±100-200 nm, and single-scattering albedo with error ±0.03 (for albedos near 1), all assuming accurate and stable satellite calibration and stable surface reflectance between the clear and hazy days. In the analysis of AVHRR images of smoke from a forest fire, good agreement was obtained between satellite and ground-based (sun-photometer) measurements of aerosol optical thickness, but the satellite particle sizes were systematically greater than those measured from the ground. The AVHRR single-scattering albedo agreed well with a Landsat albedo for the same smoke.

  19. The dual role of shear in large-scale dynamos

    NASA Astrophysics Data System (ADS)

    Brandenburg, A.

    2008-09-01

    The role of shear in alleviating catastrophic quenching by shedding small-scale magnetic helicity through fluxes along contours of constant shear is discussed. The level of quenching of the dynamo effect depends on the quenched value of the turbulent magnetic diffusivity. Earlier estimates that might have suffered from the force-free degeneracy of Beltrami fields are now confirmed for shear flows where this degeneracy is lifted. For a dynamo that is saturated near equipartition field strength those estimates result in a 5-fold decrease of the magnetic diffusivity as the magnetic Reynolds number based on the wavenumber of the energy-carrying eddies is increased from 2 to 600. Finally, the role of shear in driving turbulence and large-scale fields by the magneto-rotational instability is emphasized. New simulations are presented and the 3π/4 phase shift between poloidal and toroidal fields is confirmed. It is suggested that this phase shift might be a useful diagnostic tool in identifying mean-field dynamo action in simulations and to distinguish this from other scenarios invoking magnetic buoyancy as a means to explain migration away from the midplane.

  20. Safety aspects of large-scale combustion of hydrogen

    SciTech Connect

    Edeskuty, F.J.; Haugh, J.J.; Thompson, R.T.

    1986-01-01

    Recent hydrogen-safety investigations have studied the possible large-scale effects from phenomena such as the accumulation of combustible hydrogen-air mixtures in large, confined volumes. Of particular interest are safe methods for the disposal of the hydrogen and the pressures which can arise from its confined combustion. Consequently, tests of the confined combustion of hydrogen-air mixtures were conducted in a 2100 m³ volume. These tests show that continuous combustion, as the hydrogen is generated, is a safe method for its disposal. It also has been seen that, for hydrogen concentrations up to 13 vol%, it is possible to predict the maximum pressures that can occur upon ignition of premixed hydrogen-air atmospheres. In addition, information has been obtained concerning the survivability of the equipment that is needed to recover from an accident involving hydrogen combustion. An accident that involved the inadvertent mixing of hydrogen and oxygen gases in a tube trailer gave evidence that under the proper conditions hydrogen combustion can transition to a detonation. If detonation occurs, the pressures experienced are much higher, although short in duration.
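
    The kind of maximum-pressure prediction described above is often approximated by an adiabatic, constant-volume ("AICC") estimate. The crude sketch below is ours, not the tests' actual model: it assumes a constant mixture heat capacity, ignores dissociation and radiation, and uses round thermophysical values, so it overestimates the true AICC pressure somewhat:

```python
def aicc_pressure_ratio(x_h2, t0=298.0):
    """Crude adiabatic isochoric pressure ratio P_final/P_initial for a lean
    H2-air mixture with hydrogen mole fraction x_h2, via the ideal-gas law
    P_f/P_i = (n_f * T_f) / (n_i * T_i). Illustrative values only."""
    dh = 241.8e3               # J/mol, approx. lower heating value of H2
    cv = 21.0                  # J/(mol K), crude constant mixture heat capacity
    n_final = 1.0 - 0.5 * x_h2 # 2 H2 + O2 -> 2 H2O removes 0.5 mol gas per mol H2
    t_final = t0 + x_h2 * dh / cv
    return n_final * t_final / t0

print(round(aicc_pressure_ratio(0.13), 1))  # ~5.6 with these crude numbers
```

Even this rough bound shows why confined ignition of a 13 vol% mixture is a serious structural load: a roughly fivefold pressure rise, before any consideration of transition to detonation.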