Sample records for trace archive sequence

  1. Loess as an environmental archive of atmospheric trace element deposition

    NASA Astrophysics Data System (ADS)

    Blazina, T.; Winkel, L. H.

    2013-12-01

    Environmental archives such as ice cores, lake sediment cores, and peat cores have been used extensively to reconstruct past atmospheric deposition of trace elements. These records have provided information about how anthropogenic activities such as mining and fossil fuel combustion have disturbed the natural cycles of various atmospherically transported trace elements (e.g. Pb, Hg and Se). While these records are invaluable for tracing human impacts on such trace elements, they often provide limited information about the long-term natural cycles of these elements. An assumption of these records is that the observed variations in trace element input, prior to any assumed anthropogenic perturbations, represent the full range of natural variations. However, records such as those mentioned above, which extend back to a maximum of ~400 kyr, may not capture the potentially large variations in trace element input occurring over millions of years. Windblown loess sediments, often representing atmospheric deposition over time scales >1 Ma, are the most widely distributed terrestrial sediments on Earth. These deposits have been used extensively to reconstruct continental climate variability throughout the Quaternary and late Neogene periods. In addition to being a valuable record of continental climate change, loess deposits may represent a long-term environmental archive of atmospheric trace element deposition and may be combined with paleoclimate records to elucidate how fluctuations in climate have impacted the natural cycles of such elements. Our research uses the loess-paleosol deposits of the Chinese Loess Plateau (CLP) to quantify how atmospheric deposition of trace elements has fluctuated in central China over the past 6.8 Ma. The CLP has been used extensively to reconstruct past changes in the East Asian monsoon (EAM) system. We present a suite of trace element concentration records (e.g. Pb, Hg, and Se) from the CLP which exemplifies how loess deposits can be used as an…

  2. From Ephemeral to Legitimate: An Inquiry into Television's Material Traces in Archival Spaces, 1950s-1970s

    ERIC Educational Resources Information Center

    Bratslavsky, Lauren Michelle

    2013-01-01

    The dissertation offers a historical inquiry about how television's material traces entered archival spaces. Material traces refer to both the moving image products and the assortment of documentation about the processes of television as industrial and creative endeavors. By identifying the development of television-specific archives and…

  3. A trace display and editing program for data from fluorescence based sequencing machines.

    PubMed

    Gleeson, T; Hillier, L

    1991-12-11

    'Ted' (Trace editor) is a graphical editor for sequence and trace data from automated fluorescence sequencing machines. It provides facilities for viewing sequence and trace data (in top or bottom strand orientation), for editing the base sequence, for automated or manual trimming of the head (vector) and tail (uncertain data) from the sequence, for vertical and horizontal trace scaling, for keeping a history of sequence editing, and for output of the edited sequence. Ted has been used extensively in the C. elegans genome sequencing project, both as a stand-alone program and integrated into the Staden sequence assembly package, and has greatly aided the efficiency and accuracy of sequence editing. It runs in the X windows environment on Sun workstations and is available from the authors. Ted currently supports sequence and trace data from the ABI 373A and Pharmacia A.L.F. sequencers.

  4. preAssemble: a tool for automatic sequencer trace data processing.

    PubMed

    Adzhubei, Alexei A; Laerdahl, Jon K; Vlasova, Anna V

    2006-01-17

    Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands. Sequence quality assessment against various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. The preAssemble pre-assembly sequence processing pipeline has been developed for small- to large-scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under the UNIX and Linux operating systems. It is available for downloading and will run as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. preAssemble is a tool for assessing the quality of sequences generated by automatic sequencing equipment. preAssemble is flexible, since both interactive jobs on the preAssemble server and a stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently, preAssemble can be used as efficiently for just a few trace files as for large-scale sequence processing.

  5. Calculating the quality of public high-throughput sequencing data to obtain a suitable subset for reanalysis from the Sequence Read Archive.

    PubMed

    Ohta, Tazro; Nakazato, Takeru; Bono, Hidemasa

    2017-06-01

    It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalyses. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. © The Authors 2017. Published by Oxford University Press.
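
The thresholding workflow the abstract describes can be sketched as a simple filter over per-experiment quality values. This is an illustrative sketch only: the field names ("accession", "mean_quality") and accessions are hypothetical, not the authors' actual schema or data.

```python
# Hedged sketch: keep only SRA experiments whose FastQC-derived mean
# quality meets a user-chosen threshold. Field names are hypothetical.

def filter_by_quality(experiments, min_mean_quality=30.0):
    """Return only experiments whose mean quality meets the threshold."""
    return [e for e in experiments if e["mean_quality"] >= min_mean_quality]

experiments = [
    {"accession": "SRX000001", "mean_quality": 36.2},
    {"accession": "SRX000002", "mean_quality": 22.8},
    {"accession": "SRX000003", "mean_quality": 31.5},
]

usable = filter_by_quality(experiments, min_mean_quality=30.0)
print([e["accession"] for e in usable])  # → ['SRX000001', 'SRX000003']
```

Raising or lowering `min_mean_quality` is exactly the knob the abstract argues repository users need in order to cut a large result set down to a reusable subset.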

  6. Calculating the quality of public high-throughput sequencing data to obtain a suitable subset for reanalysis from the Sequence Read Archive

    PubMed Central

    Nakazato, Takeru; Bono, Hidemasa

    2017-01-01

    It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalyses. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. PMID:28449062

  7. The DNA Data Bank of Japan launches a new resource, the DDBJ Omics Archive of functional genomics experiments.

    PubMed

    Kodama, Yuichi; Mashima, Jun; Kaminuma, Eli; Gojobori, Takashi; Ogasawara, Osamu; Takagi, Toshihisa; Okubo, Kousaku; Nakamura, Yasukazu

    2012-01-01

    The DNA Data Bank of Japan (DDBJ; http://www.ddbj.nig.ac.jp) maintains and provides archival, retrieval and analytical resources for biological information. The central DDBJ resource consists of public, open-access nucleotide sequence databases including raw sequence reads, assembly information and functional annotation. Database content is exchanged with EBI and NCBI within the framework of the International Nucleotide Sequence Database Collaboration (INSDC). In 2011, DDBJ launched two new resources: the 'DDBJ Omics Archive' (DOR; http://trace.ddbj.nig.ac.jp/dor) and BioProject (http://trace.ddbj.nig.ac.jp/bioproject). DOR is an archival database of functional genomics data generated by microarray and highly parallel new generation sequencers. Data are exchanged between the ArrayExpress at EBI and DOR in the common MAGE-TAB format. BioProject provides an organizational framework to access metadata about research projects and the data from the projects that are deposited into different databases. In this article, we describe major changes and improvements introduced to the DDBJ services, and the launch of two new resources: DOR and BioProject.

  8. Experimental Design-Based Functional Mining and Characterization of High-Throughput Sequencing Data in the Sequence Read Archive

    PubMed Central

    Nakazato, Takeru; Ohta, Tazro; Bono, Hidemasa

    2013-01-01

    High-throughput sequencing technology, also called next-generation sequencing (NGS), has the potential to revolutionize the whole process of genome sequencing, transcriptomics, and epigenetics. Sequencing data are captured in a public primary data archive, the Sequence Read Archive (SRA). As of January 2013, data from more than 14,000 projects had been submitted to SRA, double that of the previous year. Researchers can download raw sequence data from the SRA website to perform further analyses and to compare with their own data. However, it is extremely difficult to search entries and download raw sequences of interest from SRA because the data structure is complicated, and experimental conditions along with raw sequences are partly described in natural language. Additionally, some sequences are of inconsistent quality because anyone can submit sequencing data to SRA with no quality check. Therefore, as a criterion of data quality, we focused on SRA entries that were cited in journal articles. We extracted SRA IDs and PubMed IDs (PMIDs) from SRA and full-text versions of journal articles and retrieved 2748 SRA ID-PMID pairs. We constructed a publication list referring to SRA entries. Since one of the main themes of -omics analyses is clarification of disease mechanisms, we also characterized SRA entries by disease keywords, according to the Medical Subject Headings (MeSH) extracted from articles assigned to each SRA entry. We obtained 989 SRA ID-MeSH disease term pairs and constructed a disease list referring to SRA data. We previously developed feature profiles of diseases in a system called “Gendoo”. We generated hyperlinks between diseases extracted from SRA and their feature profiles. The project, publication and disease lists resulting from this study are available at our web service, called “DBCLS SRA” (http://sra.dbcls.jp/). This service will improve accessibility to high-quality data from SRA. PMID:24167589

  9. Trace metal depositional patterns from an open pit mining activity as revealed by archived avian gizzard contents.

    PubMed

    Bendell, L I

    2011-02-15

    Archived samples of blue grouse (Dendragapus obscurus) gizzard contents, inclusive of grit, collected yearly between 1959 and 1970 were analyzed for cadmium, lead, zinc, and copper content. Approximately halfway through the 12-year sampling period, an open-pit copper mine began activities, then ceased operations 2 years later. Thus the archived samples provided a unique opportunity to determine if avian gizzard contents, inclusive of grit, could reveal patterns in the anthropogenic deposition of trace metals associated with mining activities. Gizzard concentrations of cadmium and copper strongly coincided with the onset of opening and the closing of the pit mining activity. Gizzard zinc and lead demonstrated significant among year variation; however, maximum concentrations did not correlate to mining activity. The archived gizzard contents did provide a useful tool for documenting trends in metal depositional patterns related to an anthropogenic activity. Further, blue grouse ingesting grit particles during the time of active mining activity would have been exposed to toxicologically significant levels of cadmium. Gizzard lead concentrations were also of toxicological significance but not related to mining activity. This type of "pulse" toxic metal exposure as a consequence of open-pit mining activity would not necessarily have been revealed through a "snap-shot" of soil, plant or avian tissue trace metal analysis post-mining activity. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Evaluation of Targeted Sequencing for Transcriptional Analysis of Archival Formalin-Fixed Paraffin-Embedded (FFPE) Samples

    EPA Science Inventory

    Next-generation sequencing provides unprecedented access to genomic information in archival FFPE tissue samples. However, costs and technical challenges related to RNA isolation and enrichment limit use of whole-genome RNA-sequencing for large-scale studies of FFPE specimens. Rec...

  11. Evaluating Quality of Aged Archival Formalin-Fixed Paraffin-Embedded Samples for RNA-Sequencing

    EPA Science Inventory

    Archival formalin-fixed paraffin-embedded (FFPE) samples offer a vast, untapped source of genomic data for biomarker discovery. However, the quality of FFPE samples is often highly variable, and conventional methods to assess RNA quality for RNA-sequencing (RNA-seq) are not infor...

  12. Whole-organism clone tracing using single-cell sequencing.

    PubMed

    Alemany, Anna; Florescu, Maria; Baron, Chloé S; Peterson-Maduro, Josi; van Oudenaarden, Alexander

    2018-04-05

    Embryonic development is a crucial period in the life of a multicellular organism, during which limited sets of embryonic progenitors produce all cells in the adult body. Determining which fate these progenitors acquire in adult tissues requires the simultaneous measurement of clonal history and cell identity at single-cell resolution, which has been a major challenge. Clonal history has traditionally been investigated by microscopically tracking cells during development, monitoring the heritable expression of genetically encoded fluorescent proteins and, more recently, using next-generation sequencing technologies that exploit somatic mutations, microsatellite instability, transposon tagging, viral barcoding, CRISPR-Cas9 genome editing and Cre-loxP recombination. Single-cell transcriptomics provides a powerful platform for unbiased cell-type classification. Here we present ScarTrace, a single-cell sequencing strategy that enables the simultaneous quantification of clonal history and cell type for thousands of cells obtained from different organs of the adult zebrafish. Using ScarTrace, we show that a small set of multipotent embryonic progenitors generate all haematopoietic cells in the kidney marrow, and that many progenitors produce specific cell types in the eyes and brain. In addition, we study when embryonic progenitors commit to the left or right eye. ScarTrace reveals that epidermal and mesenchymal cells in the caudal fin arise from the same progenitors, and that osteoblast-restricted precursors can produce mesenchymal cells during regeneration. Furthermore, we identify resident immune cells in the fin with a distinct clonal origin from other blood cell types. We envision that similar approaches will have major applications in other experimental systems, in which the matching of embryonic clonal origin to adult cell type will ultimately allow reconstruction of how the adult body is built from a single cell.

  13. Automated method for tracing leading and trailing processes of migrating neurons in confocal image sequences

    NASA Astrophysics Data System (ADS)

    Kerekes, Ryan A.; Gleason, Shaun S.; Trivedi, Niraj; Solecki, David J.

    2010-03-01

    Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most likely direction of the leading process. This direction is found at each step by examining second derivatives of fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to form an improved trace that more closely follows the approximate centerline of the leading process. We apply a similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We show that the automated traces closely approximate ground truth traces to within 1 or 2 pixels on average. Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
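
The direction-selection step described above (examining second derivatives of fluorescence along curves of constant radius around the current point) can be illustrated with a small sketch. This is an idea-level illustration under stated assumptions, not the authors' implementation: it samples intensity on a circle, takes the discrete circular second derivative with respect to angle, and steps toward the angle where the intensity ridge is sharpest (most negative second derivative).

```python
import numpy as np

def next_step(image, point, radius=3.0, n_angles=36):
    """Return the next trace point by stepping toward the strongest
    fluorescence ridge on a circle of constant radius around `point`.
    `point` is (row, col); this is a hedged sketch, not the paper's code."""
    h, w = image.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # Sample intensity along the circle (clipped to the image bounds).
    ys = np.clip(np.round(point[0] + radius * np.sin(angles)).astype(int), 0, h - 1)
    xs = np.clip(np.round(point[1] + radius * np.cos(angles)).astype(int), 0, w - 1)
    profile = image[ys, xs]
    # Discrete second derivative along the circular angular profile:
    # d2[i] = profile[i+1] - 2*profile[i] + profile[i-1] (wrapping around).
    d2 = np.roll(profile, -1) - 2.0 * profile + np.roll(profile, 1)
    best = angles[np.argmin(d2)]  # sharpest intensity ridge
    return (point[0] + radius * np.sin(best), point[1] + radius * np.cos(best))

# Toy image: a bright horizontal "process" along row 10.
img = np.zeros((21, 21))
img[10, :] = 1.0
nxt = next_step(img, (10.0, 5.0))  # steps along the bright row
```

In a full tracer this step would be repeated from the soma outward, terminating after a fixed number of steps or when intensity drops below a threshold, as the abstract describes.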

  14. Detection of viral sequences in archival spinal cords from fatal cases of poliomyelitis in 1951-1952.

    PubMed

    Rekand, Tiina; Male, Rune; Myking, Andreas O; Nygaard, Svein J T; Aarli, Johan A; Haarr, Lars; Langeland, Nina

    2003-12-01

    Poliovirus (PV) subjected to genetic characterization is often isolated from faecal carriage. Such virus is not necessarily identical to the virus causing paralytic disease, since genetic modifications may occur during replication outside the nervous system. We searched for poliovirus genomes in the 14 fatal cases from the last epidemics in Norway in 1951-1952. A method was developed for isolation and analysis of poliovirus RNA from formalin-fixed and paraffin-embedded archival tissue. RNA was purified by incubation with Chelex-100 and heating, followed by treatment with proteinase K and chloroform extraction. Viral sequences were amplified by reverse transcriptase-polymerase chain reaction (RT-PCR), and the products were subjected to TA cloning and sequenced. RNA from the beta-actin gene, as a control, was identified in 13 cases, while sequences specific for poliovirus were obtained in 11 cases. The sequences from the 2C region of poliovirus were rather conserved, while those in the 5'-untranslated region were variable. The method developed should also be suitable for other genetic studies of old archival material.

  15. High resolution trace element records from the deep sea hydrocoral Stylaster venustus: Implications for stylasterids as a paleoceanographic archive

    NASA Astrophysics Data System (ADS)

    Aranha, R. S.; Layne, G. D.; Edinger, E.; Piercey, G.

    2009-12-01

    Stylasterids are one of the lesser known groups of deep sea corals, but appear to have potential to serve as viable geochemical archives for reconstructing temperature, salinity and nutrient regimes in the deep ocean. This group of hydrocorals is present in most, if not all, of the world’s major oceans. Stylasterid species dominantly have aragonitic skeletons, with a small percentage of species having calcitic skeletons (1). A recent study on the biomineralization of a deep sea stylasterid (Errina dabneyi) has revealed that during the organism’s growth, a steady dissolution and reprecipitation of skeletal material occurs in the central canals of the skeleton. This skeletal modification likely alters the stable isotope and/or trace element profiles of these corals, making them potentially less reliable as geochemical archives, depending on the scale of sampling (2). Recent specimens of Stylaster venustus were collected in July 2008 from the Olympic Coast National Marine Sanctuary off the coast of Washington at depths of 200-350 m. We used a Cameca IMS 4f Secondary Ion Mass Spectrometer (SIMS) to perform high spatial resolution (<25 µm) spot analyses of Sr/Ca, Mg/Ca and Na/Ca in detailed traverses across the basal cross-sections of three of these specimens. We identified the remineralized material by remnant porous texture and/or a substantially different trace element composition. Spot analyses corresponding to the remineralized material were eliminated from the dataset. In all three specimens we observed a pronounced inverse correlation (r = -0.36) between the Mg/Ca and Sr/Ca profiles throughout the length of the transects. A positive correlation (r = 0.46) between the Na/Ca and Mg/Ca profiles was also noted in two of the specimens analyzed. These correlations strongly imply that the coral skeleton is recording either cyclical or episodic variations in temperature, with possible overprinting from other environmental variation. The exact relationship between the…

  16. A new sequence for single-shot diffusion-weighted NMR spectroscopy by the trace of the diffusion tensor.

    PubMed

    Valette, Julien; Giraudeau, Céline; Marchadour, Charlotte; Djemai, Boucif; Geffroy, Françoise; Ghaly, Mohamed Ahmed; Le Bihan, Denis; Hantraye, Philippe; Lebon, Vincent; Lethimonnier, Franck

    2012-12-01

    Diffusion-weighted spectroscopy is a unique tool for exploring the intracellular microenvironment in vivo. In living systems, diffusion may be anisotropic, when biological membranes exhibit particular orientation patterns. In this work, a volume selective diffusion-weighted sequence is proposed, allowing single-shot measurement of the trace of the diffusion tensor, which does not depend on tissue anisotropy. With this sequence, the minimal echo time is only three times the diffusion time. In addition, cross-terms between diffusion gradients and other gradients are cancelled out. An adiabatic version, similar to localization by adiabatic selective refocusing sequence, is then derived, providing partial immunity against cross-terms. Proof of concept is performed ex vivo on chicken skeletal muscle by varying tissue orientation and intra-voxel shim. In vivo performance of the sequence is finally illustrated in a U87 glioblastoma mouse model, allowing the measurement of the trace apparent diffusion coefficient for six metabolites, including J-modulated metabolites. Although measurement performed along three separate orthogonal directions would bring similar accuracy on trace apparent diffusion coefficient under ideal conditions, the method described here should be useful for probing intimate properties of the cells with minimal experimental bias. Copyright © 2012 Wiley Periodicals, Inc.
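
The rotational invariance that motivates measuring the trace can be stated compactly. The following is the standard diffusion-tensor relation underlying such sequences, not a formula taken from the paper itself:

```latex
% Mean diffusivity as one third of the trace of the diffusion tensor D.
\[
\mathrm{ADC}_{\mathrm{trace}}
  \;=\; \tfrac{1}{3}\,\mathrm{Tr}(\mathbf{D})
  \;=\; \tfrac{1}{3}\left(D_{xx} + D_{yy} + D_{zz}\right)
\]
% Equivalently, the average of ADCs measured along any three
% orthogonal directions; Tr(D) is invariant under rotation.
```

Because the trace is invariant under rotation of the coordinate system, a single-shot trace measurement does not depend on tissue orientation, whereas an ADC measured along a single gradient direction does; this is why the abstract can claim equivalence with three separate orthogonal measurements while avoiding their extra experimental bias.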

  17. Real-time detection of BRAF V600E mutation from archival hairy cell leukemia FFPE tissue by nanopore sequencing.

    PubMed

    Vacca, Davide; Cancila, Valeria; Gulino, Alessandro; Lo Bosco, Giosuè; Belmonte, Beatrice; Di Napoli, Arianna; Florena, Ada Maria; Tripodo, Claudio; Arancio, Walter

    2018-02-01

    The MinION is a miniaturized high-throughput next-generation sequencing platform of novel design. The use of nucleic acids derived from formalin-fixed paraffin-embedded samples is highly desirable, but their adoption for molecular assays is hindered by the high degree of fragmentation and by the chemical-induced mutations stemming from the fixation protocols. To investigate the suitability of MinION sequencing for formalin-fixed paraffin-embedded samples, the presence and frequency of the BRAF c.1799T > A mutation was investigated in two archival tissue specimens of Hairy cell leukemia and Hairy cell leukemia Variant. Despite the poor quality of the starting DNA, the BRAF mutation was successfully detected in the Hairy cell leukemia sample, with around 50% of the reads obtained within 2 h of the sequencing start. Notably, the mutational burden of the Hairy cell leukemia sample as derived from nanopore sequencing proved comparable to that from a sensitive method for the detection of point mutations, namely Digital PCR, using a validated assay. Nanopore sequencing can thus be adopted for targeted sequencing of genetic lesions in critical DNA samples such as those extracted from archival routine formalin-fixed paraffin-embedded samples. This result suggests that nanopore sequencing could be reliably adopted for real-time targeted sequencing of genetic lesions. Our report opens the window for the adoption of nanopore sequencing in molecular pathology for research and diagnostics.

  18. MetaSRA: normalized human sample-specific metadata for the Sequence Read Archive.

    PubMed

    Bernstein, Matthew N; Doan, AnHai; Dewey, Colin N

    2017-09-15

    The NCBI's Sequence Read Archive (SRA) promises great biological insight if one could analyze the data in the aggregate; however, the data remain largely underutilized, in part, due to the poor structure of the metadata associated with each sample. The rules governing submissions to the SRA do not dictate a standardized set of terms that should be used to describe the biological samples from which the sequencing data are derived. As a result, the metadata include many synonyms, spelling variants and references to outside sources of information. Furthermore, manual annotation of the data remains intractable due to the large number of samples in the archive. For these reasons, it has been difficult to perform large-scale analyses that study the relationships between biomolecular processes and phenotype across diverse diseases, tissues and cell types present in the SRA. We present MetaSRA, a database of normalized SRA human sample-specific metadata following a schema inspired by the metadata organization of the ENCODE project. This schema involves mapping samples to terms in biomedical ontologies, labeling each sample with a sample-type category, and extracting real-valued properties. We automated these tasks via a novel computational pipeline. The MetaSRA is available at metasra.biostat.wisc.edu via both a searchable web interface and bulk downloads. Software implementing our computational pipeline is available at http://github.com/deweylab/metasra-pipeline. cdewey@biostat.wisc.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
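
The three normalization tasks named in the abstract (mapping samples to ontology terms, assigning a sample-type category, extracting real-valued properties) can be pictured with a toy record. The field names, accession, and term IDs below are invented placeholders for illustration, not MetaSRA's actual output format.

```python
# Hypothetical sketch of a normalized sample record in the spirit of the
# MetaSRA schema: ontology mappings, a sample-type category, and
# real-valued properties pulled out of free-text metadata.

raw_metadata = {"source_name": "lung tissue, donor age 36 yr"}  # free text

normalized = {
    "sample_accession": "SRS000001",          # placeholder accession
    "mapped_ontology_terms": [
        {"id": "UBERON:0000000", "label": "lung"},  # placeholder term ID
    ],
    "sample_type": "tissue",                  # one category per sample
    "real_valued_properties": [
        {"property": "age", "value": 36.0, "unit": "year"},
    ],
}
```

The payoff of this structure is that queries like "all tissue samples mapped to a lung term from donors over 30" become simple filters instead of free-text searches over inconsistent submitter vocabulary.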

  19. Peat Bog Archives: from human history, vegetation change and Holocene climate, to atmospheric dusts and trace elements of natural and anthropogenic origin

    NASA Astrophysics Data System (ADS)

    Shotyk, William

    2010-05-01

    For at least two centuries, peat has been recognized as an excellent archive of environmental change. William Rennie (1807), for example, interpreted stratigraphic changes in Scottish bogs not only in terms of natural changes in paleoclimate, but was also able to identify environmental changes induced by humans, namely deforestation and the hydrological impacts which result from such activities. The use of bogs as archives of climate change in the early 20th century was accelerated by studies of fossil plant remains such as those by Lewis in Scotland, and by systematic investigations of pollen grains pioneered by von Post in Sweden. In Denmark, Glob outlined the remarkably well-preserved remains of bog bodies and associated artefacts (of cloth, wood, ceramic and metal) in Danish bogs. In Britain, Godwin provided an introduction to the use of bogs as archives of human history, vegetation change, and Holocene climate, with a more recent survey provided by Charman. Recent decades have provided many mineralogical studies of peat and there is growing evidence that many silicate minerals, whether derived from the surrounding watershed or the atmosphere (soil-derived dusts and particles emitted from volcanoes), also are well preserved in anoxic peatland waters. Similarly, geochemical studies have shown that a long list of trace metals, of both natural and anthropogenic origin, also are remarkably well preserved in peat bogs. Thus, there is growing evidence that ombrotrophic (ie 'rain-fed') peat bogs are reliable archives of atmospheric deposition of a wide range of trace elements, including conservative, lithogenic metals such as Al, Sc, Ti, Y, Zr, Hf and the REE, but also the potentially toxic Class B, or 'heavy metals' such as Cu, Ag, Hg, Pb, Sb and Tl. 
    When high-quality measurements of these elements are combined with accurate radiometric age dating, it becomes possible to create high-resolution reconstructions of atmospheric soil dust fluxes, ancient and modern metal…

  20. Evolution of Archival Storage (from Tape to Memory)

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram K.

    2015-01-01

    Over the last three decades, there has been a significant evolution in storage technologies supporting archival of remote sensing data. This section provides a brief survey of how these technologies have evolved. Three main technologies are considered: tape, hard disk and solid state disk. Their historical evolution is traced, summarizing how reductions in cost have made it possible to store larger volumes of data on faster media. The cost per GB of media is only one of the considerations in determining the best approach to archival storage. Active archives generally require faster response to user requests for data than permanent archives. Archive costs must also account for facilities and other capital costs, operations costs, software licenses, utilities costs, etc. To meet the requirements of any organization, a mix of technologies is typically needed.
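
The point that media cost per GB is only one component of archive cost can be made concrete with a back-of-the-envelope total-cost model. All numbers below are invented purely for illustration; they are not figures from the survey.

```python
# Hypothetical sketch: total cost of ownership for an archive is media
# cost plus the fixed components the text lists (facilities, operations,
# licenses, utilities). Every number here is an invented example.

def total_archive_cost(volume_gb, media_cost_per_gb, facilities=0.0,
                       operations=0.0, licenses=0.0, utilities=0.0):
    """Media cost scales with volume; the other components are fixed."""
    return (volume_gb * media_cost_per_gb
            + facilities + operations + licenses + utilities)

# A medium cheaper per GB can still cost more overall if its fixed
# costs (e.g. robotic library facilities, operations staff) are higher.
tape = total_archive_cost(1_000_000, 0.01, facilities=50_000, operations=120_000)
disk = total_archive_cost(1_000_000, 0.03, facilities=20_000, operations=60_000)
```

With these made-up inputs the cheaper-per-GB medium comes out more expensive in total, which is exactly why the section argues that archives typically end up with a mix of technologies.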

  21. Whole Transcriptome Sequencing Enables Discovery and Analysis of Viruses in Archived Primary Central Nervous System Lymphomas

    PubMed Central

    DeBoever, Christopher; Reid, Erin G.; Smith, Erin N.; Wang, Xiaoyun; Dumaop, Wilmar; Harismendy, Olivier; Carson, Dennis; Richman, Douglas; Masliah, Eliezer; Frazer, Kelly A.

    2013-01-01

    Primary central nervous system lymphomas (PCNSL) have a dramatically increased prevalence among persons living with AIDS and are known to be associated with Epstein-Barr virus (EBV) infection. Previous work suggests that in some cases, co-infection with other viruses may be important for PCNSL pathogenesis. Viral transcription in tumor samples can be measured using next generation transcriptome sequencing. We demonstrate the ability of transcriptome sequencing to identify viruses, characterize viral expression, and identify viral variants by sequencing four archived AIDS-related PCNSL tissue samples and analyzing raw sequencing reads. EBV was detected in all four PCNSL samples, and cytomegalovirus (CMV), JC polyomavirus (JCV), and HIV were also discovered, consistent with clinical diagnoses. CMV was found to express three long non-coding RNAs recently reported as expressed during active infection. Single nucleotide variants were observed in each of the viruses observed, and three indels were found in CMV. No viruses were found in several control tumor types, including 32 diffuse large B-cell lymphoma samples. This study demonstrates the ability of next generation transcriptome sequencing to accurately identify viruses, including DNA viruses, in solid human cancer tissue samples. PMID:24023918

  2. DNApod: DNA polymorphism annotation database from next-generation sequence read archives.

    PubMed

    Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu

    2017-01-01

    With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information.

  3. DNApod: DNA polymorphism annotation database from next-generation sequence read archives

    PubMed Central

    Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu

    2017-01-01

    With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information. PMID:28234924

  4. Dose-Response Analysis of RNA-Seq Profiles in Archival ...

    EPA Pesticide Factsheets

    Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses using RNA-sequencing in paired FFPE and frozen (FROZ) samples from two archival studies in mice, one 20 years old. Experimental treatments included 3 different doses of di(2-ethylhexyl)phthalate or dichloroacetic acid for the recently archived and older studies, respectively. Total RNA was ribo-depleted and sequenced using the Illumina HiSeq platform. In the recently archived study, FFPE samples had 35% lower total counts compared to FROZ samples but high concordance in fold-change values of differentially expressed genes (DEGs) (r2 = 0.99), highly enriched pathways (90% overlap with FROZ), and benchmark dose estimates for preselected target genes (2% difference vs FROZ). In contrast, older FFPE samples had markedly lower total counts (3% of FROZ) and poor concordance in global DEGs and pathways. However, counts from FFPE and FROZ samples still positively correlated (r2 = 0.84 across all transcripts) and showed comparable dose responses for more highly expressed target genes. These findings highlight potential applications and issues in using RNA-sequencing data from FFPE samples. Recently archived FFPE samples were highly similar to FROZ samples in sequencing q
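
    A minimal sketch of the concordance metric quoted above (r² = 0.99 between FFPE and FROZ fold changes): the squared Pearson correlation of paired log2 fold-change estimates. The fold-change values below are invented for illustration.

```python
# Squared Pearson correlation (r^2) between paired log2 fold-change estimates,
# the concordance measure described in the abstract. Data are made up.

def pearson_r2(xs, ys):
    """r^2 of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Hypothetical per-gene log2 fold changes from matched preservation methods:
ffpe_log2fc = [1.9, -0.8, 3.1, 0.2, -2.4]
frozen_log2fc = [2.0, -1.0, 3.0, 0.1, -2.5]
```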

  5. Tracing the origin of disseminated tumor cells in breast cancer using single-cell sequencing.

    PubMed

    Demeulemeester, Jonas; Kumar, Parveen; Møller, Elen K; Nord, Silje; Wedge, David C; Peterson, April; Mathiesen, Randi R; Fjelldal, Renathe; Zamani Esteki, Masoud; Theunis, Koen; Fernandez Gallardo, Elia; Grundstad, A Jason; Borgen, Elin; Baumbusch, Lars O; Børresen-Dale, Anne-Lise; White, Kevin P; Kristensen, Vessela N; Van Loo, Peter; Voet, Thierry; Naume, Bjørn

    2016-12-09

    Single-cell micro-metastases of solid tumors often occur in the bone marrow. These disseminated tumor cells (DTCs) may resist therapy and lie dormant or progress to cause overt bone and visceral metastases. The molecular nature of DTCs remains elusive, as do the timing and location of their origin within the tumor. Here, we apply single-cell sequencing to identify and trace the origin of DTCs in breast cancer. We sequence the genomes of 63 single cells isolated from six non-metastatic breast cancer patients. By comparing the cells' DNA copy number aberration (CNA) landscapes with those of the primary tumors and lymph node metastasis, we establish that 53% of the single cells morphologically classified as tumor cells are DTCs disseminating from the observed tumor. The remaining cells represent either non-aberrant "normal" cells or "aberrant cells of unknown origin" that have CNA landscapes discordant from the tumor. Further analyses suggest that the prevalence of aberrant cells of unknown origin is age-dependent and that at least a subset is hematopoietic in origin. Evolutionary reconstruction analysis of bulk tumor and DTC genomes enables ordering of CNA events in molecular pseudo-time and traces the origin of the DTCs to either the main tumor clone, primary tumor subclones, or subclones in an axillary lymph node metastasis. Single-cell sequencing of bone marrow epithelial-like cells, in parallel with intra-tumor genetic heterogeneity profiling from bulk DNA, is a powerful approach to identify and study DTCs, yielding insight into metastatic processes. A heterogeneous population of CNA-positive cells is present in the bone marrow of non-metastatic breast cancer patients, only part of which are derived from the observed tumor lineages.

  6. Basecalling with LifeTrace

    PubMed Central

    Walther, Dirk; Bartha, Gábor; Morris, Macdonald

    2001-01-01

    A pivotal step in electrophoresis sequencing is the conversion of the raw, continuous chromatogram data into the actual sequence of discrete nucleotides, a process referred to as basecalling. We describe a novel algorithm for basecalling implemented in the program LifeTrace. Like Phred, currently the most widely used basecalling software program, LifeTrace takes processed trace data as input. It was designed to be tolerant to variable peak spacing by means of an improved peak-detection algorithm that emphasizes local chromatogram information over global properties. LifeTrace is shown to generate high-quality basecalls and reliable quality scores. It proved particularly effective when applied to MegaBACE capillary sequencing machines. In a benchmark test of 8372 dye-primer MegaBACE chromatograms, LifeTrace generated 17% fewer substitution errors, 16% fewer insertion/deletion errors, and 2.4% more aligned bases to the finished sequence than did Phred. For two sets totaling 6624 dye-terminator chromatograms, the performance improvement was 15% fewer substitution errors, 10% fewer insertion/deletion errors, and 2.1% more aligned bases. The processing time required by LifeTrace is comparable to that of Phred. The predicted quality scores were in line with observed quality scores, permitting direct use for quality clipping and in silico single nucleotide polymorphism (SNP) detection. Furthermore, we introduce a new type of quality score associated with every basecall: the gap-quality. It estimates the probability of a deletion error between the current and the following basecall. This additional quality score improves detection of single basepair deletions when used for locating potential basecalling errors during the alignment. We also describe a new protocol for benchmarking that we believe better discerns basecaller performance differences than methods previously published. PMID:11337481
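
    The peak-detection idea credited with LifeTrace's tolerance to variable peak spacing, deciding peak-hood from a local window rather than a global spacing model, can be sketched as follows. This is an illustrative toy, not the published algorithm; the window size and height threshold are arbitrary.

```python
# Toy local peak detection on a chromatogram-like trace: a position is a peak
# if it is the strict maximum of its local window and above a noise floor.
# Not the LifeTrace algorithm, just the "local over global" idea.

def find_peaks(trace, window=3, min_height=10):
    """Indices where trace[i] is the strict maximum of its local window."""
    peaks = []
    for i in range(len(trace)):
        lo, hi = max(0, i - window), min(len(trace), i + window + 1)
        neighborhood = trace[lo:hi]
        if trace[i] >= min_height and trace[i] == max(neighborhood) \
                and neighborhood.count(trace[i]) == 1:
            peaks.append(i)
    return peaks

# A trace with unevenly spaced peaks (variable peak spacing):
signal = [0, 2, 30, 4, 0, 1, 25, 3, 0, 0, 0, 0, 40, 2, 0]
```

    Because each decision uses only the surrounding window, drifting peak spacing does not break detection the way a fixed global spacing estimate would.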

  7. Sequence History Update Tool

    NASA Technical Reports Server (NTRS)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; DelGuercio, Chris

    2008-01-01

    The Sequence History Update Tool performs Web-based sequence statistics archiving for Mars Reconnaissance Orbiter (MRO). Using a single UNIX command, the software takes advantage of sequencing conventions to automatically extract the needed statistics from multiple files. This information is then used to populate a database, which PHP scripts seamlessly format into a dynamic Web page. This tool replaces a previously tedious and error-prone process of manually editing HTML code to construct a Web-based table. Because the tool manages all of the statistics gathering and file delivery to and from multiple data sources spread across multiple servers, there is also a considerable savings of time and effort. With the Sequence History Update Tool, what previously took minutes is now done in less than 30 seconds, and the result is a more accurate archival record of the sequence commanding for MRO.
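
    A hypothetical sketch of the pattern such a tool automates: parse statistics out of conventionally named sequence files, store them in a database, and render a Web table. The filename convention, schema, and field names here are invented; the actual tool's conventions are MRO-specific.

```python
# Invented example of the automate-extraction-to-web-table pattern:
# filename convention -> database rows -> HTML table.
import re
import sqlite3

# Hypothetical convention: <sequence_name>_<sol>.txt
SEQ_NAME = re.compile(r"(?P<seq>\w+)_(?P<sol>\d+)\.txt$")

def record_sequences(filenames):
    """Parse conforming filenames and store one row per sequence product."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE seq_history (name TEXT, sol INTEGER)")
    for fn in filenames:
        m = SEQ_NAME.search(fn)
        if m:
            db.execute("INSERT INTO seq_history VALUES (?, ?)",
                       (m.group("seq"), int(m.group("sol"))))
    return db

def html_table(db):
    """Render the stored history as a minimal HTML table."""
    rows = db.execute("SELECT name, sol FROM seq_history ORDER BY sol").fetchall()
    cells = "".join(f"<tr><td>{n}</td><td>{s}</td></tr>" for n, s in rows)
    return f"<table>{cells}</table>"
```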

  8. The ExoMars Rover Science Archive: Status and Plans

    NASA Astrophysics Data System (ADS)

    Heather, D.; Lim, T.; Metcalfe, L.

    2017-09-01

    The ExoMars program is a co-operation between ESA and Roscosmos comprising two missions: the first, launched on 14 March 2016, included the Trace Gas Orbiter and Schiaparelli lander; the second, due for launch in 2020, will be a Rover and Surface Platform (RSP). The archiving and management of the science data returned by ExoMars will require a significant development effort for the new Planetary Science Archive (PSA). The ExoMars Rover and Surface Platform deliveries will be among the first data in the PSA formatted according to the new PDS4 Standards, and will be the first rover data ever hosted in the archive. This presentation will outline the current plans for archiving the ExoMars Rover and Surface Platform science data.

  9. Evaluation and Adaptation of a Laboratory-Based cDNA Library Preparation Protocol for Retrospective Sequencing of Archived MicroRNAs from up to 35-Year-Old Clinical FFPE Specimens

    PubMed Central

    Loudig, Olivier; Wang, Tao; Ye, Kenny; Lin, Juan; Wang, Yihong; Ramnauth, Andrew; Liu, Christina; Stark, Azadeh; Chitale, Dhananjay; Greenlee, Robert; Multerer, Deborah; Honda, Stacey; Daida, Yihe; Spencer Feigelson, Heather; Glass, Andrew; Couch, Fergus J.; Rohan, Thomas; Ben-Dov, Iddo Z.

    2017-01-01

    Formalin-fixed paraffin-embedded (FFPE) specimens, when used in conjunction with patient clinical data history, represent an invaluable resource for molecular studies of cancer. Even though nucleic acids extracted from archived FFPE tissues are degraded, their molecular analysis has become possible. In this study, we optimized a laboratory-based next-generation sequencing barcoded cDNA library preparation protocol for analysis of small RNAs recovered from archived FFPE tissues. Using matched fresh and FFPE specimens, we evaluated the robustness and reproducibility of our optimized approach, as well as its applicability to archived clinical specimens stored for up to 35 years. We then evaluated this cDNA library preparation protocol by performing a miRNA expression analysis of archived breast ductal carcinoma in situ (DCIS) specimens, selected for their relation to the risk of subsequent breast cancer development and obtained from six different institutions. Our analyses identified six miRNAs (miR-29a, miR-221, miR-375, miR-184, miR-363, miR-455-5p) differentially expressed between DCIS lesions from women who subsequently developed an invasive breast cancer (cases) and women who did not develop invasive breast cancer within the same time interval (control). Our thorough evaluation and application of this laboratory-based miRNA sequencing analysis indicates that the preparation of small RNA cDNA libraries can reliably be performed on older, archived, clinically-classified specimens. PMID:28335433

  10. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground-based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  11. TraceContract: A Scala DSL for Trace Analysis

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Havelund, Klaus

    2011-01-01

    In this paper we describe TRACECONTRACT, an API for trace analysis, implemented in the SCALA programming language. We argue that for certain forms of trace analysis the best weapon is a high level programming language augmented with constructs for temporal reasoning. A trace is a sequence of events, which may for example be generated by a running program, instrumented appropriately to generate events. The API supports writing properties in a notation that combines an advanced form of data parameterized state machines with temporal logic. The implementation utilizes SCALA's support for defining internal Domain Specific Languages (DSLs). Furthermore SCALA's combination of object oriented and functional programming features, including partial functions and pattern matching, makes it an ideal host language for such an API.
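
    The kind of data-parameterized temporal property TraceContract expresses can be approximated in plain Python (the real tool is a Scala internal DSL). The sketch below checks one example property over a trace of events: every open(resource) must eventually be followed by a matching close(resource).

```python
# A minimal trace-analysis monitor in the spirit of the API described above:
# events are (action, resource) pairs, and the property is data-parameterized
# on the resource. This is an illustrative stand-in, not TraceContract itself.

def check_open_close(trace):
    """Return the list of property violations found in the event trace."""
    open_resources = set()
    violations = []
    for action, resource in trace:
        if action == "open":
            open_resources.add(resource)
        elif action == "close":
            if resource not in open_resources:
                violations.append(f"close without open: {resource}")
            open_resources.discard(resource)
    # Anything still open at end of trace violates the "eventually closed" part.
    violations.extend(f"never closed: {r}" for r in sorted(open_resources))
    return violations
```

    A full DSL would let users compose many such parameterized state machines with temporal-logic operators; this shows only a single hand-coded property.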

  12. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  13. Identification of extracellular miRNA in archived serum samples by next-generation sequencing from RNA extracted using multiple methods.

    PubMed

    Gautam, Aarti; Kumar, Raina; Dimitrov, George; Hoke, Allison; Hammamieh, Rasha; Jett, Marti

    2016-10-01

    miRNAs act as important regulators of gene expression by promoting mRNA degradation or by attenuating protein translation. Since miRNAs are stably expressed in bodily fluids, there is growing interest in profiling these miRNAs, as it is minimally invasive and cost-effective as a diagnostic matrix. A technical hurdle in studying miRNA dynamics is the ability to reliably extract miRNA as small sample volumes and low RNA abundance create challenges for extraction and downstream applications. The purpose of this study was to develop a pipeline for the recovery of miRNA using small volumes of archived serum samples. The RNA was extracted employing several widely utilized RNA isolation kits/methods with and without addition of a carrier. The small RNA library preparation was carried out using Illumina TruSeq small RNA kit and sequencing was carried out using Illumina platform. A fraction of five microliters of total RNA was used for library preparation as quantification is below the detection limit. We were able to profile miRNA levels in serum from all the methods tested. We found out that addition of nucleic acid based carrier molecules had higher numbers of processed reads but it did not enhance the mapping of any miRBase annotated sequences. However, some of the extraction procedures offer certain advantages: RNA extracted by TRIzol seemed to align to the miRBase best; extractions using TRIzol with carrier yielded higher miRNA-to-small RNA ratios. Nuclease free glycogen can be carrier of choice for miRNA sequencing. Our findings illustrate that miRNA extraction and quantification is influenced by the choice of methodologies. Addition of nucleic acid- based carrier molecules during extraction procedure is not a good choice when assaying miRNA using sequencing. The careful selection of an extraction method permits the archived serum samples to become valuable resources for high-throughput applications.

  14. TraceContract

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Barringer, Howard

    2012-01-01

    TraceContract is an API (Application Programming Interface) for trace analysis. A trace is a sequence of events, and can, for example, be generated by a running program, instrumented appropriately to generate events. An event can be any data object. An example of a trace is a log file containing events that a programmer has found important to record during a program execution. TraceContract takes as input such a trace together with a specification formulated using the API and reports on any violations of the specification, potentially calling code (reactions) to be executed when violations are detected. The software is developed as an internal DSL (Domain Specific Language) in the Scala programming language. Scala is a relatively new programming language that is specifically convenient for defining such internal DSLs due to a number of language characteristics. This includes Scala's elegant combination of object-oriented and functional programming, a succinct notation, and an advanced type system. The DSL offers a combination of data-parameterized state machines and temporal logic, which is novel. As an extension of Scala, it is a very expressive and convenient log file analysis framework.

  15. Hierarchical Traces for Reduced NSM Memory Requirements

    NASA Astrophysics Data System (ADS)

    Dahl, Torbjørn S.

    This paper presents work on using hierarchical long term memory to reduce the memory requirements of nearest sequence memory (NSM) learning, a previously published, instance-based reinforcement learning algorithm. A hierarchical memory representation reduces the memory requirements by allowing traces to share common sub-sequences. We present moderated mechanisms for estimating discounted future rewards and for dealing with hidden state using hierarchical memory. We also present an experimental analysis of how the sub-sequence length affects the memory compression achieved and show that the reduced memory requirements do not affect the speed of learning. Finally, we analyse and discuss the persistence of the sub-sequences independent of specific trace instances.
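
    The sharing of common sub-sequences can be illustrated with a prefix trie: traces that begin with the same observations store that shared prefix once instead of once per trace. This sketch shows only the memory-compression idea, not the NSM learning mechanism itself.

```python
# Prefix-trie illustration of sub-sequence sharing: common prefixes of traces
# are stored once. Only the compression idea, not NSM learning.

def build_trie(traces):
    """Insert each trace (a sequence of observations) into a nested-dict trie."""
    root = {}
    for trace in traces:
        node = root
        for obs in trace:
            node = node.setdefault(obs, {})
    return root

def trie_size(node):
    """Number of stored observations (edges) in the trie."""
    return sum(1 + trie_size(child) for child in node.values())

# Three traces sharing the prefix "abc":
traces = [list("abcx"), list("abcy"), list("abcz")]
flat_size = sum(len(t) for t in traces)      # observations stored without sharing
shared_size = trie_size(build_trie(traces))  # "abc" stored once, plus 3 suffixes
```

    Here the flat representation stores 12 observations while the trie stores 6; the gain grows with the length of the shared sub-sequences.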

  16. Unfinished Business: The Uneven Past and Uncertain Future of One Historically Black University's Archives--A Personal Reflection

    ERIC Educational Resources Information Center

    Pevar, Susan Gunn

    2011-01-01

    This article presents a perspective on how the restructuring of a historically black university's library and resulting closure of its special collections and archives puts important records pertaining to African American history in jeopardy. This article traces the recent history of special collections and archives at the Lincoln University…

  17. DNA sequence chromatogram browsing using JAVA and CORBA.

    PubMed

    Parsons, J D; Buehler, E; Hillier, L

    1999-03-01

    DNA sequence chromatograms (traces) are the primary data source for all large-scale genomic and expressed sequence tags (ESTs) sequencing projects. Access to the sequencing trace assists many later analyses, for example contig assembly and polymorphism detection, but obtaining and using traces is problematic. Traces are not collected and published centrally, they are much larger than the base calls derived from them, and viewing them requires the interactivity of a local graphical client with local data. To provide efficient global access to DNA traces, we developed a client/server system based on flexible Java components integrated into other applications including an applet for use in a WWW browser and a stand-alone trace viewer. Client/server interaction is facilitated by CORBA middleware which provides a well-defined interface, a naming service, and location independence. [The software is packaged as a Jar file available from the following URL: http://www.ebi.ac.uk/jparsons. Links to working examples of the trace viewers can be found at http://corba.ebi.ac.uk/EST. All the Washington University mouse EST traces are available for browsing at the same URL.]

  18. Sanger and Next-Generation Sequencing data for characterization of CTL epitopes in archived HIV-1 proviral DNA.

    PubMed

    Tumiotto, Camille; Riviere, Lionel; Bellecave, Pantxika; Recordon-Pinson, Patricia; Vilain-Parce, Alice; Guidicelli, Gwenda-Line; Fleury, Hervé

    2017-01-01

    One strategy for curing HIV-1 infection is a therapeutic vaccine that stimulates cytotoxic CD8-positive T cells (CTL), which are Human Leucocyte Antigen (HLA)-restricted. The lack of efficacy of previous vaccination strategies may have been due to the immunogenic peptides used, which could differ from a patient's virus epitopes and lead to a poor CTL response. To counteract this lack of specificity, conserved epitopes must be targeted. One alternative is to gather as much data as possible, from a large number of patients, on their archived HIV-1 proviral epitope variants, taking into account their genetic background to select the best-presented CTL epitopes. To process the big data generated by Next-Generation Sequencing (NGS) of the DNA of HIV-infected patients, we have developed a software package called TutuGenetics. This tool takes as input an alignment derived from either Sanger or NGS files, HLA typing, the target gene, and a CTL epitope list. It performs automatic translation after correcting the alignment between the HxB2 reference and the reads, followed by automatic calculation of the MHC IC50 value for each epitope variant and HLA allele of the patient using NetMHCpan 3.0, producing a csv file as output. We validated this new tool by comparing Sanger and NGS (454, Roche) sequences obtained from the proviral DNA of patients with successful ART included in the Provir Latitude 45 study, and found a 90% correlation between the quantitative results of NGS and Sanger. This automated analysis, combined with complementary samples, should yield more data on archived CTL epitopes according to patients' HLA alleles, and will be useful for screening epitopes that are in theory presented efficiently to the HLA groove and thus constitute promising immunogenic peptides for a therapeutic vaccine.
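
    One step such a pipeline must perform, collecting the epitope variants present among aligned reads at a given position, can be sketched as below. The coordinates, read sequences, and window length are invented; the real tool also corrects the alignment against HxB2, translates codons, and queries NetMHCpan 3.0 for IC50 predictions, none of which is shown here.

```python
# Hypothetical sketch: tally the variant windows observed at an epitope's
# position among aligned reads. Reads are (alignment_start, sequence) pairs
# in a shared coordinate system; all data below are invented.

def epitope_variants(aligned_reads, start, length):
    """Count each variant window among reads that fully cover it."""
    counts = {}
    for read_start, seq in aligned_reads:
        offset = start - read_start
        if 0 <= offset and offset + length <= len(seq):
            variant = seq[offset:offset + length]
            counts[variant] = counts.get(variant, 0) + 1
    return counts

# Toy aligned reads; the third carries a substitution inside the window.
reads = [(0, "MGARASVLSGGELDRW"),
         (2, "ARASVLSGGELDRWEK"),
         (5, "SVLSGGEKDRWEKIRL")]
```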

  19. Trace fossils, storm beds, and depositional sequences in a clastic shelf setting, Upper Cretaceous of Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frey, R.W.

    In Coal Creek Canyon, Utah, the Spring Canyon Member of the Blackhawk Formation is divisible into four regressive hemicycles of deposition, each representing the downdip part of a nearshore-to-offshore sequence punctuated locally by hummocky cross-stratification. Bedding units span middle shoreface to lower offshore shelf lithofacies, the latter corresponding to a transgressive intertongue of the Mancos Shale. Trace fossil assemblages include 21 ichnospecies distributed among 17 ichnogenera: Ancorichnus, Aulichnites, Chondrites, Cylindrichnus, Ophiomorpha, Palaeophycus, Phoebichnus, Planolites, Rosselia, Schaubcylindrichnus, Scolicia, Skolithos, Taenidium, Teichichnus, Terebellina, Thalassinoides, and Uchirites. Distal deposits are typified by bioturbate textures; Cylindrichnus concentricus, Palaeophycus heberti, and Rosselia socialis otherwise are prevalent throughout the lithofacies suite. Ophiomorpha irregulaire and Schaubcylindrichnus are most common in middle shoreface beds and Chondrites sp. in upper offshore beds; O. nodosa and O. annulata also are common in this part of the sequence. Planolites-type feeding burrows must have been predominant in many depositional settings but now remain inconspicuous and poorly preserved. Despite gradients in environmental distributions of trace fossils, all resident ichnofaunas are referable to the archetypical Cruziana ichnocoenose. Ichnofaunas in hummocky beds mainly represent either an archetypical Skolithos ichnocoenose or mixed Skolithos-Cruziana ichnocoenose. These post-storm ichnocoenoses correspond primarily to a sere of opportunistic pioneers and secondarily to ensuing seres of resilient resident populations. Differences in ichnofacies also are related to differences in post-storm rates of deposition: the slower the rate of sediment accumulation, the greater the degree of overprinting by burrows from subsequent seres or equilibrium communities.

  20. X-rays across the galaxy population - I. Tracing the main sequence of star formation

    NASA Astrophysics Data System (ADS)

    Aird, J.; Coil, A. L.; Georgakakis, A.

    2017-03-01

    We use deep Chandra imaging to measure the distribution of X-ray luminosities (LX) for samples of star-forming galaxies as a function of stellar mass and redshift, using a Bayesian method to push below the nominal X-ray detection limits. Our luminosity distributions all show narrow peaks at LX ≲ 10^42 erg s^-1 that we associate with star formation, as opposed to AGN that are traced by a broad tail to higher LX. Tracking the luminosity of these peaks as a function of stellar mass reveals an 'X-ray main sequence' with a constant slope ≈0.63 ± 0.03 over 8.5 ≲ log(M∗/M⊙) ≲ 11.5 and 0.1 ≲ z ≲ 4, with a normalization that increases with redshift as (1 + z)^(3.79 ± 0.12). We also compare the peak X-ray luminosities with UV-to-IR tracers of star formation rates (SFRs) to calibrate the scaling between LX and SFR. We find that LX ∝ SFR^0.83 × (1 + z)^1.3, where the redshift evolution and non-linearity likely reflect changes in high-mass X-ray binary populations of star-forming galaxies. Using galaxies with a broader range of SFR, we also constrain a stellar-mass-dependent contribution to LX, likely related to low-mass X-ray binaries. Using this calibration, we convert our X-ray main sequence to SFRs and measure a star-forming main sequence with a constant slope ≈0.76 ± 0.06 and a normalization that evolves with redshift as (1 + z)^(2.95 ± 0.33). Based on the X-ray emission, there is no evidence for a break in the main sequence at high stellar masses, although we cannot rule out a turnover given the uncertainties in the scaling of LX to SFR.
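
    The quoted scaling can be turned into a small numerical sketch: compute log LX from SFR and redshift using the slopes from the abstract (LX ∝ SFR^0.83 (1 + z)^1.3), and invert the relation to estimate SFR from an observed luminosity. The absolute normalization constant below is a placeholder, not the paper's fitted value.

```python
# Numerical sketch of the LX-SFR scaling quoted above. Slopes (0.83, 1.3) are
# from the abstract; LOG_C is a hypothetical normalization, not a fitted value.
import math

LOG_C = 39.5  # placeholder: log10 LX [erg/s] at SFR = 1 Msun/yr, z = 0

def log_lx(sfr, z):
    """log10 of star-formation X-ray luminosity under the assumed scaling."""
    return LOG_C + 0.83 * math.log10(sfr) + 1.3 * math.log10(1 + z)

def sfr_from_lx(log_lx_obs, z):
    """Invert the relation to estimate SFR from an observed log10 LX."""
    return 10 ** ((log_lx_obs - LOG_C - 1.3 * math.log10(1 + z)) / 0.83)
```

    The sub-linear SFR slope means LX grows more slowly than SFR, and the (1 + z) term raises LX at fixed SFR toward higher redshift, as the abstract attributes to evolving high-mass X-ray binary populations.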

  1. A sparse differential clustering algorithm for tracing cell type changes via single-cell RNA-sequencing data

    PubMed Central

    Barron, Martin; Zhang, Siyuan

    2018-01-01

    Cell types in cell populations change as the condition changes: some cell types die out, new cell types may emerge and surviving cell types evolve to adapt to the new condition. Using single-cell RNA-sequencing data that measure the gene expression of cells before and after the condition change, we propose an algorithm, SparseDC, which identifies cell types, traces their changes across conditions and identifies genes which are marker genes for these changes. By solving a unified optimization problem, SparseDC completes all three tasks simultaneously. SparseDC is highly computationally efficient and demonstrates its accuracy on both simulated and real data. PMID:29140455

  2. Seven Salmonella Typhimurium Outbreaks in Australia Linked by Trace-Back and Whole Genome Sequencing.

    PubMed

    Ford, Laura; Wang, Qinning; Stafford, Russell; Ressler, Kelly-Anne; Norton, Sophie; Shadbolt, Craig; Hope, Kirsty; Franklin, Neil; Krsteski, Radomir; Carswell, Adrienne; Carter, Glen P; Seemann, Torsten; Howard, Peter; Valcanis, Mary; Castillo, Cristina Fabiola Sotomayor; Bates, John; Glass, Kathryn; Williamson, Deborah A; Sintchenko, Vitali; Howden, Benjamin P; Kirk, Martyn D

    2018-05-01

    Salmonella Typhimurium is a common cause of foodborne illness in Australia. We report on seven outbreaks of Salmonella Typhimurium multilocus variable-number tandem-repeat analysis (MLVA) 03-26-13-08-523 (European convention 2-24-12-7-0212) in three Australian states and territories investigated between November 2015 and March 2016. We identified a common egg grading facility in five of the outbreaks. While no Salmonella Typhimurium was detected at the grading facility and eggs could not be traced back to a particular farm, whole genome sequencing (WGS) of isolates from cases from all seven outbreaks indicated a common source. WGS was able to provide higher discriminatory power than MLVA and will likely link more Salmonella Typhimurium cases between states and territories in the future. National harmonization of Salmonella surveillance is important for effective implementation of WGS for Salmonella outbreak investigations.

  3. Sex Genotyping of Archival Fixed and Immunolabeled Guinea Pig Cochleas.

    PubMed

    Depreux, Frédéric F; Czech, Lyubov; Whitlon, Donna S

    2018-03-26

    For decades, outbred guinea pigs (GP) have been used as research models. Various past studies using guinea pigs employed measures that, unknown at the time, may be sex-dependent, but from which today archival tissues may be all that remain. We aimed to provide a protocol for sex-typing archival guinea pig tissue, whereby past experiments could be re-evaluated for sex effects. No PCR sex-genotyping protocols existed for GP. We found that the published sequence of the GP Sry gene differed from that in two separate GP stocks. We used sequences from other species to deduce PCR primers for Sry. After developing a genomic DNA extraction method for archival, fixed, decalcified, immunolabeled guinea pig cochlear half-turns, we used a multiplex assay (Y-specific Sry; X-specific Dystrophin) to assign sex to tissue as old as 3 years. This procedure should allow reevaluation of prior guinea pig studies in various research areas for the effects of sex on experimental outcomes.
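    The logic of the multiplex assay can be sketched as follows: the X-specific Dystrophin product acts as a positive control (present in both sexes), and the Y-specific Sry product distinguishes males. The target sequences here are placeholders, not the published guinea pig primers:

```python
# Sketch of the multiplex sex-call logic: Y-specific Sry band plus an
# X-specific Dystrophin control band. The target sequences below are
# hypothetical placeholders, not real guinea pig primer targets.

SRY_SITE = "GATCGGTACC"   # hypothetical Y-chromosome target
DMD_SITE = "TTAGGCCAAT"   # hypothetical X-chromosome (Dystrophin) control

def call_sex(genomic_dna):
    has_y = SRY_SITE in genomic_dna
    has_x = DMD_SITE in genomic_dna
    if not has_x:
        return "assay failed"  # control band absent: DNA too degraded
    return "male" if has_y else "female"

print(call_sex("AAA" + DMD_SITE + "CCC" + SRY_SITE))  # → male
print(call_sex("AAA" + DMD_SITE + "CCC"))             # → female
print(call_sex("AAACCC"))                             # → assay failed
```

    The control band matters for archival material: without it, a missing Sry band could mean either "female" or "degraded DNA".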

  4. The ExoMars science data archive: status and plans

    NASA Astrophysics Data System (ADS)

    Heather, David

    2016-07-01

    The ExoMars program, a cooperation between ESA and Roscosmos, comprises two missions: the Trace Gas Orbiter, to be launched in 2016, and a rover and surface platform, due for launch in 2018. This will be the first time ESA has operated a rover, and the archiving and management of the science data to be returned will require a significant effort in developing the new Planetary Science Archive (PSA). The ExoMars mission data will also be formatted according to the new PDS4 Standards, based on XML, and this will be the first data in that format to be archived in the PSA. There are significant differences in the way in which a scientist will want to query, retrieve, and use data from a suite of rover instruments as opposed to remote sensing instrumentation from an orbiter. The PSA data holdings and the accompanying services are currently geared more towards the management of remote sensing data, so some significant changes will be needed. Among them will be a much closer link to operational information than is currently available for our missions. NASA has a strong user community interaction with its Analyst's Notebook, which provides detailed operational information to explain why, where, and when operations took place. A similar approach will be needed for the future PSA, which is currently being designed. In addition to the archiving interface itself, there are differences in the overall archiving process being followed for ExoMars compared to previous ESA planetary missions. The Trace Gas Orbiter data pipelines for the first level of processing, from telemetry to raw data, will be hosted directly by ESA's ground segment at ESAC in Madrid, where the archive itself resides. Data will flow continuously and directly to the PSA, where, after the given proprietary period, they will be released to the community via the new user interface. For the rover mission, the data pipelines are being developed by European industry, in close collaboration with ESA PSA

  5. AMPLIFICATION OF RIBOSOMAL RNA SEQUENCES

    EPA Science Inventory

    This book chapter offers an overview of the use of ribosomal RNA sequences. A history of the technology traces the evolution of techniques to measure bacterial phylogenetic relationships and recent advances in obtaining rRNA sequence information. The manual also describes procedu...

  6. AXAF FITS standard for ray trace interchange

    NASA Technical Reports Server (NTRS)

    Hsieh, Paul F.

    1993-01-01

    A standard data format for the archival and transport of x-ray events generated by ray trace models is described. Upon review and acceptance by the Advanced X-ray Astrophysics Facility (AXAF) Software Systems Working Group (SSWG), this standard shall become the official AXAF data format for ray trace events. The Flexible Image Transport System (FITS) is well suited for the purposes of the standard and was selected as its basis. FITS is both flexible and efficient, and is widely used within the astronomical community for storage and transfer of data. In addition, software to read and write FITS format files is widely available. In selecting quantities to be included within the ray trace standard, the AXAF Mission Support team, Science Instruments team, and the other contractor teams were surveyed. From the results of this survey, the following requirements were established: (1) for scientific needs, each photon should have associated with it position, direction, energy, and statistical weight; the standard must also accommodate path length (relative phase) and polarization; (2) a unique photon identifier is necessary for bookkeeping purposes; (3) a log of individuals, organizations, and software packages that have modified the data must be maintained in order to create an audit trail; (4) a mechanism for extensions to the basic kernel should be provided; and (5) the ray trace standard should integrate with future AXAF data product standards.
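    Part of why FITS suits archival interchange is its very simple physical layout: headers are fixed 80-character ASCII "cards" padded into 2880-byte blocks. A minimal stdlib sketch of that card format (real work should use a library such as astropy.io.fits):

```python
# Minimal sketch of FITS header-card formatting. FITS headers consist of
# 80-character ASCII cards, terminated by an END card and padded to a
# multiple of 2880 bytes. Covers only simple logical/integer values.

def fits_card(keyword, value):
    """Format one 80-character FITS header card."""
    card = f"{keyword:<8}= {value:>20}"   # keyword field, '= ', value field
    return f"{card:<80}"                  # pad card to 80 characters

def fits_header(cards):
    """Join cards, append the END card, pad to a 2880-byte boundary."""
    text = "".join(cards) + f"{'END':<80}"
    pad = (-len(text)) % 2880
    return text + " " * pad

header = fits_header([fits_card("SIMPLE", "T"), fits_card("BITPIX", 8)])
print(len(header))    # → 2880
print(header[:30])    # → SIMPLE  =                    T
```

    The fixed-width, human-readable layout is what makes decades-old FITS files still trivially parseable, which matches the archival goals stated above.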

  7. AXAF FITS standard for ray trace interchange

    NASA Astrophysics Data System (ADS)

    Hsieh, Paul F.

    1993-07-01

    A standard data format for the archival and transport of x-ray events generated by ray trace models is described. Upon review and acceptance by the Advanced X-ray Astrophysics Facility (AXAF) Software Systems Working Group (SSWG), this standard shall become the official AXAF data format for ray trace events. The Flexible Image Transport System (FITS) is well suited for the purposes of the standard and was selected as its basis. FITS is both flexible and efficient, and is widely used within the astronomical community for storage and transfer of data. In addition, software to read and write FITS format files is widely available. In selecting quantities to be included within the ray trace standard, the AXAF Mission Support team, Science Instruments team, and the other contractor teams were surveyed. From the results of this survey, the following requirements were established: (1) for scientific needs, each photon should have associated with it position, direction, energy, and statistical weight; the standard must also accommodate path length (relative phase) and polarization; (2) a unique photon identifier is necessary for bookkeeping purposes; (3) a log of individuals, organizations, and software packages that have modified the data must be maintained in order to create an audit trail; (4) a mechanism for extensions to the basic kernel should be provided; and (5) the ray trace standard should integrate with future AXAF data product standards.

  8. Studies of Global Solar Magnetic Field Patterns Using a Newly Digitized Archive

    NASA Astrophysics Data System (ADS)

    Hewins, I.; Webb, D. F.; Gibson, S. E.; McFadden, R.; Emery, B. A.; Malanushenko, A. V.

    2017-12-01

    The McIntosh Archive consists of a set of hand-drawn solar Carrington maps created by Patrick McIntosh from 1964 to 2009. McIntosh used mainly Hα, He 10830 Å and photospheric magnetic measurements from both ground-based and NASA satellite observations. With these he traced polarity inversion lines (PILs), filaments, sunspots and plage and, later, coronal holes, yielding a unique 45-year record of features associated with the large-scale organization of the solar magnetic field. We discuss our efforts to preserve and digitize this archive: the original hand-drawn maps have been scanned, a method for processing these scans into a digital, searchable format has been developed, and a website and an archival repository at NOAA's National Centers for Environmental Information (NCEI) have been created. The archive is complete for SC 23 and partially complete for SCs 21 and 22. In this paper we show examples of how the database can be utilized for scientific applications. We compare the evolution of the areas and boundaries of CHs with other recent results, and we use the maps to track the global, SC-evolution of filaments, large-scale positive and negative polarity regions, PILs and sunspots.

  9. Contribution of finger tracing to the recognition of Chinese characters.

    PubMed

    Yim-Ng, Y Y; Varley, R; Andrade, J

    2000-01-01

    Finger tracing is a simulation of the act of writing without the use of pen and paper. It is claimed to help in the processing of Chinese characters, possibly by providing additional motor coding. In this study, blindfolded subjects were equally good at identifying Chinese characters and novel visual stimuli through passive movements made with the index finger of the preferred hand and those made with the last finger of that hand. This suggests that finger tracing provides a relatively high level of coding specific to individual characters, but non-specific to motor effectors. Beginning each stroke from the same location, i.e. removing spatial information, impaired recognition of the familiar characters and the novel nonsense figures. Passively tracing the strokes in a random sequence also impaired recognition of the characters. These results therefore suggest that the beneficial effect of finger tracing on writing or recall of Chinese characters is mediated by sequence and spatial information embedded in the motor movements, and that the proprioceptive channel may play a part in mediating visuo-spatial information. Finger tracing may be a useful strategy for remediation of Chinese language impairments.

  10. Sequence of events from the onset to the demise of the Last Interglacial: Evaluating strengths and limitations of chronologies used in climatic archives

    NASA Astrophysics Data System (ADS)

    Govin, A.; Capron, E.; Tzedakis, P. C.; Verheyden, S.; Ghaleb, B.; Hillaire-Marcel, C.; St-Onge, G.; Stoner, J. S.; Bassinot, F.; Bazin, L.; Blunier, T.; Combourieu-Nebout, N.; El Ouahabi, A.; Genty, D.; Gersonde, R.; Jimenez-Amat, P.; Landais, A.; Martrat, B.; Masson-Delmotte, V.; Parrenin, F.; Seidenkrantz, M.-S.; Veres, D.; Waelbroeck, C.; Zahn, R.

    2015-12-01

    The Last Interglacial (LIG) represents an invaluable case study to investigate the response of components of the Earth system to global warming. However, the scarcity of absolute age constraints in most archives leads to extensive use of various stratigraphic alignments to different reference chronologies. This limits the accuracy of the stratigraphic assignment of the climatic sequence of events across the globe during the LIG. Here, we review the strengths and limitations of the methods that are commonly used to date or develop chronologies in various climatic archives for the time span (∼140-100 ka) encompassing the penultimate deglaciation, the LIG and the glacial inception. Climatic hypotheses underlying record alignment strategies and the interpretation of tracers are explicitly described. Quantitative estimates of the associated absolute and relative age uncertainties are provided. Recommendations are subsequently formulated on how best to define absolute and relative chronologies. Future climato-stratigraphic alignments should provide (1) a clear statement of climate hypotheses involved, (2) a detailed understanding of environmental parameters controlling selected tracers and (3) a careful evaluation of the synchronicity of aligned paleoclimatic records. We underscore the need to (1) systematically report quantitative estimates of relative and absolute age uncertainties, (2) assess the coherence of chronologies when comparing different records, and (3) integrate these uncertainties in paleoclimatic interpretations and comparisons with climate simulations. Finally, we provide a sequence of major climatic events with associated age uncertainties for the period 140-105 ka, which should serve as a new benchmark to disentangle mechanisms of the Earth system's response to orbital forcing and evaluate transient climate simulations.
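    When independent error terms (for example a reference-chronology error and an alignment error) are reported separately, a common way to combine them is addition in quadrature. This is standard error propagation, not a method specific to this paper, and assumes the terms are independent:

```python
# Combine independent 1-sigma age uncertainties in quadrature (standard
# error propagation; assumes the error terms are independent).

import math

def combined_age_uncertainty(*sigmas):
    """Total 1-sigma uncertainty from independent error terms (e.g. in kyr)."""
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. a 2.0 kyr reference-chronology error plus a 1.5 kyr alignment error:
print(round(combined_age_uncertainty(2.0, 1.5), 2))  # → 2.5
```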

  11. Getting Personal: Personal Archives in Archival Programs and Curricula

    ERIC Educational Resources Information Center

    Douglas, Jennifer

    2017-01-01

    In 2001, Catherine Hobbs referred to silences around personal archives, suggesting that these types of archives were not given as much attention as organizational archives in the development of archival theory and methodology. The aims of this article are twofold: 1) to investigate the extent to which such silences exist in archival education…

  12. Key Role of Sequencing to Trace Hepatitis A Viruses Circulating in Italy During a Large Multi-Country European Foodborne Outbreak in 2013

    PubMed Central

    Bruni, Roberto; Taffon, Stefania; Equestre, Michele; Chionne, Paola; Madonna, Elisabetta; Rizzo, Caterina; Tosti, Maria Elena; Alfonsi, Valeria; Ricotta, Lara; De Medici, Dario; Di Pasquale, Simona; Scavia, Gaia; Pavoni, Enrico; Losio, Marina Nadia; Romanò, Luisa; Zanetti, Alessandro Remo; Morea, Anna; Pacenti, Monia; Palù, Giorgio; Capobianchi, Maria Rosaria; Chironna, Maria; Pompa, Maria Grazia; Ciccaglione, Anna Rita

    2016-01-01

    Background: Foodborne Hepatitis A Virus (HAV) outbreaks are being recognized as an emerging public health problem in industrialized countries. In 2013 three foodborne HAV outbreaks occurred in Europe and one in the USA. During the largest of the three European outbreaks, most cases occurred in Italy (>1,200 cases as of March 31, 2014). A national Task Force was established at the beginning of the outbreak by the Ministry of Health. Mixed frozen berries were demonstrated early to be the source of infection, based on identical viral sequences in patients and in food. In the present study the molecular characterization of HAV isolates from 355 Italian cases is reported. Methods: Molecular characterization was carried out by PCR/sequencing (VP1/2A region), comparison with reference strains and phylogenetic analysis. Results: A unique strain was responsible for most characterized cases (235/355, 66.1%). Molecular data had a key role in tracing this outbreak, allowing 110 out of the 235 outbreak cases (46.8%) to be recognized in the absence of any other link. The data also showed background circulation of further unrelated strains, both autochthonous and travel-related, whose sequence comparison highlighted minor outbreaks and small clusters, most of them unrecognized on the basis of epidemiological data. Phylogenetic analysis showed most isolates from travel-related cases clustering with reference strains originating from the same geographical area of travel. Conclusions: The study documents, in a real outbreak context, the crucial role of molecular analysis in investigating an old but re-emerging pathogen. Improving the molecular knowledge of HAV strains, both autochthonous and circulating in countries from which potentially contaminated foods are imported, will become increasingly important to control outbreaks by supporting trace-back activities, aiming to identify the geographical source(s) of contaminated food, as well as public health interventions. PMID

  13. Key Role of Sequencing to Trace Hepatitis A Viruses Circulating in Italy During a Large Multi-Country European Foodborne Outbreak in 2013.

    PubMed

    Bruni, Roberto; Taffon, Stefania; Equestre, Michele; Chionne, Paola; Madonna, Elisabetta; Rizzo, Caterina; Tosti, Maria Elena; Alfonsi, Valeria; Ricotta, Lara; De Medici, Dario; Di Pasquale, Simona; Scavia, Gaia; Pavoni, Enrico; Losio, Marina Nadia; Romanò, Luisa; Zanetti, Alessandro Remo; Morea, Anna; Pacenti, Monia; Palù, Giorgio; Capobianchi, Maria Rosaria; Chironna, Maria; Pompa, Maria Grazia; Ciccaglione, Anna Rita

    2016-01-01

    Foodborne Hepatitis A Virus (HAV) outbreaks are being recognized as an emerging public health problem in industrialized countries. In 2013 three foodborne HAV outbreaks occurred in Europe and one in the USA. During the largest of the three European outbreaks, most cases occurred in Italy (>1,200 cases as of March 31, 2014). A national Task Force was established at the beginning of the outbreak by the Ministry of Health. Mixed frozen berries were demonstrated early to be the source of infection, based on identical viral sequences in patients and in food. In the present study the molecular characterization of HAV isolates from 355 Italian cases is reported. Molecular characterization was carried out by PCR/sequencing (VP1/2A region), comparison with reference strains and phylogenetic analysis. A unique strain was responsible for most characterized cases (235/355, 66.1%). Molecular data had a key role in tracing this outbreak, allowing 110 out of the 235 outbreak cases (46.8%) to be recognized in the absence of any other link. The data also showed background circulation of further unrelated strains, both autochthonous and travel-related, whose sequence comparison highlighted minor outbreaks and small clusters, most of them unrecognized on the basis of epidemiological data. Phylogenetic analysis showed most isolates from travel-related cases clustering with reference strains originating from the same geographical area of travel. In conclusion, the study documents, in a real outbreak context, the crucial role of molecular analysis in investigating an old but re-emerging pathogen. Improving the molecular knowledge of HAV strains, both autochthonous and circulating in countries from which potentially contaminated foods are imported, will become increasingly important to control outbreaks by supporting trace-back activities, aiming to identify the geographical source(s) of contaminated food, as well as public health interventions.
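    Sequence-based linkage of this kind boils down to comparing the VP1/2A fragment of each isolate against the outbreak reference: identical or near-identical fragments are grouped with the outbreak strain, divergent ones flag unrelated background circulation. A toy sketch with invented sequences (real comparisons use full-length VP1/2A regions and phylogenetics):

```python
# Toy illustration of sequence-based outbreak linkage. Sequences are
# invented; real analyses compare full VP1/2A regions phylogenetically.

def percent_identity(a, b):
    """Percent of positions identical between two aligned sequences."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

outbreak_ref = "ATGGCTTTAGCA"
case1 = "ATGGCTTTAGCA"   # identical: linked to the outbreak strain
case2 = "ATGACTGTAGCA"   # divergent: unrelated background strain

print(percent_identity(outbreak_ref, case1))            # → 100.0
print(round(percent_identity(outbreak_ref, case2), 1))  # → 83.3
```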

  14. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR ARCHIVE PROCEDURE FOR STUDY SAMPLES (UA-G-4.0)

    EPA Science Inventory

    The purpose of this SOP is to outline the archive/custody guidelines used by the NHEXAS Arizona research project. This procedure was followed to maintain and locate samples, extracts, tracings and hard copy results after laboratory analysis during the Arizona NHEXAS project and ...

  15. Tidying Up International Nucleotide Sequence Databases: Ecological, Geographical and Sequence Quality Annotation of ITS Sequences of Mycorrhizal Fungi

    PubMed Central

    Tedersoo, Leho; Abarenkov, Kessy; Nilsson, R. Henrik; Schüssler, Arthur; Grelet, Gwen-Aëlle; Kohout, Petr; Oja, Jane; Bonito, Gregory M.; Veldre, Vilmar; Jairus, Teele; Ryberg, Martin; Larsson, Karl-Henrik; Kõljalg, Urmas

    2011-01-01

    Sequence analysis of the ribosomal RNA operon, particularly the internal transcribed spacer (ITS) region, provides a powerful tool for identification of mycorrhizal fungi. The sequence data deposited in the International Nucleotide Sequence Databases (INSD) are, however, unfiltered for quality and are often poorly annotated with metadata. To detect chimeric and low-quality sequences and assign the ectomycorrhizal fungi to phylogenetic lineages, fungal ITS sequences were downloaded from INSD, aligned within family-level groups, and examined through phylogenetic analyses and BLAST searches. By combining the fungal sequence database UNITE and the annotation and search tool PlutoF, we also added metadata from the literature to these accessions. Altogether 35,632 sequences belonged to mycorrhizal fungi or originated from ericoid and orchid mycorrhizal roots. Of these sequences, 677 were considered chimeric and 2,174 of low read quality. Information detailing country of collection, geographical coordinates, interacting taxon and isolation source was supplemented to cover 78.0%, 33.0%, 41.7% and 96.4% of the sequences, respectively. These annotated sequences are publicly available via UNITE (http://unite.ut.ee/) for downstream biogeographic, ecological and taxonomic analyses. In the European Nucleotide Archive (ENA; http://www.ebi.ac.uk/ena/), the annotated sequences have a special link-out to UNITE. We intend to expand the data annotation to additional genes and all taxonomic groups and functional guilds of fungi. PMID:21949797

  16. Trace elements at the intersection of marine biological and geochemical evolution

    USGS Publications Warehouse

    Robbins, Leslie J.; Lalonde, Stefan V.; Planavsky, Noah J.; Partin, Camille A.; Reinhard, Christopher T.; Kendall, Brian; Scott, Clinton T.; Hardisty, Dalton S.; Gill, Benjamin C.; Alessi, Daniel S.; Dupont, Christopher L.; Saito, Mak A.; Crowe, Sean A.; Poulton, Simon W.; Bekker, Andrey; Lyons, Timothy W.; Konhauser, Kurt O.

    2016-01-01

    Life requires a wide variety of bioessential trace elements to act as structural components and reactive centers in metalloenzymes. These requirements differ between organisms and have evolved over geological time, likely guided in some part by environmental conditions. Until recently, most of what was understood regarding trace element concentrations in the Precambrian oceans was inferred by extrapolation, geochemical modeling, and/or genomic studies. However, in the past decade, the increasing availability of trace element and isotopic data for sedimentary rocks of all ages has yielded new, and potentially more direct, insights into secular changes in seawater composition – and ultimately the evolution of the marine biosphere. Compiled records of many bioessential trace elements (including Ni, Mo, P, Zn, Co, Cr, Se, and I) provide new insight into how trace element abundance in Earth's ancient oceans may have been linked to biological evolution. Several of these trace elements display redox-sensitive behavior, while others are redox-sensitive but not bioessential (e.g., Cr, U). Their temporal trends in sedimentary archives provide useful constraints on changes in atmosphere-ocean redox conditions that are linked to biological evolution, for example, the activity of oxygen-producing, photosynthetic cyanobacteria. In this review, we summarize available Precambrian trace element proxy data, and discuss how temporal trends in the seawater concentrations of specific trace elements may be linked to the evolution of both simple and complex life. We also examine several biologically relevant and/or redox-sensitive trace elements that have yet to be fully examined in the sedimentary rock record (e.g., Cu, Cd, W) and suggest several directions for future studies.

  17. The use of museum specimens with high-throughput DNA sequencers

    PubMed Central

    Burrell, Andrew S.; Disotell, Todd R.; Bergey, Christina M.

    2015-01-01

    Natural history collections have long been used by morphologists, anatomists, and taxonomists to probe the evolutionary process and describe biological diversity. These biological archives also offer great opportunities for genetic research in taxonomy, conservation, systematics, and population biology. They allow assays of past populations, including those of extinct species, giving context to present patterns of genetic variation and direct measures of evolutionary processes. Despite this potential, museum specimens are difficult to work with because natural postmortem processes and preservation methods fragment and damage DNA. These problems have restricted geneticists’ ability to use natural history collections primarily by limiting how much of the genome can be surveyed. Recent advances in DNA sequencing technology, however, have radically changed this, making truly genomic studies from museum specimens possible. We review the opportunities and drawbacks of the use of museum specimens, and suggest how to best execute projects when incorporating such samples. Several high-throughput (HT) sequencing methodologies, including whole genome shotgun sequencing, sequence capture, and restriction digests (demonstrated here), can be used with archived biomaterials. PMID:25532801

  18. NCBI Reference Sequence (RefSeq): a curated non-redundant sequence database of genomes, transcripts and proteins

    PubMed Central

    Pruitt, Kim D.; Tatusova, Tatiana; Maglott, Donna R.

    2005-01-01

    The National Center for Biotechnology Information (NCBI) Reference Sequence (RefSeq) database (http://www.ncbi.nlm.nih.gov/RefSeq/) provides a non-redundant collection of sequences representing genomic data, transcripts and proteins. Although the goal is to provide a comprehensive dataset representing the complete sequence information for any given species, the database pragmatically includes sequence data that are currently publicly available in the archival databases. The database incorporates data from over 2400 organisms and includes over one million proteins representing significant taxonomic diversity spanning prokaryotes, eukaryotes and viruses. Nucleotide and protein sequences are explicitly linked, and the sequences are linked to other resources including the NCBI Map Viewer and Gene. Sequences are annotated to include coding regions, conserved domains, variation, references, names, database cross-references, and other features using a combined approach of collaboration and other input from the scientific community, automated annotation, propagation from GenBank and curation by NCBI staff. PMID:15608248
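    RefSeq's explicit linkage of nucleotide and protein records is reflected in its accession-prefix convention, which encodes molecule type and curation status. A small lookup covering a subset of the documented prefixes:

```python
# A subset of the documented RefSeq accession prefixes, which encode
# molecule type and curation status (see NCBI RefSeq documentation).

REFSEQ_PREFIXES = {
    "NC_": "curated genomic (e.g. complete chromosome)",
    "NM_": "curated mRNA",
    "NP_": "curated protein",
    "NR_": "curated non-coding RNA",
    "XM_": "predicted (model) mRNA",
    "XP_": "predicted (model) protein",
}

def refseq_molecule_type(accession):
    """Classify a RefSeq accession by its three-character prefix."""
    return REFSEQ_PREFIXES.get(accession[:3], "unknown prefix")

print(refseq_molecule_type("NM_000518.5"))  # → curated mRNA
```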

  19. The ExoMars Science Data Archive: Status and Plans

    NASA Astrophysics Data System (ADS)

    Heather, David; Barbarisi, Isa; Brumfitt, Jon; Lim, Tanya; Metcalfe, Leo; Villacorta, Antonio

    2017-04-01

    The ExoMars program is a co-operation between ESA and Roscosmos comprising two missions: the first, launched on 14 March 2016, included the Trace Gas Orbiter and Schiaparelli lander; the second, due for launch in 2020, will be a Rover and Surface Platform (RSP). The archiving and management of the science data to be returned from ExoMars will require a significant development effort for the new Planetary Science Archive (PSA). These are the first data in the PSA to be formatted according to the new PDS4 Standards, and there are also significant differences in the way in which a scientist will want to query, retrieve, and use data from a suite of rover instruments as opposed to remote sensing instrumentation from an orbiter. NASA has a strong user community interaction for its rovers, and a similar approach to its 'Analyst's Notebook' will be needed for the future PSA. In addition to the archiving interface itself, there are differences in the overall archiving process being followed for ExoMars compared to previous ESA planetary missions. The first level of data processing for the 2016 mission, from telemetry to raw, is completed by ESA at ESAC in Madrid, where the archive itself resides. Data flow continuously and directly to the PSA, where, after the given proprietary period, they will be released to the community via the user interfaces. For the rover mission, the data pipelines are being developed by European industry, in close collaboration with ESA PSA experts and with the instrument teams. The first level of data processing will be carried out for all instruments at ALTEC in Turin where the pipelines are developed, and from where the rover operations will also be run. This presentation will focus on the challenges involved in archiving the data from the ExoMars Program, and will outline the plans and current status of the system being developed to respond to the needs of the missions.

  20. Whole Genome Amplification and Reduced-Representation Genome Sequencing of Schistosoma japonicum Miracidia

    PubMed Central

    Shortt, Jonathan A.; Card, Daren C.; Schield, Drew R.; Liu, Yang; Zhong, Bo; Castoe, Todd A.

    2017-01-01

    Background: In areas where schistosomiasis control programs have been implemented, morbidity and prevalence have been greatly reduced. However, to sustain these reductions and move towards interruption of transmission, new tools for disease surveillance are needed. Genomic methods have the potential to help trace the sources of new infections, and allow us to monitor drug resistance. Large-scale genotyping efforts for schistosome species have been hindered by cost, limited numbers of established target loci, and the small amount of DNA obtained from miracidia, the life stage most readily acquired from humans. Here, we present a method using next generation sequencing to provide high-resolution genomic data from S. japonicum for population-based studies. Methodology/Principal Findings: We applied whole genome amplification followed by double digest restriction site associated DNA sequencing (ddRADseq) to individual S. japonicum miracidia preserved on Whatman FTA cards. We found that we could effectively and consistently survey hundreds of thousands of variants from 10,000 to 30,000 loci from archived miracidia as old as six years. An analysis of variation from eight miracidia obtained from three hosts in two villages in Sichuan showed clear population structuring by village and host even within this limited sample. Conclusions/Significance: This high-resolution sequencing approach yields three orders of magnitude more information than microsatellite genotyping methods that have been employed over the last decade, creating the potential to answer detailed questions about the sources of human infections and to monitor drug resistance. Costs per sample range from $50-$200, depending on the amount of sequence information desired, and we expect these costs can be reduced further given continued reductions in sequencing costs, improvement of protocols, and parallelization. This approach provides new promise for applying modern genome-scale sampling to S. japonicum surveillance

  1. Whole Genome Amplification and Reduced-Representation Genome Sequencing of Schistosoma japonicum Miracidia.

    PubMed

    Shortt, Jonathan A; Card, Daren C; Schield, Drew R; Liu, Yang; Zhong, Bo; Castoe, Todd A; Carlton, Elizabeth J; Pollock, David D

    2017-01-01

    In areas where schistosomiasis control programs have been implemented, morbidity and prevalence have been greatly reduced. However, to sustain these reductions and move towards interruption of transmission, new tools for disease surveillance are needed. Genomic methods have the potential to help trace the sources of new infections, and allow us to monitor drug resistance. Large-scale genotyping efforts for schistosome species have been hindered by cost, limited numbers of established target loci, and the small amount of DNA obtained from miracidia, the life stage most readily acquired from humans. Here, we present a method using next generation sequencing to provide high-resolution genomic data from S. japonicum for population-based studies. We applied whole genome amplification followed by double digest restriction site associated DNA sequencing (ddRADseq) to individual S. japonicum miracidia preserved on Whatman FTA cards. We found that we could effectively and consistently survey hundreds of thousands of variants from 10,000 to 30,000 loci from archived miracidia as old as six years. An analysis of variation from eight miracidia obtained from three hosts in two villages in Sichuan showed clear population structuring by village and host even within this limited sample. This high-resolution sequencing approach yields three orders of magnitude more information than microsatellite genotyping methods that have been employed over the last decade, creating the potential to answer detailed questions about the sources of human infections and to monitor drug resistance. Costs per sample range from $50-$200, depending on the amount of sequence information desired, and we expect these costs can be reduced further given continued reductions in sequencing costs, improvement of protocols, and parallelization. This approach provides new promise for applying modern genome-scale sampling to S. japonicum surveillance, and could be applied to other schistosome species and other
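    Conceptually, ddRADseq cuts genomic DNA with two restriction enzymes and size-selects the resulting fragments, so the same reduced subset of loci is sequenced in every sample. A toy sketch of that reduction step; the recognition sites and "genome" are invented, and real enzymes have specific cut offsets that are ignored here:

```python
# Conceptual sketch of the ddRADseq reduction step: digest with two
# restriction enzymes, then size-select fragments. Sites and the toy
# "genome" are invented; real enzyme cut offsets are ignored.

import re

SITE_A = "GAATTC"   # hypothetical enzyme A recognition site
SITE_B = "CATG"     # hypothetical enzyme B recognition site

def digest(genome):
    """Split the sequence at every occurrence of either recognition site."""
    return re.split(f"{SITE_A}|{SITE_B}", genome)

def size_select(fragments, lo=5, hi=20):
    """Keep only fragments within the target size window."""
    return [f for f in fragments if lo <= len(f) <= hi]

genome = ("AAAAA" + SITE_A + "TTTTTTTT" + SITE_B + "CCC"
          + SITE_A + "G" * 30 + SITE_B + "AA")
print(size_select(digest(genome)))  # → ['AAAAA', 'TTTTTTTT']
```

    Because digestion and size selection are reproducible, the same loci recur across individuals, which is what makes population-level comparisons from tiny DNA inputs feasible.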

  2. Gaia archive

    NASA Astrophysics Data System (ADS)

    Hypki, Arkadiusz; Brown, Anthony

    2016-06-01

    The Gaia archive is being designed and implemented by the DPAC Consortium. The purpose of the archive is to maximize the scientific exploitation of the Gaia data by the astronomical community. It is therefore crucial to gather feedback from the community and to discuss the features of the Gaia archive with it as much as possible. From the point of view of the GENIUS project it is especially important to gather feedback and potential use cases for the archive. This paper briefly presents the general ideas behind the Gaia archive and describes which tools are already provided to the community.

  3. Critical issues in trace gas biogeochemistry and global change.

    PubMed

    Beerling, David J; Nicholas Hewitt, C; Pyle, John A; Raven, John A

    2007-07-15

    The atmospheric composition of trace gases and aerosols is determined by the emission of compounds from the marine and terrestrial biospheres, anthropogenic sources and their chemistry and deposition processes. Biogenic emissions depend upon physiological processes and climate, and the atmospheric chemistry is governed by climate and feedbacks involving greenhouse gases themselves. Understanding and predicting the biogeochemistry of trace gases in past, present and future climates therefore demands an interdisciplinary approach integrating across physiology, atmospheric chemistry, physics and meteorology. Here, we highlight critical issues raised by recent findings in all of these key areas to provide a framework for better understanding the past and possible future evolution of the atmosphere. Incorporating recent experimental and observational findings, especially the influence of CO2 on trace gas emissions from marine algae and terrestrial plants, into earth system models remains a major research priority. As we move towards this goal, archives of the concentration and isotopes of N2O and CH4 from polar ice cores extending back over 650,000 years will provide a valuable benchmark for evaluating such models. In the Pre-Quaternary, synthesis of theoretical modelling with geochemical and palaeontological evidence is also uncovering the roles played by trace gases in episodes of abrupt climatic warming and ozone depletion. Finally, observations and palaeorecords across a range of timescales allow assessment of the Earth's climate sensitivity, a metric influencing our ability to decide what constitutes 'dangerous' climate change.

  4. Archive & Data Management Activities for ISRO Science Archives

    NASA Astrophysics Data System (ADS)

    Thakkar, Navita; Moorthi, Manthira; Gopala Krishna, Barla; Prashar, Ajay; Srinivasan, T. P.

    2012-07-01

    ISRO has taken a step forward by extending its remote sensing missions to planetary and astronomical exploration. It started with Chandrayaan-1, which successfully completed imaging of the Moon during its lifetime in orbit. ISRO is now planning to launch Chandrayaan-2 (the next Moon mission), a Mars mission, and the astronomical mission ASTROSAT. All these missions are characterized by the need to receive, process, archive and disseminate the acquired science data to the user community for analysis and scientific use. These science missions will last from a few months to a few years, but the data received must be archived, interoperable, and seamlessly accessible to the user community for the future. ISRO has laid out definite plans to archive these data sets in specified standards and to develop relevant access tools to serve the user community. To achieve this goal, a data center called the Indian Space Science Data Center (ISSDC) has been set up at Bangalore; it is the custodian of all the data sets of the current and future science missions of ISRO. Chandrayaan-1 is the first of the planetary missions launched or to be launched by ISRO, and we took up the challenge of developing a system for the archival and dissemination of the payload data received. For Chandrayaan-1, the data collected from all the instruments are processed and archived in the archive layer in the Planetary Data System (PDS 3.0) standard through an automated pipeline. But a stored dataset is of little use unless it is made public, which requires a Web-based dissemination system accessible to all the planetary scientists and data users working in this field. Towards this, a Web-based browse and dissemination system has been developed, wherein users can register, search for their area of interest, and view the data archived for TMC & HYSI with relevant browse chips and metadata. 
Users can also order the data and get it on their desktop in the PDS

  5. The Self-Organized Archive: SPASE, PDS and Archive Cooperatives

    NASA Astrophysics Data System (ADS)

    King, T. A.; Hughes, J. S.; Roberts, D. A.; Walker, R. J.; Joy, S. P.

    2005-05-01

    Information systems with high quality metadata enable uses and services which often go beyond the original purpose. There are two types of metadata: annotations, which are items that comment on or describe the content of a resource, and identification attributes, which describe the external properties of the resource itself. For example, annotations may indicate which columns are present in a table of data, whereas an identification attribute would indicate the source of the table, such as the observatory, instrument, organization, and data type. When the identification attributes are collected and used as the basis of a search engine, a user can constrain on an attribute; the archive can then self-organize around the constraint, presenting the user with a particular view of the archive. In an archive cooperative where each participating data system or archive may have its own metadata standards, providing a multi-system search engine requires that individual archive metadata be mapped to a broad-based standard. To explore how cooperative archives can form a larger self-organized archive, we will show how the Space Physics Archive Search and Extract (SPASE) data model will allow different systems to create a cooperative, and will use the Planetary Data System (PDS) plus existing space physics activities as a demonstration.
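The constrain-then-regroup behaviour described above is essentially faceted search over identification attributes. A minimal sketch, assuming a toy in-memory metadata table (the attribute names and records here are illustrative, not SPASE or PDS schema):

```python
from collections import defaultdict

# Toy identification-attribute records; real archives would hold SPASE/PDS metadata.
records = [
    {"observatory": "Galileo", "instrument": "MAG", "data_type": "time series"},
    {"observatory": "Galileo", "instrument": "PLS", "data_type": "spectrogram"},
    {"observatory": "Cassini", "instrument": "MAG", "data_type": "time series"},
]

def facet(records, constraint, group_by):
    """Apply an attribute constraint, then organize the surviving
    records around another attribute, yielding one 'view' of the archive."""
    view = defaultdict(list)
    for r in records:
        if all(r.get(k) == v for k, v in constraint.items()):
            view[r[group_by]].append(r)
    return dict(view)

# Constrain on instrument, self-organize by observatory.
view = facet(records, {"instrument": "MAG"}, "observatory")
print(sorted(view))  # ['Cassini', 'Galileo']
```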

  6. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR ARCHIVE PROCEDURE FOR STUDY SAMPLES (UA-G-4.0)

    EPA Science Inventory

    The purpose of this SOP is to outline the archive/custody guidelines used by the Arizona Border Study. This procedure was followed to maintain and locate samples, extracts, tracings and hard copy results after laboratory analysis during the Arizona NHEXAS project and the Border ...

  7. The McIntosh Archive: A solar feature database spanning four solar cycles

    NASA Astrophysics Data System (ADS)

    Gibson, S. E.; Malanushenko, A. V.; Hewins, I.; McFadden, R.; Emery, B.; Webb, D. F.; Denig, W. F.

    2016-12-01

    The McIntosh Archive consists of a set of hand-drawn solar Carrington maps created by Patrick McIntosh from 1964 to 2009. McIntosh used mainly H-alpha, He I 10830 and photospheric magnetic measurements from both ground-based and NASA satellite observations. With these he traced coronal holes, polarity inversion lines, filaments, sunspots and plage, yielding a unique 45-year record of the features associated with the large-scale solar magnetic field. We will present the results of recent efforts to preserve and digitize this archive. Most of the original hand-drawn maps have been scanned, a method for processing these scans into a digital, searchable format has been developed and streamlined, and an archival repository at NOAA's National Centers for Environmental Information (NCEI) has been created. We will demonstrate how Solar Cycle 23 data may now be accessed and how it may be utilized for scientific applications. In addition, we will discuss how this database of human-recognized features, which overlaps with the onset of high-resolution, continuous modern solar data, may act as a training set for computer feature recognition algorithms.

  8. A Systematic Analysis of the Structures of Heterologously Expressed Proteins and Those from Their Native Hosts in the RCSB PDB Archive

    PubMed Central

    Zhou, Ren-Bin; Lu, Hui-Meng; Liu, Jie; Shi, Jian-Yu; Zhu, Jing; Lu, Qin-Qin; Yin, Da-Chuan

    2016-01-01

    Recombinant expression of proteins has become an indispensable tool in modern day research. The large yields of recombinantly expressed proteins accelerate the structural and functional characterization of proteins. Nevertheless, there are reports in the literature that recombinant proteins show some differences in structure and function compared with the native ones. There are now more than 100,000 structures (from both recombinant and native sources) publicly available in the Protein Data Bank (PDB) archive, which makes it possible to investigate whether any proteins in the RCSB PDB archive have identical sequences but differ in structure. In this paper, we present the results of a systematic comparative study of the 3D structures of identical naturally purified versus recombinantly expressed proteins. The structural data and sequence information of the proteins were mined from the RCSB PDB archive. The combinatorial extension (CE), FATCAT-flexible and TM-Align methods were employed to align the protein structures. The root-mean-square distance (RMSD), TM-score, P-value, Z-score, secondary structural elements and hydrogen bonds were used to assess structural similarity. A thorough analysis of the PDB archive yielded 517 pairs of native and recombinant proteins with identical sequences. There were no pairs of proteins that had the same sequence but significantly different structural folds, which supports the hypothesis that proteins expressed in a heterologous host usually fold correctly into their native forms. PMID:27517583
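The RMSD metric used above to assess structural similarity is, at its core, the root-mean-square of pairwise atomic distances after the two structures have been optimally superposed. A minimal sketch of the final distance step, assuming the coordinates are already superposed (real pipelines such as CE or TM-Align perform the alignment and superposition first):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square distance between two equal-length lists of
    (x, y, z) atomic coordinates, assumed already superposed."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Identical structures give RMSD 0; a uniform 1 Å shift along x gives RMSD 1.
a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
b = [(1.0, 0.0, 0.0), (2.5, 0.0, 0.0), (4.0, 0.0, 0.0)]
print(rmsd(a, a))  # 0.0
print(rmsd(a, b))  # 1.0
```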

  9. A Systematic Analysis of the Structures of Heterologously Expressed Proteins and Those from Their Native Hosts in the RCSB PDB Archive.

    PubMed

    Zhou, Ren-Bin; Lu, Hui-Meng; Liu, Jie; Shi, Jian-Yu; Zhu, Jing; Lu, Qin-Qin; Yin, Da-Chuan

    2016-01-01

    Recombinant expression of proteins has become an indispensable tool in modern day research. The large yields of recombinantly expressed proteins accelerate the structural and functional characterization of proteins. Nevertheless, there are reports in the literature that recombinant proteins show some differences in structure and function compared with the native ones. There are now more than 100,000 structures (from both recombinant and native sources) publicly available in the Protein Data Bank (PDB) archive, which makes it possible to investigate whether any proteins in the RCSB PDB archive have identical sequences but differ in structure. In this paper, we present the results of a systematic comparative study of the 3D structures of identical naturally purified versus recombinantly expressed proteins. The structural data and sequence information of the proteins were mined from the RCSB PDB archive. The combinatorial extension (CE), FATCAT-flexible and TM-Align methods were employed to align the protein structures. The root-mean-square distance (RMSD), TM-score, P-value, Z-score, secondary structural elements and hydrogen bonds were used to assess structural similarity. A thorough analysis of the PDB archive yielded 517 pairs of native and recombinant proteins with identical sequences. There were no pairs of proteins that had the same sequence but significantly different structural folds, which supports the hypothesis that proteins expressed in a heterologous host usually fold correctly into their native forms.

  10. 454 next generation-sequencing outperforms allele-specific PCR, Sanger sequencing, and pyrosequencing for routine KRAS mutation analysis of formalin-fixed, paraffin-embedded samples

    PubMed Central

    Altimari, Annalisa; de Biase, Dario; De Maglio, Giovanna; Gruppioni, Elisa; Capizzi, Elisa; Degiovanni, Alessio; D’Errico, Antonia; Pession, Annalisa; Pizzolitto, Stefano; Fiorentino, Michelangelo; Tallini, Giovanni

    2013-01-01

    Detection of KRAS mutations in archival pathology samples is critical for therapeutic appropriateness of anti-EGFR monoclonal antibodies in colorectal cancer. We compared the sensitivity, specificity, and accuracy of Sanger sequencing, ARMS-Scorpion (TheraScreen®) real-time polymerase chain reaction (PCR), pyrosequencing, chip array hybridization, and 454 next-generation sequencing to assess KRAS codon 12 and 13 mutations in 60 nonconsecutive selected cases of colorectal cancer. Twenty of the 60 cases were detected as wild-type KRAS by all methods with 100% specificity. Among the 40 mutated cases, 13 were discrepant with at least one method. The sensitivity was 85%, 90%, 93%, and 92%, and the accuracy was 90%, 93%, 95%, and 95% for Sanger sequencing, TheraScreen real-time PCR, pyrosequencing, and chip array hybridization, respectively. The main limitation of Sanger sequencing was its low analytical sensitivity, whereas TheraScreen real-time PCR, pyrosequencing, and chip array hybridization showed higher sensitivity but suffered from the limitations of predesigned assays. Concordance between the methods was k = 0.79 for Sanger sequencing and k > 0.85 for the other techniques. Tumor cell enrichment correlated significantly with the abundance of KRAS-mutated deoxyribonucleic acid (DNA), evaluated as ΔCt for TheraScreen real-time PCR (P = 0.03), percentage of mutation for pyrosequencing (P = 0.001), ratio for chip array hybridization (P = 0.003), and percentage of mutation for 454 next-generation sequencing (P = 0.004). Also, 454 next-generation sequencing showed the best cross correlation for quantification of mutation abundance compared with all the other methods (P < 0.001). Our comparison showed the superiority of next-generation sequencing over the other techniques in terms of sensitivity and specificity. Next-generation sequencing will replace Sanger sequencing as the reference technique for diagnostic detection of KRAS mutation in archival tumor tissues. PMID
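The concordance values reported above (k = 0.79, k > 0.85) are Cohen's kappa, chance-corrected agreement between two methods' mutant/wild-type calls. A minimal sketch on illustrative data (not the study's calls):

```python
def cohens_kappa(calls_a, calls_b):
    """Cohen's kappa for two raters' categorical calls:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(calls_a) == len(calls_b)
    n = len(calls_a)
    categories = set(calls_a) | set(calls_b)
    observed = sum(a == b for a, b in zip(calls_a, calls_b)) / n
    expected = sum((calls_a.count(c) / n) * (calls_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Illustrative: two methods agree on 6 of 8 samples, each calling 4 mutant.
calls_a = ["mut", "mut", "mut", "mut", "wt", "wt", "wt", "wt"]
calls_b = ["mut", "mut", "mut", "wt", "wt", "wt", "wt", "mut"]
print(cohens_kappa(calls_a, calls_b))  # 0.5
```

Observed agreement is 0.75 here, chance agreement 0.5, hence kappa 0.5; perfect agreement gives kappa 1.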

  11. A Method to Evaluate Genome-Wide Methylation in Archival Formalin-Fixed, Paraffin-Embedded Ovarian Epithelial Cells

    PubMed Central

    Li, Qiling; Li, Min; Ma, Li; Li, Wenzhi; Wu, Xuehong; Richards, Jendai; Fu, Guoxing; Xu, Wei; Bythwood, Tameka; Li, Xu; Wang, Jianxin; Song, Qing

    2014-01-01

    Background The use of DNA from archival formalin-fixed and paraffin-embedded (FFPE) tissue for genetic and epigenetic analyses may be problematic, since the DNA is often degraded and only limited amounts may be available. Thus, it is currently not known whether genome-wide methylation can be reliably assessed in DNA from archival FFPE tissue. Methodology/Principal Findings Ovarian tissues, which were obtained and formalin-fixed and paraffin-embedded in either 1999 or 2011, were sectioned and stained with hematoxylin-eosin (H&E). Epithelial cells were captured by laser microdissection, and their DNA was subjected to whole-genome bisulfite conversion, whole-genome polymerase chain reaction (PCR) amplification, and purification. Sequencing and software analyses were performed to identify the extent of genomic methylation. We observed that 31.7% of sequence reads from the DNA in the 1999 archival FFPE tissue, and 70.6% of the reads from the 2011 sample, could be matched to the genome. Methylation rates of CpG on the Watson and Crick strands were 32.2% and 45.5%, respectively, in the 1999 sample, and 65.1% and 42.7% in the 2011 sample. Conclusions/Significance We have developed an efficient method that allows DNA methylation to be assessed in archival FFPE tissue samples. PMID:25133528

  12. Sequencing the Unknown

    NASA Image and Video Library

    2017-12-19

    Being able to identify microbes in real time aboard the International Space Station, without having to send them back to Earth for identification first, would be revolutionary for the world of microbiology and space exploration, and the Genes in Space-3 team turned that possibility into a reality this year when it completed the first-ever sample-to-sequence process entirely aboard the space station. This advance could aid in the ability to diagnose and treat astronaut ailments in real time, as well as assisting in the identification of DNA-based life on other planets. It could also benefit other experiments aboard the orbiting laboratory. HD Download: https://archive.org/details/jsc2017m001160_Sequencing_the_Unknown

  13. The Gediz River fluvial archive: A benchmark for Quaternary research in Western Anatolia

    NASA Astrophysics Data System (ADS)

    Maddy, D.; Veldkamp, A.; Demir, T.; van Gorp, W.; Wijbrans, J. R.; van Hinsbergen, D. J. J.; Dekkers, M. J.; Schreve, D.; Schoorl, J. M.; Scaife, R.; Stemerdink, C.; van der Schriek, T.; Bridgland, D. R.; Aytaç, A. S.

    2017-06-01

    The Gediz River, one of the principal rivers of Western Anatolia, has an extensive Pleistocene fluvial archive that potentially offers a unique window into fluvial system behaviour on the western margins of Asia during the Quaternary. In this paper we review our work on the Quaternary Gediz River Project (2001-2010) and present new data that lead to a revised stratigraphical model for the Early Pleistocene development of this fluvial system. In previous work we confirmed the preservation of eleven buried Early Pleistocene fluvial terraces of the Gediz River (designated GT11, the oldest and highest, to GT1, the youngest and lowest), which lie beneath the basalt-covered plateaux of the Kula Volcanic Province. Deciphering the information locked in this fluvial archive requires the construction of a robust geochronology. Fortunately, the Gediz archive provides ample opportunity for age constraint based upon age estimates derived from basaltic lava flows that repeatedly entered the palaeo-Gediz valley floors. In this paper we present, for the first time, our complete dataset of 40Ar/39Ar age estimates and associated palaeomagnetic measurements. These data, which can be directly related to the underlying fluvial deposits, provide age constraints critical to our understanding of this sequence. The new chronology establishes the onset of Quaternary volcanism at ∼1320 ka (MIS 42). This volcanism, which is associated with GT6, confirms a pre-MIS 42 age for terraces GT11-GT7. Evidence from the colluvial sequences directly overlying these early terraces suggests that they formed in response to hydrological and sediment budget changes forced by climate-driven vegetation change. The cyclic formation of terraces and their timing suggest that they represent the obliquity-driven climate changes of the Early Pleistocene. By way of contrast, the GT5-GT1 terrace sequence, constrained by a lava flow with an age estimate of ∼1247 ka, spans the time interval MIS 42 - MIS 38 and therefore do not

  14. High quality methylome-wide investigations through next-generation sequencing of DNA from a single archived dry blood spot

    PubMed Central

    Aberg, Karolina A.; Xie, Lin Y.; Nerella, Srilaxmi; Copeland, William E.; Costello, E. Jane; van den Oord, Edwin J.C.G.

    2013-01-01

    The potential importance of DNA methylation in the etiology of complex diseases has led to interest in the development of methylome-wide association studies (MWAS) aimed at interrogating all methylation sites in the human genome. When blood is used as the biomaterial for a MWAS, the DNA is typically extracted directly from fresh or frozen whole blood collected via venous puncture. However, DNA extracted from dry blood spots may be an alternative starting material. In the present study, we apply a methyl-CpG binding domain (MBD) protein enrichment-based technique in combination with next-generation sequencing (MBD-seq) to assess the methylation status of the ~27 million CpGs in the human autosomal reference genome. We investigate eight methylomes using DNA from blood spots. These data are compared with 1,500 methylomes previously assayed with the same MBD-seq approach using DNA from whole blood. When investigating the sequence quality and the enrichment profile across biological features, we find that DNA extracted from blood spots gives results comparable with DNA extracted from whole blood. Only when the amount of starting material is ≤0.5 µg of DNA do we observe a slight decrease in assay performance. In conclusion, we show that high quality methylome-wide investigations using MBD-seq can be conducted on DNA extracted from archived dry blood spots without sacrificing quality and without bias in the enrichment profile, as long as the amount of starting material is sufficient. In general, the amount of DNA extracted from a single blood spot is sufficient for methylome-wide investigations with the MBD-seq approach. PMID:23644822

  15. High quality methylome-wide investigations through next-generation sequencing of DNA from a single archived dry blood spot.

    PubMed

    Aberg, Karolina A; Xie, Lin Y; Nerella, Srilaxmi; Copeland, William E; Costello, E Jane; van den Oord, Edwin J C G

    2013-05-01

    The potential importance of DNA methylation in the etiology of complex diseases has led to interest in the development of methylome-wide association studies (MWAS) aimed at interrogating all methylation sites in the human genome. When blood is used as the biomaterial for a MWAS, the DNA is typically extracted directly from fresh or frozen whole blood collected via venous puncture. However, DNA extracted from dry blood spots may be an alternative starting material. In the present study, we apply a methyl-CpG binding domain (MBD) protein enrichment-based technique in combination with next-generation sequencing (MBD-seq) to assess the methylation status of the ~27 million CpGs in the human autosomal reference genome. We investigate eight methylomes using DNA from blood spots. These data are compared with 1,500 methylomes previously assayed with the same MBD-seq approach using DNA from whole blood. When investigating the sequence quality and the enrichment profile across biological features, we find that DNA extracted from blood spots gives results comparable with DNA extracted from whole blood. Only when the amount of starting material is ≤0.5 µg of DNA do we observe a slight decrease in assay performance. In conclusion, we show that high quality methylome-wide investigations using MBD-seq can be conducted on DNA extracted from archived dry blood spots without sacrificing quality and without bias in the enrichment profile, as long as the amount of starting material is sufficient. In general, the amount of DNA extracted from a single blood spot is sufficient for methylome-wide investigations with the MBD-seq approach.

  16. On missing Data Treatment for degraded video and film archives: a survey and a new Bayesian approach.

    PubMed

    Kokaram, Anil C

    2004-03-01

    Image sequence restoration has been steadily gaining in importance with the increasing prevalence of visual digital media. The demand for content increases the pressure on archives to automate their restoration activities for preservation of the cultural heritage that they hold. There are many defects that affect archived visual material and one central issue is that of Dirt and Sparkle, or "Blotches." Research in archive restoration has been conducted for more than a decade and this paper places that material in context to highlight the advances made during that time. The paper also presents a new and simpler Bayesian framework that achieves joint processing of noise, missing data, and occlusion.

  17. The ISO Data Archive and Interoperability with Other Archives

    NASA Astrophysics Data System (ADS)

    Salama, Alberto; Arviset, Christophe; Hernández, José; Dowson, John; Osuna, Pedro

    The ESA's Infrared Space Observatory (ISO), an unprecedented observatory for infrared astronomy launched in November 1995, successfully made nearly 30,000 scientific observations in its 2.5-year mission. The ISO data can be retrieved from the ISO Data Archive, which comprises about 150,000 observations, including parallel and serendipity mode observations. A user-friendly Java interface permits queries to the database and data retrieval. The interface currently offers a wide variety of links to other archives, such as name resolution with NED and SIMBAD, access to electronic articles from ADS and CDS/VizieR, and access to IRAS data. In the past year, development has focused on improving the IDA's interoperability with other astronomical archives, either by accessing other relevant archives or by providing direct access to the ISO data for external services. A mechanism of information transfer has been developed, allowing direct queries to the IDA via a Java Server Page, returning quick-look ISO images and relevant, observation-specific information embedded in an HTML page. This method has been used to link from the CDS/VizieR Data Centre and ADS, and work with IPAC to allow access to the ISO Archive from IRSA, including display capabilities of the observed sky regions onto other mission images, is in progress. Prospects for further links to and from other archives and databases are also addressed.

  18. Trace elements records from vermetids aragonite as millennial paleo-oceanographic archives in the South-East Mediterranean

    NASA Astrophysics Data System (ADS)

    Jacobson, Yitzhak; Yam, Ruth; Shemesh, Aldo

    2017-04-01

    The Mediterranean Sea is a region under high anthropogenic stress, and thus a hotspot for climate change studies. Natural conditions, such as SST, productivity, precipitation and dust fluxes, along with human-induced activity, affect seawater chemistry. We study millennial variability of trace elements in high-resolution records from the East Mediterranean Sea, in an attempt to connect them to environmental factors. The Mediterranean reef-building vermetid D. petraeum is a sessile gastropod that secretes its aragonite shell in the tidal zone. Cores of vermetid reefs from the South Eastern Mediterranean (Israel) were previously analyzed by Sisma-Ventura et al. (2014) to reconstruct sea surface temperature (SST) and δ13C of dissolved inorganic carbon (DIC). In this study we analyzed trace elements in these vermetid cores and reconstructed millennial records of element-to-calcium (el/Ca) molar ratios. Vermetid trace element contents from recent decades are mostly in agreement with known values for marine biogenic aragonites from corals and mollusks. We divide the vermetid trace element records into three element groups: 1) Sr and U, which are related to SST and DIC. These elements correlate with major climatic events of the last millennium, such as the Medieval Warm Period (900-1300 AD) and the Little Ice Age (1450-1850 AD). 2) Pb and Cd, which are related to anthropogenic pollution and show industrially sourced trends throughout the Anthropocene (since 1750 AD). 3) Terrigenous elements, including Fe, Al, Mn and V. Al in seawater and sediments has been used to trace water masses and land-derived sediment sources. We observe a major change in average vermetid Al/Fe ratios from 0.5 to 2.5 over the recorded period (n=72). This change in vermetid Al/Fe points to a possible shift from Nilotic sediments (0.1-0.5 Al/Fe molar ratio) to a Saharan dust ratio (2-4 Al/Fe molar ratio). Mn and V show variability similar to Fe. Understanding the variability of vermetid TE can help us interpret the relative
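The el/Ca and Al/Fe values above are molar ratios, so measured mass concentrations must be divided by atomic masses before taking the ratio. A minimal sketch (standard atomic masses; the function name is illustrative):

```python
# Standard atomic masses in g/mol.
ATOMIC_MASS = {"Al": 26.98, "Fe": 55.85, "Ca": 40.08, "Sr": 87.62}

def molar_ratio(conc_a, el_a, conc_b, el_b):
    """el_a/el_b molar ratio from mass concentrations in the same units
    (e.g. µg/g of aragonite)."""
    return (conc_a / ATOMIC_MASS[el_a]) / (conc_b / ATOMIC_MASS[el_b])

# Equal masses of Al and Fe correspond to about twice as many moles of Al,
# because Al is roughly half as heavy per atom.
print(round(molar_ratio(1.0, "Al", 1.0, "Fe"), 2))  # 2.07
```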

  19. The Planetary Archive

    NASA Astrophysics Data System (ADS)

    Penteado, Paulo F.; Trilling, David; Szalay, Alexander; Budavári, Tamás; Fuentes, César

    2014-11-01

    We are building the first system that will allow efficient data mining of the astronomical archives for observations of Solar System bodies. While the Virtual Observatory has enabled data-intensive research making use of large collections of observations across multiple archives, Planetary Science has largely been denied this opportunity: most astronomical data services are built around sky positions, and moving objects are often filtered out. To identify serendipitous observations of Solar System objects, we ingest the archive metadata. The coverage of each image in an archive is a volume in a 3D space (RA, Dec, time), which we can represent efficiently through a hierarchical triangular mesh (HTM) for the spatial dimensions, plus a contiguous time interval. In this space, an asteroid occupies a curve, which we determine by integrating its orbit into the past. Thus, when an asteroid trajectory intercepts the volume of an archived image, we have a possible observation of that body. Our pipeline then looks in the archive's catalog for a source with the corresponding coordinates, to retrieve its photometry. All these matches are stored in a database, which can be queried by object identifier. This database consists of archived observations of known Solar System objects. This means that it grows not only from the ingestion of new images, but also from the growth in the number of known objects. As new bodies are discovered, our pipeline can find archived observations where they could have been recorded, providing colors for these newly-found objects. This growth becomes more relevant with the new generation of wide-field surveys, particularly LSST. We also present one use case of our prototype archive: after ingesting the metadata for SDSS, 2MASS and GALEX, we were able to identify serendipitous observations of Solar System bodies in these 3 archives. Cross-matching these occurrences provided us with colors from the UV to the IR, a much wider spectral range than that
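The matching scheme described above, image coverage as a spatial index plus a time interval intersected with an integrated trajectory, can be sketched as follows. This is a toy illustration: a coarse integer sky-cell grid stands in for the HTM, the trajectory is a precomputed list of (ra, dec, time) samples rather than an orbit integration, and all names are hypothetical:

```python
from collections import defaultdict

def cell(ra, dec, deg=1.0):
    """Coarse sky cell standing in for an HTM trixel (assumption: 1-degree bins)."""
    return (int(ra // deg), int(dec // deg))

class ArchiveIndex:
    def __init__(self):
        # sky cell -> list of (t_start, t_end, image_id) coverage volumes
        self._images = defaultdict(list)

    def ingest(self, image_id, cells, t_start, t_end):
        """Register an image's spatial footprint and exposure time interval."""
        for c in cells:
            self._images[c].append((t_start, t_end, image_id))

    def match(self, trajectory):
        """Return the ids of images whose (cell, time) volume the trajectory crosses."""
        hits = set()
        for ra, dec, t in trajectory:
            for t0, t1, image_id in self._images.get(cell(ra, dec), []):
                if t0 <= t <= t1:
                    hits.add(image_id)
        return hits

idx = ArchiveIndex()
idx.ingest("img-001", [cell(120.3, -5.2)], t_start=100.0, t_end=101.0)

# Two trajectory samples: the first falls inside the image's cell and exposure window.
traj = [(120.3, -5.2, 100.5), (120.4, -5.1, 102.0)]
print(idx.match(traj))  # {'img-001'}
```

A match here only flags a candidate; as in the pipeline described above, the archive's source catalog must still be queried at those coordinates to confirm a detection and retrieve photometry.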

  20. Classification and Lineage Tracing of SH2 Domains Throughout Eukaryotes.

    PubMed

    Liu, Bernard A

    2017-01-01

    Today there exists a rapidly expanding number of sequenced genomes. Cataloging protein interaction domains such as the Src Homology 2 (SH2) domain across these various genomes can be accomplished with ease thanks to existing algorithms and prediction models. An evolutionary analysis of SH2 domains provides a step towards understanding how SH2 proteins integrated with existing signaling networks to position phosphotyrosine signaling as a crucial driver of robust cellular communication networks in metazoans. However, organizing and tracing SH2 domains across organisms and understanding their evolutionary trajectory remains a challenge. This chapter describes several methodologies for analyzing the evolutionary trajectory of SH2 domains, including a global SH2 domain classification system that facilitates annotation of new SH2 sequences, essential for tracing the lineage of SH2 domains throughout eukaryote evolution. This classification utilizes a combination of sequence homology, protein domain architecture and the boundary positions between introns and exons within the SH2 domains or the genes encoding them. Discrete SH2 families can then be traced across various genomes to provide insight into their origins. Furthermore, additional methods for examining potential mechanisms of SH2 domain divergence, from structural changes to alterations in protein domain content and genome duplication, are discussed. A better understanding of SH2 domain evolution may thus enhance our insight into the emergence of phosphotyrosine signaling and the expansion of protein interaction domains.

  1. The last Deglaciation in the Mediterranean region: a multi-archives synthesis

    NASA Astrophysics Data System (ADS)

    Bazin, Lucie; Siani, Giuseppe; Landais, Amaelle; Bassinot, Frank; Genty, Dominique; Govin, Aline; Michel, Elisabeth; Nomade, Sebastien; Waelbroeck, Claire

    2016-04-01

    Multiple proxies record past climatic changes in different climate archives. These proxies are influenced by different components of the climate system and bring complementary information on past climate variability. The major limitation when combining proxies from different archives comes from the coherency of their chronologies. Indeed, each climate archive has its own dating methods, which are not necessarily coherent with one another. Consequently, when we want to assess the latitudinal changes and mechanisms behind a climate event, we often have to rely on assumptions of synchronisation between the different archives, such as synchronous temperature changes during warming events (Austin and Hibbert, 2010). Recently, a dating method originally developed to produce coherent chronologies for ice cores (Datice; Lemieux-Dudon et al., 2010) has been adapted to integrate different climate archives: ice cores, sediment cores and speleothems (Lemieux-Dudon et al., 2015; Bazin et al., in prep.). Here we present the validation of this multi-archive dating tool with a first application covering the last Deglaciation in the Mediterranean region. For this experiment, we consider the records from the Monticchio, MD90-917, Tenaghi Philippon and Lake Ohrid sediment cores, as well as continuous speleothems from Sofular, Soreq and La Mine caves. Using the Datice dating tool, and with the identification of common tephra layers between the cores considered, we are able to produce a coherent multi-archive chronology for this region, independently of any climatic assumption. Using this common chronological framework, we show that the usual climatic synchronisation assumptions are not valid over this region for the last glacial-interglacial transition. Finally, we compare our coherent Mediterranean chronology with Greenland ice core records in order to discuss the sequence of events of the last Deglaciation between these two regions.

  2. Ancient DNA analysis identifies marine mollusc shells as new metagenomic archives of the past.

    PubMed

    Der Sarkissian, Clio; Pichereau, Vianney; Dupont, Catherine; Ilsøe, Peter C; Perrigault, Mickael; Butler, Paul; Chauvaud, Laurent; Eiríksson, Jón; Scourse, James; Paillard, Christine; Orlando, Ludovic

    2017-09-01

    Marine mollusc shells enclose a wealth of information on coastal organisms and their environment. Their life history traits as well as (palaeo-) environmental conditions, including temperature, food availability, salinity and pollution, can be traced through the analysis of their shell (micro-) structure and biogeochemical composition. Adding to this list, the DNA entrapped in shell carbonate biominerals potentially offers a novel and complementary proxy both for reconstructing palaeoenvironments and tracking mollusc evolutionary trajectories. Here, we assess this potential by applying DNA extraction, high-throughput shotgun DNA sequencing and metagenomic analyses to marine mollusc shells spanning the last ~7,000 years. We report successful DNA extraction from shells, including a variety of ancient specimens, and find that DNA recovery is highly dependent on their biomineral structure, carbonate layer preservation and disease state. We demonstrate positive taxonomic identification of mollusc species using a combination of mitochondrial DNA genomes, barcodes, genome-scale data and metagenomic approaches. We also find shell biominerals to contain a diversity of microbial DNA from the marine environment. Finally, we reconstruct genomic sequences of organisms closely related to the Vibrio tapetis bacteria from Manila clam shells previously diagnosed with Brown Ring Disease. Our results reveal marine mollusc shells as novel genetic archives of the past, which opens new perspectives in ancient DNA research, with the potential to reconstruct the evolutionary history of molluscs, microbial communities and pathogens in the face of environmental changes. Other future applications include conservation of endangered mollusc species and aquaculture management. © 2017 John Wiley & Sons Ltd.

  3. VLBA Archive & Distribution Architecture

    NASA Astrophysics Data System (ADS)

    Wells, D. C.

    1994-01-01

    Signals from the 10 antennas of NRAO's VLBA [Very Long Baseline Array] are processed by a Correlator. The complex fringe visibilities produced by the Correlator are archived on magnetic cartridges using a low-cost architecture which is capable of scaling and evolving. Archive files are copied to magnetic media to be distributed to users in FITS format, using the BINTABLE extension. Archive files are labelled using SQL INSERT statements, in order to bind the DBMS-based archive catalog to the archive media.

  4. K-Shell Photoabsorption and Photoionisation of Trace Elements I. Isoelectronic Sequences With Electron Number 3 ≤ N ≤ 11

    NASA Technical Reports Server (NTRS)

    Palmeri, P.; Quinet, P.; Mendoza, C.; Bautista, M. A.; Witthoeft, M. C.; Kallman, T. R.

    2016-01-01

    Context. With the recent launch of the Hitomi X-ray space observatory, K lines and edges of chemical elements with low cosmic abundances, namely F, Na, P, Cl, K, Sc, Ti, V, Cr, Mn, Co, Cu and Zn, can be resolved and used to determine important properties of supernova remnants, galaxy clusters and accreting black holes and neutron stars. Aims. The second stage of the present ongoing project involves the computation of the accurate photoabsorption and photoionisation cross sections required to interpret the X-ray spectra of such trace elements. Methods. Depending on target complexity and computational tractability, ground-state cross sections are computed either with the close-coupling Breit-Pauli R-matrix method or with the autostructure atomic structure code in the isolated-resonance approximation. The intermediate-coupling scheme is used whenever possible. In order to determine a realistic K-edge behaviour for each species, both radiative and Auger damping are taken into account, the latter being included in the R-matrix formalism by means of an optical potential. Results. Photoabsorption and total and partial photoionisation cross sections are reported for isoelectronic sequences with electron numbers 3 ≤ N ≤ 11. The Na sequence (N = 11) is used to estimate the contributions from configurations with a 2s hole (i.e. [2s]) and those containing 3d orbitals, which will be crucial when considering sequences with N > 11. Conclusions. It is found that the [2s] configurations must be included in the target representations of species with N > 11, as they contribute significantly to the monotonic background of the cross section between the L and K edges. Configurations with 3d orbitals are important in rendering an accurate L edge, but they can be practically neglected in the K-edge region.

  5. Trace conditioning in insects—keep the trace!

    PubMed Central

    Dylla, Kristina V.; Galili, Dana S.; Szyszka, Paul; Lüdke, Alja

    2013-01-01

    Trace conditioning is a form of associative learning that can be induced by presenting a conditioned stimulus (CS) and an unconditioned stimulus (US) following each other, but separated by a temporal gap. This gap distinguishes trace conditioning from classical delay conditioning, where the CS and US overlap. To bridge the temporal gap between both stimuli and to form an association between CS and US in trace conditioning, the brain must keep a neural representation of the CS after its termination: a stimulus trace. Behavioral and physiological studies on trace and delay conditioning have revealed similarities between the two forms of learning, such as similar memory decay and similar odor identity perception in invertebrates. On the other hand, differences have also been reported, such as the requirement of distinct brain structures in vertebrates or disparities in molecular mechanisms in both vertebrates and invertebrates. For example, in commonly used vertebrate conditioning paradigms the hippocampus is necessary for trace but not for delay conditioning, and Drosophila delay conditioning requires the Rutabaga adenylyl cyclase (Rut-AC), which is dispensable in trace conditioning. It is still unknown how the brain encodes CS traces and how they are associated with a US in trace conditioning. Insects serve as powerful models for addressing the mechanisms underlying trace conditioning, due to their simple brain anatomy, behavioral accessibility and established methods of genetic interference. In this review we summarize recent progress in insect trace conditioning on the behavioral and physiological levels and emphasize similarities and differences compared to delay conditioning. Moreover, we examine proposed molecular and computational models and reassess different experimental approaches used for trace conditioning. PMID:23986710

  6. My Dream Archive

    ERIC Educational Resources Information Center

    Phelps, Christopher

    2007-01-01

    In this article, the author shares his experience as he traveled from island to island with a single objective--to reach the archives. He found out that not all archives are the same. In recent months, his daydreaming in various facilities has yielded a recurrent question on what would constitute the Ideal Archive. What follows, in no particular…

  7. Dose-Response Analysis of RNA-Seq Profiles in Archival Formalin-Fixed Paraffin-Embedded (FFPE) Samples.

    EPA Science Inventory

    Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses us...

  8. HIV-TRACE (Transmission Cluster Engine): a tool for large scale molecular epidemiology of HIV-1 and other rapidly evolving pathogens.

    PubMed

    Kosakovsky Pond, Sergei L; Weaver, Steven; Leigh Brown, Andrew J; Wertheim, Joel O

    2018-01-31

    In modern applications of molecular epidemiology, genetic sequence data are routinely used to identify clusters of transmission in rapidly evolving pathogens, most notably HIV-1. Traditional 'shoe-leather' epidemiology infers transmission clusters by tracing chains of partners sharing epidemiological connections (e.g., sexual contact). Here, we present a computational tool for identifying a molecular transmission analog of such clusters: HIV-TRACE (TRAnsmission Cluster Engine). HIV-TRACE implements an approach inspired by traditional epidemiology, identifying chains of partners whose viral genetic relatedness implies direct or indirect epidemiological connections. Molecular transmission clusters are constructed using codon-aware pairwise alignment to a reference sequence, followed by pairwise genetic distance estimation among all sequences. This approach is computationally tractable and is capable of identifying HIV-1 transmission clusters in large surveillance databases comprising tens or hundreds of thousands of sequences in near real time, i.e., on the order of minutes to hours. HIV-TRACE is available at www.hivtrace.org and from github.com/veg/hivtrace, along with the accompanying result visualization module from github.com/veg/hivtrace-viz. Importantly, the approach underlying HIV-TRACE is not limited to the study of HIV-1 and can be applied to study outbreaks and epidemics of other rapidly evolving pathogens. © The Author 2018. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
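The clustering step described above (link every pair of sequences below a genetic distance threshold, then read off the chains) amounts to finding connected components over pairwise distances. A minimal sketch only: it uses a plain p-distance where HIV-TRACE uses the TN93 distance, and the sequences and threshold are made up:

```python
from itertools import combinations

def p_distance(a, b):
    """Fraction of differing positions between two aligned sequences.
    (HIV-TRACE itself uses the TN93 distance; p-distance keeps the sketch short.)"""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def transmission_clusters(seqs, threshold):
    """Link every pair of sequences whose distance is <= threshold and return
    the connected components: the molecular analog of epidemiological clusters."""
    parent = {name: name for name in seqs}

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (na, sa), (nb, sb) in combinations(seqs.items(), 2):
        if p_distance(sa, sb) <= threshold:
            parent[find(na)] = find(nb)

    clusters = {}
    for name in seqs:
        clusters.setdefault(find(name), set()).add(name)
    return list(clusters.values())

# Hypothetical aligned fragments: A, B and C are mutually close; D is distant.
seqs = {"A": "ACGTACGTAC", "B": "ACGTACGTAA", "C": "ACGTACGAAA", "D": "TTTTACGTAC"}
print(transmission_clusters(seqs, threshold=0.21))  # A, B, C cluster; D stands alone
```

All-pairs comparison is quadratic in the number of sequences; the near-real-time performance the abstract reports on surveillance-scale databases comes from HIV-TRACE's own optimized implementation, not from a sketch like this.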

  9. Archive of Digital Chirp Subbottom Profile Data Collected During USGS Cruise 14BIM05 Offshore of Breton Island, Louisiana, August 2014

    USGS Publications Warehouse

    Forde, Arnell S.; Flocks, James G.; Wiese, Dana S.; Fredericks, Jake J.

    2016-03-29

    The archived trace data are in standard SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in American Standard Code for Information Interchange (ASCII) format instead of Extended Binary Coded Decimal Interchange Code (EBCDIC) format. The SEG Y files are available on the DVD version of this report or online, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. The Web version of this archive does not contain the SEG Y trace files. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The printable profiles are provided as Graphics Interchange Format (GIF) images processed and gained using SU software and can be viewed from the Profiles page or by using the links located on the trackline maps; refer to the Software page for links to example SU processing scripts.
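The detail above about the 3,200-byte card image header being ASCII rather than EBCDIC matters when reading these files programmatically. A minimal sketch of reading that textual header, using a simple first-byte heuristic to detect the encoding (the heuristic is an assumption of this illustration, not part of the SEG Y standard):

```python
def read_textual_header(path):
    """Read the 3,200-byte SEG Y 'card image' textual header as 40 cards of
    80 characters each. Standard SEG Y rev. 0 headers are EBCDIC (code page
    037); this archive stores them as ASCII, so we detect which is present."""
    with open(path, "rb") as f:
        raw = f.read(3200)
    # Heuristic (an assumption of this sketch): textual headers conventionally
    # begin with the letter 'C', which is byte 0x43 in ASCII but 0xC3 in EBCDIC.
    encoding = "ascii" if raw[:1] == b"C" else "cp037"
    text = raw.decode(encoding, errors="replace")
    return [text[i:i + 80] for i in range(0, 3200, 80)]
```

For actual trace processing one would hand the file to Seismic Unix or a dedicated SEG Y library; this only shows the encoding detail the report calls out.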

  10. Fermilab History and Archives Project | Home

    Science.gov Websites

    Home page of the Fermilab History and Archives Project at Fermi National Accelerator Laboratory, documenting the history of the laboratory.

  11. GTSO: Global Trace Synchronization and Ordering Mechanism for Wireless Sensor Network Monitoring Platforms

    PubMed Central

    Bonastre, Alberto; Ors, Rafael

    2017-01-01

    Monitoring is one of the best ways to evaluate the behavior of computer systems. When the monitored system is a distributed system, such as a wireless sensor network (WSN), the monitoring operation must also be distributed, providing a distributed trace for further analysis. The temporal sequence of the events registered by the distributed monitoring platform (DMP) must be correctly established to provide cause-effect relationships between them, so the logs obtained at the different monitor nodes must be synchronized. Many of the synchronization mechanisms applied to DMPs consist of adjusting the internal clocks of the nodes to the same value as a reference time. However, these mechanisms can create an incoherent event sequence. This article presents a new method to achieve global synchronization of the traces obtained in a DMP. It is based on periodic synchronization signals that are received by the monitor nodes and logged along with the recorded events. This mechanism processes all traces and generates a global post-synchronized trace by scaling all registered times proportionally according to the synchronization signals. It is intended to be a simple but efficient offline mechanism. Its application in a WSN-DMP demonstrates that it guarantees a correct ordering of the events, avoiding the aforementioned issues. PMID:29295494
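The proportional rescaling described above can be sketched as piecewise-linear interpolation between logged synchronization signals. A minimal illustration with hypothetical timestamps (the published GTSO mechanism handles complete traces from many nodes, not a single event):

```python
def to_global_time(event_t, local_syncs, global_syncs):
    """Map a local event timestamp onto the global timeline by scaling it
    proportionally between the two nearest synchronization signals (one
    linear segment per pair of consecutive signals)."""
    for i in range(len(local_syncs) - 1):
        l0, l1 = local_syncs[i], local_syncs[i + 1]
        if l0 <= event_t <= l1:
            g0, g1 = global_syncs[i], global_syncs[i + 1]
            # Linear rescaling: preserve the event's relative position
            # between the two sync signals on the global timeline.
            return g0 + (event_t - l0) * (g1 - g0) / (l1 - l0)
    raise ValueError("event outside the synchronized range")

# A node whose clock runs 2% fast: sync signals as logged locally vs. the
# reference times at which they were emitted (all values hypothetical).
local_syncs = [100.0, 202.0, 304.0]
global_syncs = [100.0, 200.0, 300.0]
print(to_global_time(151.0, local_syncs, global_syncs))  # -> 150.0
```

Because every node's events are mapped onto the same reference timeline, event ordering across nodes is preserved even when individual clocks drift at different rates.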

  12. GTSO: Global Trace Synchronization and Ordering Mechanism for Wireless Sensor Network Monitoring Platforms.

    PubMed

    Navia, Marlon; Campelo, José Carlos; Bonastre, Alberto; Ors, Rafael

    2017-12-23

    Monitoring is one of the best ways to evaluate the behavior of computer systems. When the monitored system is a distributed system, such as a wireless sensor network (WSN), the monitoring operation must also be distributed, providing a distributed trace for further analysis. The temporal sequence of the events registered by the distributed monitoring platform (DMP) must be correctly established to provide cause-effect relationships between them, so the logs obtained at the different monitor nodes must be synchronized. Many of the synchronization mechanisms applied to DMPs consist of adjusting the internal clocks of the nodes to the same value as a reference time. However, these mechanisms can create an incoherent event sequence. This article presents a new method to achieve global synchronization of the traces obtained in a DMP. It is based on periodic synchronization signals that are received by the monitor nodes and logged along with the recorded events. This mechanism processes all traces and generates a global post-synchronized trace by scaling all registered times proportionally according to the synchronization signals. It is intended to be a simple but efficient offline mechanism. Its application in a WSN-DMP demonstrates that it guarantees a correct ordering of the events, avoiding the aforementioned issues.

  13. A mid-twentieth century reduction in tropical upwelling inferred from coralline trace element proxies

    NASA Astrophysics Data System (ADS)

    Reuer, Matthew K.; Boyle, Edward A.; Cole, Julia E.

    2003-05-01

    The Cariaco Basin is an important archive of past climate variability given its response to inter- and extratropical climate forcing and the accumulation of annually laminated sediments within an anoxic water column. This study presents high-resolution surface coral trace element records (Montastrea annularis and Siderastrea siderea) from Isla Tortuga, Venezuela, located within the upwelling center of this region. A two-fold reduction in Cd/Ca ratios (from 3.5 to 1.7 nmol/mol) is observed from 1946 to 1952, with no concurrent shift in Ba/Ca ratios. This reduction agrees with the hydrographic distribution of dissolved cadmium and barium and their expected response to upwelling. Significant anthropogenic variability is also observed in the Pb/Ca analysis, with three lead maxima since 1920. Kinetic control of trace element ratios is inferred from an interspecies comparison of Cd/Ca and Ba/Ca ratios (consistent with the Sr/Ca kinetic artifact), but these artifacts are smaller than the environmental signal and do not explain the Cd/Ca transition. The trace element records agree with historical climate data and differ from sedimentary faunal abundance records, suggesting that a linear response to North Atlantic extratropical forcing cannot account for the observed historical variability in this region.

  14. Listening to the Mind: Tracing the Auditory History of Mental Illness in Archives and Exhibitions.

    PubMed

    Birdsall, Carolyn; Parry, Manon; Tkaczyk, Viktoria

    2015-11-01

    With increasing interest in the representation of histories of mental health in museums, sound has played a key role as a tool to access a range of voices. This essay discusses how sound can be used to give voice to those previously silenced. The focus is on the use of sound recording in the history of mental health care, and the archival sources left behind for potential reuse. Exhibition strategies explored include the use of sound to interrogate established narratives, to interrupt associations visitors make when viewing the material culture of mental health, and to foster empathic listening among audiences.

  15. Single-Center Experience with a Targeted Next Generation Sequencing Assay for Assessment of Relevant Somatic Alterations in Solid Tumors.

    PubMed

    Paasinen-Sohns, Aino; Koelzer, Viktor H; Frank, Angela; Schafroth, Julian; Gisler, Aline; Sachs, Melanie; Graber, Anne; Rothschild, Sacha I; Wicki, Andreas; Cathomas, Gieri; Mertz, Kirsten D

    2017-03-01

    Companion diagnostics rely on genomic testing of molecular alterations to enable effective cancer treatment. Here we report the clinical application and validation of the Oncomine Focus Assay (OFA), an integrated, commercially available next-generation sequencing (NGS) assay for the rapid and simultaneous detection of single nucleotide variants, short insertions and deletions, copy number variations, and gene rearrangements in 52 cancer genes with therapeutic relevance. Two independent patient cohorts were investigated to define the workflow, turnaround times, feasibility, and reliability of OFA targeted sequencing in clinical application and using archival material. Cohort I consisted of 59 diagnostic clinical samples from the daily routine submitted for molecular testing over a 4-month time period. Cohort II consisted of 39 archival melanoma samples that were up to 15 years old. Libraries were prepared from isolated nucleic acids and sequenced on the Ion Torrent PGM sequencer. Sequencing datasets were analyzed using the Ion Reporter software. Genomic alterations were identified and validated by orthogonal conventional assays including pyrosequencing and immunohistochemistry. Sequencing results of both cohorts, including archival formalin-fixed, paraffin-embedded material stored up to 15 years, were consistent with published variant frequencies. A concordance of 100% between established assays and OFA targeted NGS was observed. The OFA workflow enabled a turnaround of 3½ days. Taken together, OFA was found to be a convenient tool for fast, reliable, broadly applicable and cost-effective targeted NGS of tumor samples in routine diagnostics. Thus, OFA has strong potential to become an important asset for precision oncology. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  16. (Per)Forming Archival Research Methodologies

    ERIC Educational Resources Information Center

    Gaillet, Lynee Lewis

    2012-01-01

    This article raises multiple issues associated with archival research methodologies and methods. Based on a survey of recent scholarship and interviews with experienced archival researchers, this overview of the current status of archival research both complicates traditional conceptions of archival investigation and encourages scholars to adopt…

  17. Archive of digital CHIRP seismic reflection data collected during USGS cruise 06FSH01 offshore of Siesta Key, Florida, May 2006

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Wiese, Dana S.; Robbins, Lisa L.

    2007-01-01

    In May of 2006, the U.S. Geological Survey conducted geophysical surveys offshore of Siesta Key, Florida. This report serves as an archive of unprocessed digital chirp seismic reflection data, trackline maps, navigation files, GIS information, Field Activity Collection System (FACS) logs, observer's logbook, and formal FGDC metadata. Gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.

  18. Archive of digital CHIRP seismic reflection data collected during USGS cruise 06SCC01 offshore of Isles Dernieres, Louisiana, June 2006

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Ferina, Nick F.; Wiese, Dana S.; Flocks, James G.

    2007-01-01

    In June of 2006, the U.S. Geological Survey conducted a geophysical survey offshore of Isles Dernieres, Louisiana. This report serves as an archive of unprocessed digital CHIRP seismic reflection data, trackline maps, navigation files, GIS information, Field Activity Collection System (FACS) logs, observer's logbook, and formal FGDC metadata. Gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic UNIX (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.

  19. The HIPPO Project Archive: Carbon Cycle and Greenhouse Gas Data

    NASA Astrophysics Data System (ADS)

    Christensen, S. W.; Aquino, J.; Hook, L.; Williams, S. F.

    2012-12-01

    The HIAPER (NSF/NCAR Gulfstream V Aircraft) Pole-to-Pole Observations (HIPPO) project measured a comprehensive suite of atmospheric trace gases and aerosols pertinent to understanding the global carbon cycle from the surface to the tropopause and approximately pole-to-pole over the Pacific Ocean. Flights took place over five missions during different seasons from 2009 to 2011. Data and documentation are available to the public from two archives: (1) NCAR's Earth Observing Laboratory (EOL) provides complete aircraft and flight operational data, and (2) the U.S. DOE's Carbon Dioxide Information Analysis Center (CDIAC) provides integrated measurement data products. The integrated products are more generally useful for secondary analyses. Data processing is nearing completion, although improvements to the data will continue to evolve and analyses will continue many years into the future. Periodic new releases of integrated measurement (merged) products will be generated by EOL when individual measurement data have been updated as directed by the Lead Principal Investigator. The EOL and CDIAC archives will share documentation and supplemental links and will ensure that the latest versions of data products are available to users of both archives. The EOL archive (http://www.eol.ucar.edu/projects/hippo/) provides the underlying investigator-provided data, including supporting data sets (e.g. operational satellite, model output, global observations, etc.), and ancillary flight operational information including field catalogs, data quality reports, software, documentation, publications, photos/imagery, and other detailed information about the HIPPO missions. The CDIAC archive provides integrated measurement data products, user documentation, and metadata through the HIPPO website (http://hippo.ornl.gov). These merged products were derived by consistently combining the aircraft state parameters for position, time, temperature, pressure, and wind speed with meteorological

  20. Home - Libraries, Archives, & Museums - Libraries, Archives, & Museums at

    Science.gov Websites

    Home page of the Alaska State Libraries, Archives, & Museums, a division of the Alaska Department of Education and Early Development.

  1. Trace element storage capacity of sediments in dead Posidonia oceanica mat from a chronically contaminated marine ecosystem.

    PubMed

    Di Leonardo, Rossella; Mazzola, Antonio; Cundy, Andrew B; Tramati, Cecilia Doriana; Vizzini, Salvatrice

    2017-01-01

    Posidonia oceanica mat is considered a long-term bioindicator of contamination. Storage and sequestration of trace elements and organic carbon (Corg) were assessed in dead P. oceanica mat and bare sediments from a highly polluted coastal marine area (Augusta Bay, central Mediterranean). Sediment elemental composition and sources of organic matter have been altered since the 1950s. Dead P. oceanica mat displayed a greater ability to bury and store trace elements and Corg than nearby bare sediments, acting as a long-term contaminant sink over the past 120 yr. Trace elements, probably associated with the mineral fraction, were stabilized and trapped despite die-off of the overlying P. oceanica meadow. Mat deposits registered historic contamination phases well, confirming their role as natural archives for recording trace element trends in marine coastal environments. This sediment typology is enriched with seagrass-derived refractory organic matter, which acts mainly as a diluent of trace elements. Bare sediments showed evidence of inwash of contaminated sediments via reworking; more rapid and irregular sediment accumulation; and, because of the high proportions of labile organic matter, a greater capacity to store trace elements. Through different processes, both sediment typologies represent a repository for chemicals and may pose a risk to the marine ecosystem as a secondary source of contaminants in the case of sediment dredging or erosion. Environ Toxicol Chem 2017;36:49-58. © 2016 SETAC.

  2. Mantle-derived trace element variability in olivines and their melt inclusions

    NASA Astrophysics Data System (ADS)

    Neave, David A.; Shorttle, Oliver; Oeser, Martin; Weyer, Stefan; Kobayashi, Katsura

    2018-02-01

    Trace element variability in oceanic basalts is commonly used to constrain the physics of mantle melting and the chemistry of Earth's deep interior. However, the geochemical properties of mantle melts are often overprinted by mixing and crystallisation processes during ascent and storage. Studying primitive melt inclusions offers one solution to this problem, but the fidelity of the melt-inclusion archive to bulk magma chemistry has been repeatedly questioned. To provide a novel check of the melt inclusion record, we present new major and trace element analyses from olivine macrocrysts in the products of two geographically proximal, yet compositionally distinct, primitive eruptions from the Reykjanes Peninsula of Iceland. By combining these macrocryst analyses with new and published melt inclusion analyses we demonstrate that olivines have similar patterns of incompatible trace element (ITE) variability to the inclusions they host, capturing chemical systematics on intra- and inter-eruption scales. ITE variability (element concentrations, ratios, variances and variance ratios) in olivines from the ITE-enriched Stapafell eruption is best accounted for by olivine-dominated fractional crystallisation. In contrast, ITE variability in olivines and inclusions from the ITE-depleted Háleyjabunga eruption cannot be explained by crystallisation alone, and must have originated in the mantle. Compatible trace element (CTE) variability is best described by crystallisation processes in both eruptions. Modest correlations between host and inclusion ITE contents in samples from Háleyjabunga suggest that melt inclusions can be faithful archives of melting and magmatic processes. It also indicates that degrees of ITE enrichment can be estimated from olivines directly when melt inclusion and matrix glass records of geochemical variability are poor or absent. Inter-eruption differences in olivine ITE systematics between Stapafell and Háleyjabunga mirror differences in melt

  3. Extracting DNA from 'jaws': high yield and quality from archived tiger shark (Galeocerdo cuvier) skeletal material.

    PubMed

    Nielsen, E E; Morgan, J A T; Maher, S L; Edson, J; Gauthier, M; Pepperell, J; Holmes, B J; Bennett, M B; Ovenden, J R

    2017-05-01

    Archived specimens are highly valuable sources of DNA for retrospective genetic/genomic analysis. However, often limited effort has been made to evaluate and optimize extraction methods, which may be crucial for downstream applications. Here, we assessed and optimized the usefulness of abundant archived skeletal material from sharks as a source of DNA for temporal genomic studies. Six different methods for DNA extraction, encompassing two different commercial kits and three different protocols, were applied to material, so-called bio-swarf, from contemporary and archived jaws and vertebrae of tiger sharks (Galeocerdo cuvier). Protocols were compared for DNA yield and quality using a qPCR approach. For jaw swarf, all methods provided relatively high DNA yield and quality, while large differences in yield between protocols were observed for vertebrae. Similar results were obtained from samples of white shark (Carcharodon carcharias). Application of the optimized methods to 38 museum and private angler trophy specimens dating back to 1912 yielded sufficient DNA for downstream genomic analysis for 68% of the samples. No clear relationships between age of samples, DNA quality and quantity were observed, likely reflecting different preparation and storage methods for the trophies. Trial sequencing of DNA capture genomic libraries using 20 000 baits revealed that a significant proportion of captured sequences were derived from tiger sharks. This study demonstrates that archived shark jaws and vertebrae are potential high-yield sources of DNA for genomic-scale analysis. It also highlights that even for similar tissue types, a careful evaluation of extraction protocols can vastly improve DNA yield. © 2016 John Wiley & Sons Ltd.

  4. Effective removal of hazardous trace metals from recovery boiler fly ashes.

    PubMed

    Kinnarinen, Teemu; Golmaei, Mohammad; Jernström, Eeva; Häkkinen, Antti

    2018-02-15

    The objective of this study is to introduce a treatment sequence enabling straightforward and effective recovery of hazardous trace elements from recovery boiler fly ash (RBFA) by a novel method, and to demonstrate the subsequent removal of Cl and K with existing crystallization technology. The treatment sequence comprises two stages: dissolution in water of most RBFA components other than the hazardous trace elements in Step 1, and crystallization of the process chemicals in Step 2. Solid-liquid separation has an important role in the treatment, due to the need first to separate the small solid residue containing the trace elements, and then to separate the valuable crystals, containing Na and S, from the liquid rich in Cl and K. According to the results, nearly complete recovery of cadmium, lead and zinc can be reached even without pH adjustment. Some other metals, such as Mg and Mn, are removed together with the hazardous metals. Regarding the removal of Cl and K from the process, in this non-optimized case the removal efficiency was satisfactory: 60-70% for K when 80% of sodium was recovered, and close to 70% for Cl when 80% of sulfate was recovered. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Lessons learned from planetary science archiving

    NASA Astrophysics Data System (ADS)

    Zender, J.; Grayzeck, E.

    2006-01-01

    The need for scientific archiving of past, current, and future planetary scientific missions, laboratory data, and modeling efforts is indisputable. To quote the words of G. Santayana carved over the entrance of the National Archives in Washington, DC: “Those who cannot remember the past are condemned to repeat it.” The design, implementation, maintenance, and validation of planetary science archives are, however, disputed by the involved parties. The inclusion of the archives in the scientific heritage is problematic. For example, there is an imbalance between space agency requirements and institutional and national interests. The disparity between long-term archive requirements and immediate data analysis requests is significant. The discrepancy between a space mission's archive budget and the effort required to design and build the data archive is large. An imbalance exists between new instrument development and existing, well-proven archive standards. The authors present their view on the problems and risk areas in the archiving concepts based on their experience acquired within NASA's Planetary Data System (PDS) and ESA's Planetary Science Archive (PSA). Individual risks and potential problem areas are discussed based on a model derived from a system analysis done upfront. The major risk for a planetary mission science archive is seen in the combination of minimal involvement by Mission Scientists and inadequate funding. The authors outline how the risks can be reduced. The paper ends with the authors' view on future planetary archive implementations, including the archive interoperability aspect.

  6. Dispersion model on PM₂.₅ fugitive dust and trace metals levels in Kuwait Governorates.

    PubMed

    Bu-Olayan, A H; Thomas, B V

    2012-03-01

    Frequent dust storms and recent environmental changes were found to affect human health, especially among residents of arid countries. Investigations of PM(2.5) fugitive dust in six Kuwait Governorate areas using Gaussian plume dispersion modeling revealed a significant relationship between low rates of pollutant emission, low wind velocity, and stable weather conditions, causing a higher rate of dust deposition in summer than in winter. The rate of dust deposition and trace metal levels in PM(2.5) were in the sequence G-VI > G-I > G-II > G-V > G-III > G-IV. Trace metals were observed in the sequence Al > Fe > Zn > Ni > Pb > Cd irrespective of the Governorate areas and the two seasons. The high rate of dust deposition and trace metals in PM(2.5) reflected the vast open areas, wind velocity, and rapid industrialization, besides natural and anthropogenic sources. A combination of air dispersion modeling and nephelometric and gravimetric studies of this kind not only provides seasonal qualitative and quantitative analyses of PM(2.5) dust deposition and trace metal apportionment in the six Kuwait Governorate areas, but also characterizes air pollution factors that environmentalists could use to devise preventive measures.

  7. The Rosetta Science Archive: Status and Plans for Enhancing the Archive Content

    NASA Astrophysics Data System (ADS)

    Heather, David; Barthelemy, Maud; Besse, Sebastien; Fraga, Diego; Grotheer, Emmanuel; O'Rourke, Laurence; Taylor, Matthew; Vallat, Claire

    2017-04-01

    On 30 September 2016, Rosetta completed its incredible mission by landing on the surface of Comet 67P/Churyumov-Gerasimenko. Although this marked an end to the spacecraft's active operations, intensive work is still ongoing with instrument teams preparing their final science data deliveries for ingestion into ESA's Planetary Science Archive (PSA). In addition, ESA is establishing contracts with some instrument teams to enhance their data and documentation in an effort to provide the best long-term archive possible for the Rosetta mission. Currently, the majority of teams have delivered all of their data from the nominal mission (end of 2015), and are working on their remaining increments from the 1-year mission extension. The aim is to complete the nominal archiving with data from the complete mission by the end of this year, when a full mission archive review will be held. This review will assess the complete data holdings from Rosetta and ensure that the archive is ready for the long-term. With the resources from the operational mission coming to an end, ESA has established a number of 'enhanced archiving' contracts to ensure that the best possible data are delivered to the archive before instrument teams disband. Updates are focused on key aspects of an instrument's calibration or the production of higher level data / information, and are therefore specific to each instrument's needs. These contracts are currently being kicked off, and will run for various lengths depending upon the activities to be undertaken. The full 'archive enhancement' process will run until September 2019, when the post operations activities for Rosetta will end. Within these contracts, most instrument teams will work on providing a Science User Guide for their data, as well as updating calibrations. Several teams will also be delivering higher level and derived products. For example, the VIRTIS team will be updating both their spectral and geometrical calibrations, and will aim to

  8. Archiving Derrida

    ERIC Educational Resources Information Center

    Morris, Marla

    2003-01-01

    Derrida's archive, broadly speaking, is brilliantly mad, for he digs exegetically into the most difficult textual material and combines the most unlikely texts--from Socrates to Freud, from postcards to encyclopedias, from madness(es) to the archive, from primal scenes to death. In this paper, the author would like to do a brief study of the…

  9. Social Media and Archives: A Survey of Archive Users

    ERIC Educational Resources Information Center

    Washburn, Bruce; Eckert, Ellen; Proffitt, Merrilee

    2013-01-01

    In April and May of 2012, the Online Computer Library Center (OCLC) Research conducted a survey of users of archives to learn more about their habits and preferences. In particular, they focused on the roles that social media, recommendations, reviews, and other forms of user-contributed annotation play in archival research. OCLC surveyed faculty,…

  10. Status of worldwide Landsat archive

    USGS Publications Warehouse

    Warriner, Howard W.

    1987-01-01

    In cooperation with the international Landsat community, and through the Landsat Technical Working Group (LTWG), NOAA is assembling information about the status of the Worldwide Landsat Archive. During LTWG 9, member nations agreed to participate in a survey of international Landsat data holdings and of their archive experiences with Landsat data. The goal of the effort was twofold: first, to document the Landsat archive to date, and second, to ensure that individual nations' experience with long-term Landsat archival problems was available to others. The survey requested details such as the amount of data held; the format of the archive holdings by spacecraft/sensor and acquisition years; the estimated costs to accumulate, process, and replace the data (if necessary); the storage space required; and any member nation's plans for ensuring continuing quality. As a group, the LTWG nations are concerned about the characteristics and reliability of long-term magnetic media storage. Each nation's experience with older data retrieval is solicited in the survey. This information will allow nations to anticipate and plan for required changes to their archival holdings. Also solicited were reports of any upgrades to a nation's archival system that are currently planned, and all results of attempts to reduce archive holdings, including methodology, current status, and the access rates and product support anticipated for future archival usage.

  11. The GTC Public Archive

    NASA Astrophysics Data System (ADS)

    Alacid, J. Manuel; Solano, Enrique

    2015-12-01

    The Gran Telescopio Canarias (GTC) archive has been operational since November 2011. The archive, maintained by the Data Archive Unit at CAB in the framework of the Spanish Virtual Observatory project, provides access to both raw and science-ready data and has been designed in compliance with the standards defined by the International Virtual Observatory Alliance (IVOA) to guarantee a high level of data accessibility and handling. In this presentation I will describe the main capabilities the GTC archive offers to the community, in terms of functionality and data collections, for efficient scientific exploitation of GTC data.

  12. FDSTools: A software package for analysis of massively parallel sequencing data with the ability to recognise and correct STR stutter and other PCR or sequencing noise.

    PubMed

    Hoogenboom, Jerry; van der Gaag, Kristiaan J; de Leeuw, Rick H; Sijen, Titia; de Knijff, Peter; Laros, Jeroen F J

    2017-03-01

    Massively parallel sequencing (MPS) is on the verge of broad-scale application in forensic research and casework. The improved capability to analyse evidentiary traces representing unbalanced mixtures is often mentioned as one of the major advantages of this technique. However, most of the available software packages that analyse forensic short tandem repeat (STR) sequencing data are not well suited for high-throughput analysis of such mixed traces. The largest challenge is the presence of stutter artefacts in STR amplifications, which are not readily discerned from minor contributions. FDSTools is an open-source software solution developed for this purpose. The level of stutter formation is influenced by various aspects of the sequence, such as the length of the longest uninterrupted stretch occurring in an STR. When MPS is used, STRs are evaluated as sequence variants that each have particular stutter characteristics which can be precisely determined. FDSTools uses a database of reference samples to determine stutter and other systemic PCR or sequencing artefacts for each individual allele. In addition, stutter models are created for each repeating element in order to predict stutter artefacts for alleles that are not included in the reference set. This information is subsequently used to recognise and compensate for the noise in a sequence profile. The result is a better representation of the true composition of a sample. Using Promega Powerseq™ Auto System data from 450 reference samples and 31 two-person mixtures, we show that the FDSTools correction module decreases stutter ratios above 20% to below 3%. Consequently, much lower levels of contributions in the mixed traces are detected. FDSTools contains modules to visualise the data in an interactive format, allowing users to filter data with their own preferred thresholds. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
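    The correction principle the abstract describes (predicting each allele's stutter from its parent allele and subtracting it from the observed reads) can be sketched as follows. This is a deliberately simplified, constant-ratio illustration with hypothetical names; FDSTools itself fits per-allele ratios and per-repeat-element stutter models from its reference-sample database:

    ```python
    def correct_stutter(read_counts, stutter_ratio):
        """Subtract predicted minus-one-repeat stutter from observed reads.

        read_counts maps repeat number -> observed read count; stutter_ratio
        is the fraction of a parent allele's reads expected to appear one
        repeat shorter.  A single constant ratio is assumed here purely for
        illustration; FDSTools derives per-allele ratios from reference data.
        """
        corrected = {}
        for allele, count in read_counts.items():
            parent = read_counts.get(allele + 1, 0)  # stutter source: the n+1 allele
            corrected[allele] = max(0, count - stutter_ratio * parent)
        return corrected

    # With a 10% stutter ratio, allele 12's 1000 reads are predicted to
    # contribute about 100 stutter reads at allele 11, so most of allele 11's
    # 130 observed reads are explained away as noise.
    corrected = correct_stutter({12: 1000, 11: 130, 10: 5}, 0.10)
    ```

    In the real tool this per-allele subtraction is what allows genuine minor-contributor alleles to stand out once the predicted stutter is removed.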

  13. The proactive historian: Methodological opportunities presented by the new archives documenting genomics.

    PubMed

    García-Sancho, Miguel

    2016-02-01

    In this paper, I propose a strategy for navigating newly available archives in the study of late-twentieth century genomics. I demonstrate that the alleged 'explosion of data' characteristic of genomics-and of contemporary science in general-is not a new problem and that historians of earlier periods have dealt with information overload by relying on the 'perspective of time': the filtering effect the passage of time naturally exerts on both sources and memories. I argue that this reliance on the selective capacity of time results in inheriting archives curated by others and, consequently, poses the risk of reifying ahistorical scientific discourses. Through a preliminary examination of archives documenting early attempts at mapping and sequencing the human genome, I propose an alternative approach, in which historians proactively problematize and improve available sources. This approach provides historians with a voice in the socio-political management of scientific heritage and advances methodological innovations in the use of oral histories. It also provides a narrative framework in which to address big science initiatives by following second order administrators, rather than individual scientists. The new genomic archives thus represent an opportunity for historians to take an active role in current debates concerning 'big data' and critically embed the humanities in pressing global problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Seasonality Records From Stable Isotopes and Trace Elements in Mussel and Limpet Shells From Archaeological Sites on Gibraltar

    NASA Astrophysics Data System (ADS)

    Fa, D.; Ferguson, J. E.; Atkinson, T. C.; Barton, R. N.; Ditchfield, P.; Finlayson, G.; Finlayson, J. C.; Henderson, G. M.

    2007-12-01

    Seasonal resolution climate records from mid and high latitudes would allow investigation of the role of seasonality in controlling mean climate on diverse timescales, and of the evolution of climate systems such as the North Atlantic Oscillation (NAO). But achieving such seasonal resolution is difficult for regions outside the growth range of surface corals. Marine mollusc shells provide a possible archive and contain growth increments varying in scale from tidal to annual. However, finding and dating sequences of marine mollusc shells spanning long periods of time is difficult due to sea-level change and the destructive nature of most coastal environments. In this study, we have made use of the habit of hominins on Gibraltar of collecting molluscs for food over at least the last 120 kyr. In archaeological excavations of two caves (Gorham's and Vanguard Caves), mollusc shells were found in habitation levels and in sediment blown into the caves. Existing 14C, OSL, and U-series chronologies provide a chronological framework for this suite of samples. The species found are predominantly Mytilus (mussels) or Patella (limpets). Gibraltar is an interesting location for paleoclimate reconstruction due to its proximity to the boundary of modern-day climate belts but also due to its anthropological and archaeological importance. To gain a quantitative understanding of the local controls on stable isotopes and trace elements within Gibraltarian shells, we have initiated a water-sampling programme; emplaced a temperature and salinity logger near the sampling site; and marked live Patella and Mytilus with fluorescent dye to firmly establish growth rates and controls on chemical composition. We have also conducted stable-isotope and trace-element analysis of modern and fossil Patella and Mytilus shells by micromilling. 
Recent Patella and Mytilus shells show that the oxygen isotope composition of modern shells allow the accurate reconstruction of the full seasonal range in sea

  15. Archive of digital boomer and CHIRP seismic reflection data collected during USGS cruise 06FSH03 offshore of Fort Lauderdale, Florida, September 2006

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Reich, Christopher D.; Wiese, Dana S.; Greenwood, Jason W.; Swarzenski, Peter W.

    2007-01-01

    In September of 2006, the U.S. Geological Survey conducted geophysical surveys offshore of Fort Lauderdale, FL. This report serves as an archive of unprocessed digital boomer and CHIRP seismic reflection data, trackline maps, navigation files, GIS information, Field Activity Collection System (FACS) logs, observer's logbook, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
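    As a small illustration of the archived format, the SEG-Y standard places a 400-byte binary header after the 3200-byte textual header, with a few key fields at fixed big-endian offsets. The sketch below (function name is ours) reads three of those standard fields with nothing but the Python standard library; real processing of the archived trace data would use Seismic Unix or comparable software, as the report notes:

    ```python
    import struct

    def read_segy_binary_header(path):
        """Read three standard fields from a SEG-Y file's binary header.

        Per the SEG-Y rev 1 standard, the 400-byte binary header follows the
        3200-byte textual header; the sample interval (microseconds) sits at
        bytes 3217-3218, samples per trace at 3221-3222, and the data sample
        format code at 3225-3226, all big-endian 16-bit integers.
        """
        with open(path, "rb") as f:
            f.seek(3216)          # skip textual header + first 16 binary-header bytes
            buf = f.read(10)      # covers bytes 3217-3226
        interval_us, _, n_samples, _, fmt = struct.unpack(">5h", buf)
        return {"sample_interval_us": interval_us,
                "samples_per_trace": n_samples,
                "format_code": fmt}
    ```

    Calling `read_segy_binary_header("line1.sgy")` on one of the archived files would report its sample interval, trace length, and sample format before any heavier processing is attempted.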

  16. Cassini Archive Tracking System

    NASA Technical Reports Server (NTRS)

    Conner, Diane; Sayfi, Elias; Tinio, Adrian

    2006-01-01

    The Cassini Archive Tracking System (CATS) is a computer program that enables tracking of scientific data transfers from originators to the Planetary Data System (PDS) archives. Without CATS, there is no systematic means of locating products in the archive process or ensuring their completeness. By keeping a database of transfer communications and status, CATS enables the Cassini Project and the PDS to efficiently and accurately report on archive status. More importantly, problem areas are easily identified through customized reports that can be generated on the fly from any Web-enabled computer. A Web-browser interface and clearly defined authorization scheme provide safe distributed access to the system, where users can perform functions such as create customized reports, record a transfer, and respond to a transfer. CATS ensures that Cassini provides complete science archives to the PDS on schedule and that those archives are available to the science community by the PDS. The three-tier architecture is loosely coupled and designed for simple adaptation to multimission use. Written in the Java programming language, it is portable and can be run on any Java-enabled Web server.

  17. Fermilab History and Archives Project | Norman F. Ramsey

    Science.gov Websites

    Fermilab History and Archives Project: Norman F. Ramsey.

  18. Parametric Trace Slicing

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore (Inventor); Chen, Feng (Inventor); Chen, Guo-fang; Wu, Yamei; Meredith, Patrick O. (Inventor)

    2014-01-01

    A program trace is obtained and events of the program trace are traversed. For each event identified in traversing the program trace, a trace slice of which the identified event is a part is identified based on the parameter instance of the identified event. For each trace slice of which the identified event is a part, the identified event is added to the end of a record of the trace slice. These parametric trace slices can be used in a variety of ways, such as monitoring, mining, and prediction.
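    The dispatch loop described above (traverse events, find the slice for each event's parameter instance, append the event to that slice's record) can be sketched as follows. This is a simplified illustration, assuming each event carries one complete parameter binding; the patented algorithm also dispatches events whose partial bindings are compatible with several slices:

    ```python
    from collections import defaultdict

    def slice_trace(events):
        """Split a parametric trace into slices keyed by parameter instance.

        events is a list of (event_name, parameter_instance) pairs, where a
        parameter instance is a hashable binding such as
        frozenset({("iter", "i1")}).  Each event is appended to the end of
        the record of the slice matching its binding.
        """
        slices = defaultdict(list)
        for name, binding in events:
            slices[binding].append(name)
        return dict(slices)

    # Two iterators interleaved in one trace; slicing untangles them.
    trace = [
        ("create", frozenset({("iter", "i1")})),
        ("create", frozenset({("iter", "i2")})),
        ("next",   frozenset({("iter", "i1")})),
        ("next",   frozenset({("iter", "i2")})),
        ("next",   frozenset({("iter", "i1")})),
    ]
    slices = slice_trace(trace)
    ```

    Each per-instance slice can then be fed independently to a monitor or a pattern miner, which is the point of the technique.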

  19. Strain-specific and pooled genome sequences for populations of Drosophila melanogaster from three continents.

    PubMed

    Bergman, Casey M; Haddrill, Penelope R

    2015-01-01

    To contribute to our general understanding of the evolutionary forces that shape variation in genome sequences in nature, we have sequenced genomes from 50 isofemale lines and six pooled samples from populations of Drosophila melanogaster on three continents. Analysis of raw and reference-mapped reads indicates the quality of these genomic sequence data is very high. Comparison of the predicted and experimentally-determined Wolbachia infection status of these samples suggests that strain or sample swaps are unlikely to have occurred in the generation of these data. Genome sequences are freely available in the European Nucleotide Archive under accession ERP009059. Isofemale lines can be obtained from the Drosophila Species Stock Center.

  20. Ethics and Truth in Archival Research

    ERIC Educational Resources Information Center

    Tesar, Marek

    2015-01-01

    The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…

  1. A MySQL Based EPICS Archiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher Slominski

    2009-10-01

    Archiving a large fraction of the EPICS signals within the Jefferson Lab (JLAB) Accelerator control system is vital for postmortem and real-time analysis of the accelerator performance. This analysis is performed on a daily basis by scientists, operators, engineers, technicians, and software developers. Archiving poses unique challenges due to the magnitude of the control system. A MySQL Archiving system (Mya) was developed to scale to the needs of the control system; it currently archives 58,000 EPICS variables, updating at a rate of 11,000 events per second. In addition to the large collection rate, retrieval of the archived data must also be fast and robust. Archived data retrieval clients obtain data at a rate of over 100,000 data points per second. Managing the data in a relational database provides a number of benefits. This paper describes an archiving solution that uses an open-source database and standard off-the-shelf hardware to meet high-performance archiving needs. Mya has been in production at Jefferson Lab since February of 2007.

  2. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high-performance RISC microprocessor-based workstations, which are in turn being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large-scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even when some tape drives fail, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. 
Background migration transfers cassettes between element archivers to redistribute frequently accessed
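    As an illustration of the redistribution idea (not the authors' exact algorithm), a background-migration planner might compute an even per-archiver target for hot cassettes, treat archivers with failed drives as having a target of zero, and emit one move per cassette until the targets are met. All names here are hypothetical:

    ```python
    def plan_background_migration(hot_counts, live):
        """Plan single-cassette moves so hot cassettes end up spread evenly
        over the element archivers that still have working tape drives.

        hot_counts: dict archiver -> hot cassettes currently held.
        live: non-empty set of archivers with at least one working drive.
        Returns a list of (source, destination) moves.
        """
        counts = dict(hot_counts)
        base, rem = divmod(sum(counts.values()), len(live))
        # Even target per live archiver; archivers with dead drives get zero.
        targets = {a: 0 for a in counts}
        for i, a in enumerate(sorted(live)):
            targets[a] = base + (1 if i < rem else 0)
        moves = []
        for dst in sorted(live):
            while counts[dst] < targets[dst]:
                src = next(a for a in sorted(counts) if counts[a] > targets[a])
                counts[src] -= 1
                counts[dst] += 1
                moves.append((src, dst))
        return moves

    # Archiver C's drives have failed, so its hot cassettes migrate away.
    moves = plan_background_migration({"A": 4, "B": 0, "C": 2}, live={"A", "B"})
    ```

    The paper's scheme additionally weighs access frequency and performs the moves in the background so that foreground tape requests are not delayed; this sketch only shows the balancing target.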

  3. Major and trace element abundances in volcanic rocks of orogenic areas.

    NASA Technical Reports Server (NTRS)

    Jakes, P.; White, A. J. R.

    1972-01-01

    The composition of recent island-arc volcanic rocks in relation to their geographic and stratigraphic relations is discussed. The differences in composition between volcanic rocks and those in continental margins are pointed out. Trace elements and major elements are shown to suggest a continuous gradational sequence from tholeiites through calc-alkaline rocks to shoshonites.

  4. The Ethics of Archival Research

    ERIC Educational Resources Information Center

    McKee, Heidi A.; Porter, James E.

    2012-01-01

    What are the key ethical issues involved in conducting archival research? Based on examination of cases and interviews with leading archival researchers in composition, this article discusses several ethical questions and offers a heuristic to guide ethical decision making. Key to this process is recognizing the person-ness of archival materials.…

  5. Complementary concept for an image archive and communication system in a cardiological department based on CD-medical, an online archive, and networking facilities

    NASA Astrophysics Data System (ADS)

    Oswald, Helmut; Mueller-Jones, Kay; Builtjes, Jan; Fleck, Eckart

    1998-07-01

    The developments in information technologies -- computer hardware, networking, and storage media -- have led to expectations that these advances make it possible to replace 35-mm film completely with digital techniques in the catheter laboratory. Besides its role as an archival medium, cine film is used as the major image review and exchange medium in cardiology. None of today's technologies can completely fulfill the requirements for replacing cine film. One of the major drawbacks of cine film is that it allows access at only a single time and location. For the four catheter laboratories in our institutions we have designed a complementary concept combining the CD-R, also called CD-medical, as a single-patient storage and exchange medium, and a digital archive for on-line access and image review of selected frames or short sequences on adequate medical workstations. The image data from various modalities, as well as all digital documents relating to a patient, are part of an electronic patient record. Access, processing, and display of documents are supported by an integrated medical application.

  6. Earth observation archive activities at DRA Farnborough

    NASA Technical Reports Server (NTRS)

    Palmer, M. D.; Williams, J. M.

    1993-01-01

    Space Sector, Defence Research Agency (DRA), Farnborough have been actively involved in the acquisition and processing of Earth Observation data for over 15 years. During that time an archive of over 20,000 items has been built up. This paper describes the major archive activities, including: operation and maintenance of the main DRA Archive, the development of a prototype Optical Disc Archive System (ODAS), the catalog systems in use at DRA, the UK Processing and Archive Facility for ERS-1 data, and future plans for archiving activities.

  7. Archive and records management-Fiscal year 2010 offline archive media trade study

    USGS Publications Warehouse

    Bodoh, Tom; Boettcher, Ken; Gacke, Ken; Greenhagen, Cheryl; Engelbrecht, Al

    2010-01-01

    This document is a trade study comparing offline digital archive storage technologies. The document compares and assesses several technologies and recommends which technologies could be deployed as the next generation standard for the U.S. Geological Survey (USGS). Archives must regularly migrate to the next generation of digital archive technology, and the technology selected must maintain data integrity until the next migration. This document is the fiscal year 2010 (FY10) revision of a study completed in FY01 and revised in FY03, FY04, FY06, and FY08.

  8. Dynamic Data Management Based on Archival Process Integration at the Centre for Environmental Data Archival

    NASA Astrophysics Data System (ADS)

    Conway, Esther; Waterfall, Alison; Pepler, Sam; Newey, Charles

    2015-04-01

    In this paper we describe a business process modelling approach to the integration of existing archival activities. We provide a high-level overview of existing practice and discuss how procedures can be extended and supported through the description of preservation state. The aim is to facilitate the dynamic, controlled management of scientific data through its lifecycle. The main types of archival processes considered are: • Management processes that govern the operation of an archive. These include archival governance (preservation state management, selection of archival candidates, and strategic management). • Operational processes that constitute the core activities of the archive and maintain the value of research assets. These are the acquisition, ingestion, deletion, generation of metadata, and preservation activities. • Supporting processes, which include planning, risk analysis, and monitoring of the community/preservation environment. We then proceed by describing the feasibility testing of extended risk management and planning procedures which integrate current practices. This was done through the CEDA Archival Format Audit, which inspected the British Atmospheric Data Centre and National Earth Observation Data Centre archival holdings. These holdings are extensive, comprising around 2 PB of data and 137 million individual files, which were analysed and characterised in terms of format-based risk. We are then able to present an overview of the risk burden faced by a large-scale archive attempting to maintain the usability of heterogeneous environmental data sets. We conclude by presenting a dynamic data management information model that is capable of describing the preservation state of archival holdings throughout the data lifecycle. We provide discussion of the following core model entities and their relationships: • Aspirational entities, which include Data Entity definitions and their associated

  9. Trace fossils and sedimentology of a Late Cretaceous Progradational Barrier Island sequence: Bearpaw and Horseshoe Canyon Formations, Dorothy, Alberta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, T.D.; Pemberton, A.G.; Ranger, M.J.

    A well-exposed example of a regressive barrier island succession crops out in the Alberta badlands along the Red Deer River Valley. In the most landward (northwestern) corner of the study area, only shallow-water and subaerial deposits are represented, and these are dominated by tidal-inlet-related facies. Seaward (southeast), water depth increases and the succession is typified by open-marine beach to offshore-related facies arranged in a coarsening-upward progradational sequence. Detailed sedimentologic and ichnologic analyses of this sequence have allowed for its division into three distinct environmental zones (lower, middle, and upper). The lower zone comprises a laterally diverse assemblage of storm-influenced, lower shoreface through offshore deposits. Outcrop in the northeast is dominated by thick beds of hummocky and/or swaley cross-stratified storm sand. In the southeast, storm events have only minor influence. This lower zone contains a wide diversity of well-preserved trace fossils whose distribution appears to have been influenced by gradients in wave energy, bottom stagnation, and the interplay of storm and fair-weather processes. The middle zone records deposition across an upper shoreface environment. Here, horizontal to low-angle bedding predominates, with interspersed sets of small- and large-scale cross-bedding increasing toward the top. A characteristic feature of the upper part of this zone is the lack of biogenic structures, suggesting deposition in an exposed high-energy surf zone. The upper zone records intertidal to supratidal progradation of the shoreline complex. Planar-laminated sandstone forms a distinct foreshore interval, above which rhizoliths and organic material become increasingly abundant, marking the transition to the backshore. A significant feature of this zone is the occurrence of an intensely bioturbated interval toward the top of the foreshore.

  10. Geoseq: a tool for dissecting deep-sequencing datasets.

    PubMed

    Gurtowski, James; Cancio, Anthony; Shah, Hardik; Levovitz, Chaya; George, Ajish; Homann, Robert; Sachidanandam, Ravi

    2010-10-12

    Datasets generated on deep-sequencing platforms have been deposited in various public repositories such as the Gene Expression Omnibus (GEO) and Sequence Read Archive (SRA) hosted by the NCBI, and the DNA Data Bank of Japan (DDBJ). Despite being rich data sources, they have not been used much due to the difficulty in locating and analyzing datasets of interest. Geoseq (http://geoseq.mssm.edu) provides a new method of analyzing short reads from deep-sequencing experiments. Instead of mapping the reads to reference genomes or sequences, Geoseq maps a reference sequence against the sequencing data. It is web-based, and holds pre-computed data from public libraries. The analysis reduces the input sequence to tiles and measures the coverage of each tile in a sequence library through the use of suffix arrays. The user can upload custom target sequences or use gene/miRNA names for the search and get back results as plots and spreadsheet files. Geoseq organizes the public sequencing data using a controlled vocabulary, allowing identification of relevant libraries by organism, tissue and type of experiment. Analysis of small sets of sequences against deep-sequencing datasets, as well as identification of public datasets of interest, is simplified by Geoseq. We applied Geoseq to (a) identify differential isoform expression in mRNA-seq datasets, (b) identify miRNAs (microRNAs) in libraries, and identify mature and star sequences in miRNAs, and (c) identify potentially mis-annotated miRNAs. The ease of using Geoseq for these analyses suggests its utility and uniqueness as an analysis tool.
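
    The tiling-and-coverage idea described above can be sketched in a few lines. This is a toy simplification of my own, not Geoseq's implementation: the real tool queries suffix arrays over pre-indexed libraries, while here each tile is matched against a small in-memory read set by exact substring search.

    ```python
    def tile_coverage(reference, reads, tile_len=4):
        """Split `reference` into overlapping tiles and count how many
        reads contain each tile (exact substring match)."""
        tiles = [reference[i:i + tile_len]
                 for i in range(len(reference) - tile_len + 1)]
        return {t: sum(t in r for r in reads) for t in tiles}

    reads = ["ACGTAC", "CGTACG", "TTTTAC"]
    # Each 4-mer of the reference is mapped to its read count.
    cov = tile_coverage("ACGTA", reads, tile_len=4)
    ```

    A suffix array replaces the `t in r` scan with a binary search over all suffixes of the library, which is what makes the approach viable for full sequencing runs.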

  11. Chemically-dissected Rotation Curves of the Galactic Bulge from Hubble Space Telescope Proper Motions on the Main Sequence

    NASA Astrophysics Data System (ADS)

    Clarkson, William I.; Calamida, Annalisa; Sahu, Kailash C.; Gennaro, Mario; Brown, Thomas M.; Avila, Roberto J.; Rich, R. Michael; Debattista, Victor P.

    2018-01-01

    We report results from a pilot study using archival Hubble Space Telescope imaging observations in seven filters over a multi-year time-baseline to probe the co-dependence of chemical abundance and kinematics, using proper motion-based rotation curves selected on relative metallicity. With spectroscopic studies suggesting the metallicity distribution of the Bulge may be bimodal, we follow a data-driven approach to classify stars as belonging to metal-rich or metal-poor ends of the observed relative photometric metallicity distribution, with classification implemented using standard unsupervised learning techniques. We detect clear differences in both slope and amplitude of the proper motion-based rotation curve as traced by the more “metal-rich” and “metal-poor” samples. The sense of the discrepancy is qualitatively in agreement both with recent observational and theoretical indications; the “metal-poor” sample does indeed show a weaker rotation signature. This is the first study to dissect the proper motion rotation curve of the Bulge by chemical abundance using main-sequence targets, which are orders of magnitude more common on the sky than bright giants. These techniques thus offer a pencil-beam complement to wide-field studies that use more traditional tracer populations.

  12. Strain-specific and pooled genome sequences for populations of Drosophila melanogaster from three continents.

    PubMed Central

    Bergman, Casey M.; Haddrill, Penelope R.

    2015-01-01

    To contribute to our general understanding of the evolutionary forces that shape variation in genome sequences in nature, we have sequenced genomes from 50 isofemale lines and six pooled samples from populations of Drosophila melanogaster on three continents. Analysis of raw and reference-mapped reads indicates the quality of these genomic sequence data is very high. Comparison of the predicted and experimentally-determined Wolbachia infection status of these samples suggests that strain or sample swaps are unlikely to have occurred in the generation of these data. Genome sequences are freely available in the European Nucleotide Archive under accession ERP009059. Isofemale lines can be obtained from the Drosophila Species Stock Center. PMID:25717372

  13. NASA's Astrophysics Data Archives

    NASA Astrophysics Data System (ADS)

    Hasan, H.; Hanisch, R.; Bredekamp, J.

    2000-09-01

    The NASA Office of Space Science has established a series of archival centers where science data acquired through its space science missions are deposited. The availability of high-quality data to the general public through these open archives enables the maximization of the science return of the flight missions. The Astrophysics Data Centers Coordinating Council, an informal collaboration of archival centers, coordinates data from five archival centers distinguished primarily by the wavelength range of the data deposited there. Data are available in FITS format. An overview of NASA's data centers and services is presented in this paper. A standard front-end interface called `Astrobrowse' is described. Other catalog browsers and tools include WISARD and AMASE, supported by the National Space Science Data Center, as well as ISAIA, a follow-on to Astrobrowse.

  14. SODA: Smart Objects, Dumb Archives

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maly, Kurt; Zubair, Mohammad; Shen, Stewart N. T.

    2004-01-01

    We present the Smart Object, Dumb Archive (SODA) model for digital libraries (DLs). The SODA model transfers functionality traditionally associated with archives to the archived objects themselves. We are exploiting this shift of responsibility to facilitate other DL goals, such as interoperability, object intelligence and mobility, and heterogeneity. Objects in a SODA DL negotiate presentation of content and handle their own terms and conditions. In this paper we present implementations of our smart objects, buckets, and our dumb archive (DA). We discuss the status of buckets and DA and how they are used in a variety of DL projects.

  15. Trace elements in fish from Taihu Lake, China: levels, associated risks, and trophic transfer.

    PubMed

    Hao, Ying; Chen, Liang; Zhang, Xiaolan; Zhang, Dongping; Zhang, Xinyu; Yu, Yingxin; Fu, Jiamo

    2013-04-01

    Concentrations of eight trace elements [iron (Fe), manganese (Mn), zinc (Zn), chromium (Cr), mercury (Hg), cadmium (Cd), lead (Pb), and arsenic (As)] were measured in a total of 198 samples covering 24 fish species collected from Taihu Lake, China, in September 2009. The trace elements were detected in all samples, and the total mean concentrations ranged from 18.2 to 215.8 μg/g dw (dry weight). The concentrations of the trace elements followed the sequence Zn>Fe>Mn>Cr>As>Hg>Pb>Cd. The measured trace element concentrations in fish from Taihu Lake were similar to or lower than the values reported in fish around the world. The metal pollution index was used to compare the total trace element accumulation levels among various species. Toxabramis swinhonis (1.606) accumulated the highest level of the total trace elements, and Saurogobio dabryi (0.315) contained the lowest. The concentrations of human non-essential trace elements (Hg, Cd, Pb, and As) were lower than the allowable maximum levels in fish in China and the European Union. The relationships between the trace element concentrations and the δ(15)N values of fish species were used to investigate the trophic transfer potential of the trace elements. Of the trace elements, Hg might be biomagnified through the food chain in Taihu Lake if the significance level was set at p = 0.1. No biomagnification or biodilution was observed for the other trace elements. Copyright © 2012 Elsevier Inc. All rights reserved.
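
    The abstract does not state which formulation of the metal pollution index it used; a commonly cited definition (Usero et al.) is the geometric mean of the measured concentrations across the n elements. A minimal sketch, assuming that definition and purely illustrative concentration values:

    ```python
    import math

    def metal_pollution_index(concentrations):
        """Geometric mean of per-element concentrations:
        MPI = (C1 * C2 * ... * Cn) ** (1/n)."""
        n = len(concentrations)
        return math.prod(concentrations) ** (1.0 / n)

    # Toy values in μg/g dw, NOT taken from the study above.
    mpi = metal_pollution_index([4.0, 1.0, 2.0])
    ```

    The geometric mean keeps a single very abundant element (here, Zn or Fe) from dominating the species-to-species comparison the way an arithmetic mean would.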

  16. Tracing Melioidosis Back to the Source: Using Whole-Genome Sequencing To Investigate an Outbreak Originating from a Contaminated Domestic Water Supply

    PubMed Central

    McRobb, Evan; Kaestli, Mirjam; Mayo, Mark; Keim, Paul

    2015-01-01

    Melioidosis, a disease of public health importance in Southeast Asia and northern Australia, is caused by the Gram-negative soil bacillus Burkholderia pseudomallei. Melioidosis is typically acquired through environmental exposure, and case clusters are rare, even in regions where the disease is endemic. B. pseudomallei is classed as a tier 1 select agent by the Centers for Disease Control and Prevention; from a biodefense perspective, source attribution is vital in an outbreak scenario to rule out a deliberate release. Two cases of melioidosis within a 3-month period at a residence in rural northern Australia prompted an investigation to determine the source of exposure. B. pseudomallei isolates from the property's groundwater supply matched the multilocus sequence type of the clinical isolates. Whole-genome sequencing confirmed the water supply as the probable source of infection in both cases, with the clinical isolates differing from the likely infecting environmental strain by just one single nucleotide polymorphism (SNP) each. For the first time, we report a phylogenetic analysis of genomewide insertion/deletion (indel) data, an approach conventionally viewed as problematic due to high mutation rates and homoplasy. Our whole-genome indel analysis was concordant with the SNP phylogeny, and these two combined data sets provided greater resolution and a better fit with our epidemiological chronology of events. Collectively, this investigation represents a highly accurate account of source attribution in a melioidosis outbreak and gives further insight into a frequently overlooked reservoir of B. pseudomallei. Our methods and findings have important implications for outbreak source tracing of this bacterium and other highly recombinogenic pathogens. PMID:25631791

  17. Archives: New Horizons in Astronomy

    NASA Astrophysics Data System (ADS)

    Bobis, L.; Laurenceau, A.

    2010-10-01

    The scientific archives in the Paris Observatory's library date back to the XVIIth century. In addition to the preservation and the valorisation of these historic archives, the library is also responsible for the efficient and timely management of contemporary documents to ensure their optimum conservation and identification once they become historical. Oral, iconographic and electronic documents complement these paper archives.

  18. Mining dynamic noteworthy functions in software execution sequences

    PubMed Central

    Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong

    2017-01-01

    As the quality of crucial entities can directly affect that of software, their identification and protection become an important premise for effective software development, management, maintenance and testing, which thus contribute to improving software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, are divorced from the actual running of the software. In this paper, from the perspective of the software execution process, we propose an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, through software decompiling and the tracking of stack changes, execution traces composed of a series of function addresses are acquired. These traces are modeled as execution sequences and then simplified to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS with a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance are designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, the functions are sorted by their noteworthiness. The experimental results were compared with those of two traditional complex network-based node mining methods, PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely. PMID:28278276

  19. Mining dynamic noteworthy functions in software execution sequences.

    PubMed

    Zhang, Bing; Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong

    2017-01-01

    As the quality of crucial entities can directly affect that of software, their identification and protection become an important premise for effective software development, management, maintenance and testing, which thus contribute to improving software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, are divorced from the actual running of the software. In this paper, from the perspective of the software execution process, we propose an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, through software decompiling and the tracking of stack changes, execution traces composed of a series of function addresses are acquired. These traces are modeled as execution sequences and then simplified to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS with a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance are designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, the functions are sorted by their noteworthiness. The experimental results were compared with those of two traditional complex network-based node mining methods, PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely.
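
    The abstract names the two indicators but not their formulas, so the sketch below is a hypothetical stand-in for the scoring step only: inner-importance is taken as a function's mean within-trace call frequency and inter-importance as the fraction of traces that call it, with the product used as the noteworthiness score.

    ```python
    def rank_functions(traces):
        """Rank functions by a toy noteworthiness score: (mean
        within-trace frequency) * (fraction of traces containing it)."""
        funcs = {f for t in traces for f in t}
        scores = {}
        for f in funcs:
            inner = sum(t.count(f) / len(t) for t in traces) / len(traces)
            inter = sum(f in t for t in traces) / len(traces)
            scores[f] = inner * inter
        return sorted(scores, key=scores.get, reverse=True)

    traces = [["main", "parse", "parse", "emit"],
              ["main", "emit"]]
    # "main" and "emit" appear in every trace, so they outrank "parse".
    ranking = rank_functions(traces)
    ```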

  20. Community archiving of imaging studies

    NASA Astrophysics Data System (ADS)

    Fritz, Steven L.; Roys, Steven R.; Munjal, Sunita

    1996-05-01

    The quantity of image data created in a large radiology practice has long been a challenge for available archiving technology. Traditional methods of archiving the large quantity of films generated in radiology have relied on warehousing in remote sites, with courier delivery of film files for historical comparisons. A digital community archive, accessible via a wide area network, represents a feasible solution to the problem of archiving digital images from a busy practice. In addition, it affords a physician caring for a patient access to imaging studies performed at a variety of healthcare institutions without the need to repeat studies. Security problems include both network security issues in the WAN environment and access control for patient, physician and imaging center. The key obstacle to developing a community archive is currently political. Reluctance to participate in a community archive can be reduced by appropriate design of the access mechanisms.

  1. Sequencing of bimaxillary surgery in the correction of vertical maxillary excess: retrospective study.

    PubMed

    Salmen, F S; de Oliveira, T F M; Gabrielli, M A C; Pereira Filho, V A; Real Gabrielli, M F

    2018-06-01

    The aim of this study was to evaluate the precision of bimaxillary surgery performed to correct vertical maxillary excess, when the procedure is sequenced with mandibular surgery first or maxillary surgery first. Thirty-two patients, divided into two groups, were included in this retrospective study. Group 1 comprised patients who received bimaxillary surgery following the classical sequence with repositioning of the maxilla first. Patients in group 2 received bimaxillary surgery, but the mandible was operated on first. The precision of the maxillomandibular repositioning was determined by comparison of the digital prediction and postoperative tracings superimposed on the cranial base. The data were tabulated and analyzed statistically. In this sample, both surgical sequences provided adequate clinical accuracy. The classical sequence, repositioning the maxilla first, resulted in greater accuracy for A-point and the upper incisor edge vertical position. Repositioning the mandible first allowed greater precision in the vertical position of pogonion. In conclusion, although both surgical sequences may be used, repositioning the mandible first will result in greater imprecision in relation to the predictive tracing than repositioning the maxilla first. The classical sequence resulted in greater accuracy in the vertical position of the maxilla, which is key for aesthetics. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. The COROT Archive at LAEFF

    NASA Astrophysics Data System (ADS)

    Velasco, Almudena; Gutiérrez, Raúl; Solano, Enrique; García-Torres, Miguel; López, Mauro; Sarro, Luis Manuel

    We describe here the main capabilities of the COROT archive. The archive (http://sdc.laeff.inta.es/corotfa/jsp/searchform.jsp), managed at LAEFF in the framework of the Spanish Virtual Observatory (http://svo.laeff.inta.es), has been developed following the standards and requirements defined by IVOA (http://www.ivoa.net). The COROT archive at LAEFF will be publicly available by the end of 2008.

  3. The extreme ultraviolet explorer archive

    NASA Astrophysics Data System (ADS)

    Polomski, E.; Drake, J. J.; Dobson, C.; Christian, C.

    1993-09-01

    The Extreme Ultraviolet Explorer (EUVE) public archive was created to handle the storage, maintenance, and distribution of EUVE data and ancillary documentation, information, and software. Access to the archive became available to the public on July 17, 1992, only 40 days after the launch of the EUVE satellite. A brief overview of the archive's contents and the various methods of access is presented.

  4. The Gaia Archive at ESAC: a VO-inside archive

    NASA Astrophysics Data System (ADS)

    Gonzalez-Nunez, J.

    2015-12-01

    The ESDC (ESAC Science Data Center) is an active member of the IVOA (International Virtual Observatory Alliance), which has defined a set of standards, libraries and concepts that allow flexible, scalable and interoperable architectures to be built for data archives. For astronomy involving large catalogues, as in Gaia or Euclid, the TAP, UWS and VOSpace standards can be used to create an architecture that allows the community to exploit these valuable data. New challenges also arise, such as implementing the new paradigm of "moving code close to the data", which can be partially achieved by extending the protocols (TAP+, UWS+, etc.) or the languages (ADQL). We explain how we have used VO standards and libraries for the Gaia Archive, which has not only produced an open and interoperable archive but has also minimized development in certain areas. We also explain how we have extended these protocols, and our future plans.

  5. Sampling solution traces for the problem of sorting permutations by signed reversals

    PubMed Central

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of large permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is, however, conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists of a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results
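
    The elementary operation underlying all of the above can be shown in a few lines; this illustrates only the signed reversal itself, not the sampling algorithms. Reversing a segment of a signed permutation flips both the order and the signs of the elements it covers; a sorting scenario is a sequence of such reversals ending at the identity.

    ```python
    def signed_reversal(perm, i, j):
        """Reverse perm[i..j] inclusive, negating each element."""
        return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

    p = [+3, -1, +2]
    p = signed_reversal(p, 0, 1)   # -> [+1, -3, +2]
    p = signed_reversal(p, 1, 2)   # -> [+1, -2, +3]
    p = signed_reversal(p, 1, 1)   # -> [+1, +2, +3]: sorted in 3 reversals
    ```

    A trace, in the paper's sense, groups all orderings of the same reversal set that reach the identity, which is why enumerating traces is so much cheaper than enumerating individual solutions.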

  6. Archival Information Management System.

    DTIC Science & Technology

    1995-02-01

    management system named Archival Information Management System (AIMS), designed to meet the audit trail requirement for studies completed under the...are to be archived to the extent that future reproducibility and interrogation of results will exist. This report presents a prototype information

  7. The Agh Band loess-palaeosol sequence in Northern Iran - a detailed archive for climate and environmental change during the last and penultimate glacial - interglacial cycles

    NASA Astrophysics Data System (ADS)

    Lauer, Tobias; Frechen, Manfred; Vlaminck, Stefan; Kehl, Martin; Sharifi, Jafar; Rolf, Christian; Khormali, Farhad

    2016-04-01

    The northern Iranian loess profiles host important information on Quaternary climate and palaeoenvironmental changes in the region. Furthermore, they provide an important link for correlating European and Central Asian archives. Due to a significant climatic gradient, with precipitation decreasing from west to east and from south to north, loess-palaeosol sequences that formed synchronously under different climatic conditions can be studied. The Agh Band profile is located in the so-called Iranian "Loess Plateau", a semi-arid region with about 300 mm annual precipitation. The loess deposits reach a thickness of >60 meters and are subdivided by several weak soil horizons in the upper part and by a pedocomplex of three Bw(y) horizons in the lower part of the loess. The Agh Band profile was sampled at 2 cm intervals for multi-proxy analyses (e.g. magnetic susceptibility and grain size measurements). Furthermore, samples for palaeomagnetic studies and luminescence dating were collected, and a pIRIR290 approach was applied to fine-grained polyminerals. The results show that the Agh Band profile yields a climate archive reaching from MIS 7 to MIS 2. Several chronological hiatuses of some 10 ka show that periods of intense loess accumulation were interrupted by phases of only minor loess sedimentation and/or erosion. The Agh Band profile offers extraordinarily good temporal resolution for MIS 4 and MIS 5. According to the luminescence ages, the pedocomplex at the bottom of the profile indicates a period of increased humidity and landscape stability during late MIS 7 and MIS 6. The loess profile is also subdivided by several shifts in grain-size distribution; the coarsening- and fining-upward trends correlate with increasing and decreasing wind velocity, respectively.

  8. Use of Archived Information by the United States National Data Center

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Pope, B. M.; Roman-Nieves, J. I.; VanDeMark, T. F.; Ichinose, G. A.; Poffenberger, A.; Woods, M. T.

    2012-12-01

    The United States National Data Center (US NDC) is responsible for monitoring international compliance with nuclear test ban treaties, acquiring data and data products from the International Data Center (IDC), and distributing data according to established policy. The archive of automated and reviewed event solutions residing at the US NDC is a valuable resource for assessing and improving the performance of signal detection, event formation, location, and discrimination algorithms. Numerous research initiatives are currently underway that are focused on optimizing these processes using historic waveform data and alphanumeric information. Identification of optimum station processing parameters is routinely performed through the analysis of archived waveform data. Station-specific detector tuning studies produce and compare receiver operating characteristics for multiple detector configurations (e.g., detector type, filter passband) to identify an optimum set of processing parameters with an acceptable false alarm rate. Large aftershock sequences can inundate automated phase association algorithms with numerous detections that are closely spaced in time, which increases the number of false and/or mixed associations in automated event solutions and increases analyst burden. Archived waveform data and alphanumeric information are being exploited to develop an aftershock processor that will construct association templates to assist the Global Association (GA) application, reduce the number of false and merged phase associations, and lessen analyst burden. Statistical models are being developed and evaluated for potential use by the GA application for identifying and rejecting unlikely preliminary event solutions. Other uses of archived data at the US NDC include improved event locations using empirical travel time corrections and discrimination via a statistical framework known as the event classification matrix (ECM).
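
    The detector tuning step described above can be sketched generically: sweep a detection threshold over labelled archive data and collect (false-alarm rate, hit rate) pairs, i.e. points on a receiver operating characteristic. This is a minimal illustration of the ROC idea only, not the US NDC's tooling, and the scores and labels are invented.

    ```python
    def roc_points(scores, labels, thresholds):
        """For each threshold, return (false-alarm rate, hit rate) over
        labelled detection windows."""
        pts = []
        pos = sum(labels)
        neg = len(labels) - pos
        for th in thresholds:
            hits = sum(s >= th and l for s, l in zip(scores, labels))
            fas = sum(s >= th and not l for s, l in zip(scores, labels))
            pts.append((fas / neg, hits / pos))
        return pts

    scores = [0.9, 0.8, 0.4, 0.3]   # detector statistic per window (toy)
    labels = [1, 1, 0, 0]           # 1 = real signal present
    # At th=0.5: no false alarms, all signals caught -> (0.0, 1.0).
    pts = roc_points(scores, labels, [0.5, 0.35])
    ```

    Comparing such curves across detector configurations (filter passband, detector type) is what lets a tuning study pick parameters with an acceptable false alarm rate.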

  9. The Hubble Spectroscopic Legacy Archive

    NASA Astrophysics Data System (ADS)

    Peeples, M.; Tumlinson, J.; Fox, A.; Aloisi, A.; Fleming, S.; Jedrzejewski, R.; Oliveira, C.; Ayres, T.; Danforth, C.; Keeney, B.; Jenkins, E.

    2017-04-01

    With no future space ultraviolet instruments currently planned, the data from the UV spectrographs aboard the Hubble Space Telescope have a legacy value beyond their initial science goals. The goal of the Hubble Spectroscopic Legacy Archive (HSLA) is to provide to the community new science-grade combined spectra for all publicly available data obtained by the Cosmic Origins Spectrograph (COS) and the Space Telescope Imaging Spectrograph (STIS). These data are packaged into "smart archives" according to target type and scientific themes to facilitate the construction of archival samples for common science uses. A new "quick look" capability makes the data easy for users to quickly access, assess the quality of, and download for archival science. The first generation of these products for the far-ultraviolet (FUV) modes of COS was made available online via the Mikulski Archive for Space Telescopes (MAST) in early 2016 and updated in early 2017; future releases will include COS/NUV and STIS/UV data.

  10. Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view

    NASA Astrophysics Data System (ADS)

    Locati, Mario; Rovida, Andrea; Albini, Paola

    2014-05-01

    Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to do their daily work with general-purpose tools and/or to write their own code for specific tasks. The lack of out-of-the-box tools expressly conceived to deal with historical data leads to a huge amount of time lost on tedious tasks: searching for data and manually reformatting it to move from one tool to another, sometimes causing a loss of the original data. This reality is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.

  11. Petroleum formation during serpentinization: the evidence of trace elements

    NASA Astrophysics Data System (ADS)

    Szatmari, P.; Fonseca, T. C.; Miekeley, N. F.

    2002-05-01

    elements are at higher levels than those of the first group, about 300 times less than their abundances in mantle peridotites, reflecting their higher availability during serpentinization. Within both groups, trace metal ratios and A/(A+B) type proportionalities in the oils are close to mantle peridotites. V behaves somewhat differently: in lacustrine sequences V contents in the oils are low and the ratios of V to other elements of the second group are mantle-like, whereas in marine sequences V and its ratios to other trace elements rise by orders of magnitude. Trace elements commonly enriched in formation fluids and hydrothermal brines (Rb, Sr, Ba, Cu, Zn), when normalized to mantle peridotites, are enriched in the oils by about 0.5 order of magnitude relative to other elements of the second group. The third group of elements includes S, Mo, and As. These elements occur in the oils at abundances similar to sea water and are, when normalized to mantle peridotites and Ni, enriched in the oils by several orders of magnitude, indicating sea water reacting with peridotites during serpentinization as their possible source. Finally, trace elements of the fourth group, such as Pb and Ag, are enriched in the oils by several orders of magnitude relative to both mantle peridotites and sea water and were presumably mobilized from shales by hydrothermal fluids. References: Holm, N.G. and Charlou, J.L., 2001, EPSL 191, 1-8. Janecky, D.R. and Seyfried, W.E., 1986, Geochim. Cosmochim. Acta 50, 1357-1378. Szatmari, P., 1989, AAPG Bull. 73, 989-998.

  12. A generic archive protocol and an implementation

    NASA Technical Reports Server (NTRS)

    Jordan, J. M.; Jennings, D. G.; Mcglynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.

    1992-01-01

    Archiving vast amounts of data has become a major part of every scientific space mission today. The Generic Archive/Retrieval Services Protocol (GRASP) addresses the question of how to archive the data collected in an environment where the underlying hardware archives may be rapidly changing. GRASP is a device-independent specification defining a set of functions for storing and retrieving data from an archive, as well as other support functions. GRASP is divided into two levels: the Transfer Interface and the Action Interface. The Transfer Interface is computer/archive-independent code while the Action Interface contains code which is dedicated to each archive/computer addressed. Implementations of the GRASP specification are currently available for DECstations running Ultrix, Sparcstations running SunOS, and microVAX/VAXstation 3100s. The underlying archive is assumed to function as a standard Unix or VMS file system. The code, written in C, is a single suite of files. Preprocessing commands define the machine-unique code sections in the device interface. The implementation was written, to the greatest extent possible, using only ANSI standard C functions.
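
    The two-level split described above can be sketched as follows. This is a hedged illustration of the design idea only (the real GRASP is ANSI C with preprocessor-selected device code, not Python), with an in-memory dict standing in for a Unix/VMS file-system archive; all class and method names here are invented for the sketch.

    ```python
    class ActionInterface:
        """Backend-specific storage layer; swapped out per archive/computer."""
        def __init__(self):
            self._store = {}
        def write(self, key, data):
            self._store[key] = data
        def read(self, key):
            return self._store[key]

    class TransferInterface:
        """Device-independent layer: callers use the same functions
        regardless of which Action Interface sits underneath."""
        def __init__(self, action):
            self.action = action
        def archive(self, name, data):
            self.action.write(name, data)
        def retrieve(self, name):
            return self.action.read(name)

    arch = TransferInterface(ActionInterface())
    arch.archive("obs001.fits", b"\x00\x01")
    data = arch.retrieve("obs001.fits")
    ```

    Porting to a new storage device then means writing only a new Action Interface, which is the point of the device-independent specification.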

  13. About Fermilab - History and Archives Project

    Science.gov Websites

    Fermilab Organization Chart Diversity Architecture History and Archives Project Sustainability Nature Accommodations Recreation Architecture & History Nature/Ecology Order Fermilab Merchandise Online Education K Fermilab History and Archives Project Archives Project main page | Fermilab History main page A Brief

  14. The GTC scientific archive

    NASA Astrophysics Data System (ADS)

    Gutiérrez, R.; Solano, E.

    2011-11-01

    At present, data management in telescopes of class 8-10 meters is very inefficient. The Gran Telescopio Canarias (GTC) scientific archive that is being developed by the Centro de Astrobiología (CAB) in the framework of the Spanish Virtual Observatory is aimed at avoiding this situation, providing the telescope with an archive accessible via the internet and guaranteeing the accessibility, efficiency, visibility and data security demanded by a telescope of this class. The GTC archive will also be adapted to the standards defined by the International Virtual Observatory, maximizing the visibility of the data produced by the telescope. The main characteristics of the GTC scientific archive are described in this poster.

  15. A Background to Motion Picture Archives.

    ERIC Educational Resources Information Center

    Fletcher, James E.; Bolen, Donald L., Jr.

    The emphasis of archives is on the maintenance and preservation of materials for scholarly research and professional reference. Archives may be established as separate entities or as part of a library or museum. Film archives may include camera originals (positive and negative), sound recordings, outtakes, scripts, contracts, advertising…

  16. Trace fossil evidence for late Permian shallow water condition in Guryul ravine, Kashmir, India

    NASA Astrophysics Data System (ADS)

    Parcha, Suraj; Horacek, Micha; Krystyn, Leopold; Pandey, Shivani

    2015-04-01

    The present study focuses on the Late Permian (Changhsingian) succession exposed in the Guryul ravine, Kashmir Basin. The basin preserves a complete Cambro-Triassic sequence and thus occupies a unique position in the geology of the Himalaya. The Guryul Ravine Permian mainly comprises mixed siliciclastic-carbonate sediments deposited in a shallow-shelf or ramp setting. The present assemblage of ichnofossils is the first significant report of trace fossils from the Guryul ravine since early reports in the 1970s. The ichnofossils reported from this section include Diplichnites, Dimorphichnus, Monomorphichnus, Planolites and Skolithos, along with burrows, scratch marks and possible annelid worm traces. The ichnofossils are mainly preserved in medium-grained sandstone-mudstone facies. They are widely distributed throughout the section, are mostly of arthropod and annelid origin, record behavioural activity, mainly dwelling and feeding, and evidence the dominant presence of deposit feeders. The vertical to slightly inclined biogenic structures are commonly recognized in semi-consolidated substrates, a characteristic feature of nearshore/foreshore marine environments with moderate- to high-energy conditions. The topmost layer of silty shale contains trace fossils such as Skolithos and poorly preserved burrows. The burrow fill is the same material as the host rock. The studied Zewan C and D sequence represents the early to late part of the Changhsingian stage, from 40 to 5 m below the top of the Zewan D member, with bioturbation still evident in some limestone layers up to 2 m above. No trace fossils could be recognized in the topmost 3 m of the Zewan D beds due to their gliding-related amalgamated structure. The widespread distribution of the traces and their in situ nature will be useful for interpreting the paleoecological and paleoenvironmental conditions during the late Permian in the Guryul ravine of Kashmir.

  17. One-Dimensional Signal Extraction Of Paper-Written ECG Image And Its Archiving

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-ni; Zhang, Hong; Zhuang, Tian-ge

    1987-10-01

    A method for converting paper-written electrocardiograms to one-dimensional (1-D) signals for archival storage on floppy disk is presented here. Appropriate image processing techniques were employed to remove the background noise inherent to ECG recorder charts and to reconstruct the ECG waveform. The entire process consists of (1) digitization of paper-written ECGs with an image processing system via a TV camera; (2) image preprocessing, including histogram filtering and binary image generation; (3) ECG feature extraction and wave tracing; and (4) transmission of the processed ECG data to IBM-PC compatible floppy disks for storage and retrieval. The algorithms employed here may also be used in the recognition of paper-written EEG or EMG and may be useful in robotic vision.
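The reduction from a binarized chart image to a 1-D signal, one amplitude per image column, can be sketched with NumPy (a minimal reconstruction of steps (2)-(3); the threshold value is invented, and the paper's actual feature-extraction algorithms are more involved):

```python
import numpy as np

def extract_trace(img, threshold=128):
    """Binarize a grayscale chart image (dark trace on light paper) and
    reduce it to a 1-D signal: the row-centroid of dark pixels per column."""
    binary = img < threshold              # True where the trace ink is
    rows = np.arange(img.shape[0])[:, None]
    counts = binary.sum(axis=0)
    with np.errstate(invalid="ignore"):   # columns with no ink yield NaN
        centroid = (rows * binary).sum(axis=0) / counts
    return centroid

# Synthesize a chart: a sine-like trace drawn in black on white paper.
h, w = 100, 200
img = np.full((h, w), 255, dtype=np.uint8)
true_rows = (50 + 30 * np.sin(np.linspace(0, 4 * np.pi, w))).astype(int)
img[true_rows, np.arange(w)] = 0
signal = extract_trace(img)
```

With one ink pixel per column, the recovered signal reproduces the drawn waveform exactly; real scans would need the histogram filtering the paper describes to suppress gridlines first.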

  18. Internet FAQ Archives - Online Education - faqs.org

    Science.gov Websites

    faqs.org Internet FAQ Archives - Online Education faqs.org faqs.org - Internet FAQ Archives Internet FAQ Archives Online Education Internet RFC Index Usenet FAQ Index Other FAQs Documents Tools IFC Rated FAQs Internet RFC/STD/FYI/BCP Archives The Internet RFC series of documents is also available from

  19. Image dissemination and archiving.

    PubMed

    Robertson, Ian

    2007-08-01

    Images generated as part of the sonographic examination are an integral part of the medical record and must be retained according to local regulations. The standard medical image format, known as DICOM (Digital Imaging and COmmunications in Medicine), makes it possible for images from many different imaging modalities, including ultrasound, to be distributed via a standard internet network to distant viewing workstations and a central archive in an almost seamless fashion. The DICOM standard is a truly universal standard for the dissemination of medical images. When purchasing an ultrasound unit, the consumer should research the unit's capacity to generate images in a DICOM format, especially if one wishes interconnectivity with viewing workstations and an image archive that stores other medical images. PACS, an acronym for Picture Archiving and Communication System, refers to the infrastructure that links modalities, workstations, the image archive, and the medical record information system into an integrated system, allowing for efficient electronic distribution and storage of medical images and access to medical record data.
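One concrete, publicly specified detail of the DICOM file format is its Part 10 signature: a 128-byte preamble followed by the magic bytes "DICM". A hypothetical sanity check before handing a file to a full DICOM parser might look like:

```python
import os
import tempfile

def looks_like_dicom(path):
    """Check the DICOM Part 10 signature: a 128-byte preamble followed
    by the four magic bytes b'DICM'."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Demonstrate on two synthetic files: one with the signature, one without.
good = os.path.join(tempfile.mkdtemp(), "scan.dcm")
with open(good, "wb") as f:
    f.write(b"\x00" * 128 + b"DICM" + b"rest of the data set")
bad = good + ".txt"
with open(bad, "wb") as f:
    f.write(b"not an image")
```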

  20. Building a COTS archive for satellite data

    NASA Technical Reports Server (NTRS)

    Singer, Ken; Terril, Dave; Kelly, Jack; Nichols, Cathy

    1994-01-01

    The goal of the NOAA/NESDIS Active Archive was to provide a method of access to an online archive of satellite data. The archive had to manage and store the data, let users interrogate the archive, and allow users to retrieve data from the archive. Practical issues of the system design, such as implementation time, cost and operational support, were examined in addition to the technical issues. There was a fixed window of opportunity to create an operational system, along with budget and staffing constraints. Therefore, the technical solution had to be designed and implemented subject to the constraints imposed by the practical issues. The NOAA/NESDIS Active Archive came online in July of 1994, meeting all of its original objectives.

  1. Metaphor as a Possible Pathway to More Formal Understanding of the Definition of Sequence Convergence

    ERIC Educational Resources Information Center

    Dawkins, Paul Christian

    2012-01-01

    This study presents how the introduction of a metaphor for sequence convergence constituted an experientially real context in which an undergraduate real analysis student developed a property-based definition of sequence convergence. I use elements from Zandieh and Rasmussen's (2010) Defining as a Mathematical Activity framework to trace the…

  2. Status of the ISS Trace Contaminant Control System

    NASA Technical Reports Server (NTRS)

    Macatangay, Ariel V.; Perry, Jay L.; Johnson, Sharon A.; Belcher, Paul A.

    2009-01-01

    A habitable atmosphere is a fundamental requirement for human spaceflight. To meet such a requirement, the cabin atmosphere must be constantly scrubbed to maintain human life and system functionality. The primary system for atmospheric scrubbing of the US on-orbit segment (USOS) of the International Space Station (ISS) is the Trace Contaminant Control System (TCCS). As part of the Environmental Control and Life Support Systems (ECLSS) atmosphere revitalization rack in the US Lab, the TCCS operates continuously, scrubbing trace contaminants generated primarily by two sources: the metabolic offgassing of crew members and the offgassing of equipment in the ISS. It has been online approximately 95% of the time since its activation in February 2001. The TCCS comprises a charcoal bed, a catalytic oxidizer, and a lithium hydroxide post-sorbent bed, all of which are designed to be replaced on-orbit when necessary. In 2006, all three beds were replaced following an observed increase in system resistance that occurred over a period of several months. The beds were returned to the ground and subjected to test, teardown, and evaluation to investigate the root cause(s) of the decrease in flow rate through the system. In addition, various chemical and physical analyses of the bed materials were performed to determine contaminant loading and any changes in performance. This paper focuses mainly on the results of these analyses and how they correlate with what has been observed from archival sampling and on-orbit events. This may provide insight into the future performance of the TCCS and the rate of change of orbital replacement units in the TCCS.

  3. Stewardship of very large digital data archives

    NASA Technical Reports Server (NTRS)

    Savage, Patric

    1991-01-01

    An archive is a permanent store. There are relatively few very large digital data archives in existence. Most business records are expired within five or ten years. Many kinds of business records that do have long lives are embedded in databases that are continually updated and re-issued cyclically. Also, a great many permanent business records are actually archived as microfilm, fiche, or optical disk images, their digital version being an operational convenience rather than an archive. The problems foreseen in stewarding the very large digital data archives that will accumulate during the mission of the Earth Observing System (EOS) are addressed, focusing on the function of shepherding archived digital data into an endless future. Stewardship entails storing and protecting the archive and providing meaningful service to the community of users. The steward will (1) provide against loss due to physical phenomena; (2) assure that data is not lost due to storage technology obsolescence; and (3) maintain data in a current formatting methodology.

  4. ExoMars Trace Gas Orbiter (TGO) Science Ground Segment (SGS)

    NASA Astrophysics Data System (ADS)

    Metcalfe, L.; Aberasturi, M.; Alonso, E.; Álvarez, R.; Ashman, M.; Barbarisi, I.; Brumfitt, J.; Cardesín, A.; Coia, D.; Costa, M.; Fernández, R.; Frew, D.; Gallegos, J.; García Beteta, J. J.; Geiger, B.; Heather, D.; Lim, T.; Martin, P.; Muñoz Crego, C.; Muñoz Fernandez, M.; Villacorta, A.; Svedhem, H.

    2018-06-01

    The ExoMars Trace Gas Orbiter (TGO) Science Ground Segment (SGS), comprising the payload instrument teams and the ESA and Russian operational centres, is responsible for planning the science operations of the TGO mission and for the generation and archiving of the scientific data products to levels meeting the scientific aims and criteria specified by the ESA Project Scientist as advised by the Science Working Team (SWT). The ExoMars SGS builds extensively upon tools and experience acquired through earlier ESA planetary missions like Mars and Venus Express and Rosetta, but is also breaking new ground in various respects toward the science operations of future missions like BepiColombo or JUICE. A productive interaction with the Russian partners in the mission facilitates broad and effective collaboration. This paper describes the global organisation and operation of the SGS, with reference to its principal systems, interfaces and operational processes.

  5. Tracing melioidosis back to the source: using whole-genome sequencing to investigate an outbreak originating from a contaminated domestic water supply.

    PubMed

    McRobb, Evan; Sarovich, Derek S; Price, Erin P; Kaestli, Mirjam; Mayo, Mark; Keim, Paul; Currie, Bart J

    2015-04-01

    Melioidosis, a disease of public health importance in Southeast Asia and northern Australia, is caused by the Gram-negative soil bacillus Burkholderia pseudomallei. Melioidosis is typically acquired through environmental exposure, and case clusters are rare, even in regions where the disease is endemic. B. pseudomallei is classed as a tier 1 select agent by the Centers for Disease Control and Prevention; from a biodefense perspective, source attribution is vital in an outbreak scenario to rule out a deliberate release. Two cases of melioidosis within a 3-month period at a residence in rural northern Australia prompted an investigation to determine the source of exposure. B. pseudomallei isolates from the property's groundwater supply matched the multilocus sequence type of the clinical isolates. Whole-genome sequencing confirmed the water supply as the probable source of infection in both cases, with the clinical isolates differing from the likely infecting environmental strain by just one single nucleotide polymorphism (SNP) each. For the first time, we report a phylogenetic analysis of genomewide insertion/deletion (indel) data, an approach conventionally viewed as problematic due to high mutation rates and homoplasy. Our whole-genome indel analysis was concordant with the SNP phylogeny, and these two combined data sets provided greater resolution and a better fit with our epidemiological chronology of events. Collectively, this investigation represents a highly accurate account of source attribution in a melioidosis outbreak and gives further insight into a frequently overlooked reservoir of B. pseudomallei. Our methods and findings have important implications for outbreak source tracing of this bacterium and other highly recombinogenic pathogens. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
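The core comparison behind this kind of source attribution, counting SNPs between aligned genomes, can be sketched with toy data (the sequences below are invented for illustration; the study works on whole-genome alignments and adds an indel phylogeny on top):

```python
def snp_distance(a, b):
    """Count single-nucleotide differences between two aligned sequences,
    ignoring alignment gaps ('-') and ambiguous calls ('N')."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x != y and x not in "-N" and y not in "-N"
               for x, y in zip(a, b))

# Toy aligned core-genome fragments: the clinical isolate differs from
# the environmental strain by one SNP, an unrelated strain by three.
env       = "ATGCGTACCTGA"
clinical  = "ATGCGTACTTGA"
unrelated = "ATGAGTACTTCA"
```

A clinical isolate sitting one SNP from an environmental strain, as here, is the pattern the study used to attribute infection to the water supply.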

  6. HST archive primer, version 4.1

    NASA Technical Reports Server (NTRS)

    Fruchter, A. (Editor); Baum, S. (Editor)

    1994-01-01

    This version of the HST Archive Primer provides the basic information a user needs to know to access the HST archive via StarView, the new user interface to the archive. Using StarView, users can search for observations of interest, find calibration reference files, and retrieve data from the archive. Both the terminal version of StarView and the X-windows version feature a name resolver which simplifies searches of the HST archive based on target name. In addition, the X-windows version of StarView allows preview of all public HST data; compressed versions of public images are displayed via SAOIMAGE, while spectra are plotted using the public plotting package XMGR. Finally, the version of StarView described here features screens designed for observers preparing Cycle 5 HST proposals.

  7. Biological nanopore MspA for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Manrao, Elizabeth A.

    Unlocking the information hidden in the human genome provides insight into the inner workings of complex biological systems and can be used to greatly improve health-care. In order to allow for widespread sequencing, new technologies are required that provide fast and inexpensive readings of DNA. Nanopore sequencing is a third generation DNA sequencing technology that is currently being developed to fulfill this need. In nanopore sequencing, a voltage is applied across a small pore in an electrolyte solution and the resulting ionic current is recorded. When DNA passes through the channel, the ionic current is partially blocked. If the DNA bases uniquely modulate the ionic current flowing through the channel, the time trace of the current can be related to the sequence of DNA passing through the pore. There are two main challenges to realizing nanopore sequencing: identifying a pore with sensitivity to single nucleotides and controlling the translocation of DNA through the pore so that the small single nucleotide current signatures are distinguishable from background noise. In this dissertation, I explore the use of Mycobacterium smegmatis porin A (MspA) for nanopore sequencing. In order to determine MspA's sensitivity to single nucleotides, DNA strands of various compositions are held in the pore as the resulting ionic current is measured. DNA is immobilized in MspA by attaching it to a large molecule which acts as an anchor. This technique confirms the single nucleotide resolution of the pore and additionally shows that MspA is sensitive to epigenetic modifications and single nucleotide polymorphisms. The forces from the electric field within MspA, the effective charge of nucleotides, and elasticity of DNA are estimated using a Freely Jointed Chain model of single stranded DNA. These results offer insight into the interactions of DNA within the pore. 
With the nucleotide sensitivity of MspA confirmed, a method is introduced to controllably pass DNA through the pore
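The idea of relating an ionic-current trace to a base sequence can be caricatured as nearest-centroid classification. The current levels below are invented, and real MspA blockade currents depend on roughly four nucleotides in the constriction at once, so this is a deliberately simplified single-nucleotide model:

```python
# Hypothetical mean blockade currents (pA) per nucleotide -- illustrative
# values only, not measured MspA levels.
LEVELS = {"A": 52.0, "C": 48.0, "G": 55.0, "T": 45.0}

def call_bases(trace):
    """Assign each current sample to the nucleotide whose reference level
    is closest (nearest-centroid classification of the 1-D signal)."""
    return "".join(min(LEVELS, key=lambda b: abs(LEVELS[b] - sample))
                   for sample in trace)

trace = [52.1, 44.8, 55.3, 48.2]
bases = call_bases(trace)
```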

  8. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    NASA Astrophysics Data System (ADS)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have been providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.

  9. Providing Comprehensive and Consistent Access to Astronomical Observatory Archive Data: The NASA Archive Model

    NASA Technical Reports Server (NTRS)

    McGlynn, Thomas; Guiseppina, Fabbiano A; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; hide

    2016-01-01

    Since the turn of the millennium, astronomical archives have been providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.

  10. Mantis: A Fast, Small, and Exact Large-Scale Sequence-Search Index.

    PubMed

    Pandey, Prashant; Almodaresi, Fatemeh; Bender, Michael A; Ferdman, Michael; Johnson, Rob; Patro, Rob

    2018-06-18

    Sequence-level searches on large collections of RNA sequencing experiments, such as the NCBI Sequence Read Archive (SRA), would enable one to ask many questions about the expression or variation of a given transcript in a population. Existing approaches, such as the sequence Bloom tree, suffer from fundamental limitations of the Bloom filter, resulting in slow build and query times, less-than-optimal space usage, and potentially large numbers of false-positives. This paper introduces Mantis, a space-efficient system that uses new data structures to index thousands of raw-read experiments and facilitates large-scale sequence searches. In our evaluation, index construction with Mantis is 6× faster and yields a 20% smaller index than the state-of-the-art split sequence Bloom tree (SSBT). For queries, Mantis is 6-108× faster than SSBT and has no false-positives or -negatives. For example, Mantis was able to search for all 200,400 known human transcripts in an index of 2,652 RNA sequencing experiments in 82 min; SSBT took close to 4 days. Copyright © 2018 Elsevier Inc. All rights reserved.
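The color-class idea behind Mantis, storing each distinct experiment set once and pointing k-mers at it, can be sketched in pure Python (a toy stand-in for the counting-quotient-filter machinery; function names, the tiny k, and the threshold are illustrative):

```python
from collections import defaultdict

def kmers(seq, k=4):
    """Yield all overlapping k-mers of a sequence."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_index(experiments, k=4):
    """Map each k-mer to a 'color class': identical experiment sets are
    stored once and shared, as in Mantis."""
    kmer_to_exps = defaultdict(set)
    for name, reads in experiments.items():
        for read in reads:
            for km in kmers(read, k):
                kmer_to_exps[km].add(name)
    class_ids, kmer_to_class = {}, {}
    for km, exps in kmer_to_exps.items():
        cid = class_ids.setdefault(frozenset(exps), len(class_ids))
        kmer_to_class[km] = cid
    return kmer_to_class, {cid: set(exps) for exps, cid in class_ids.items()}

def query(seq, index, classes, k=4, theta=0.8):
    """Report experiments containing at least a fraction theta of the
    query's k-mers (theta-approximate membership, as in SBT/Mantis)."""
    kms = list(kmers(seq, k))
    hits = defaultdict(int)
    for km in kms:
        if km in index:
            for exp in classes[index[km]]:
                hits[exp] += 1
    return {e for e, n in hits.items() if n / len(kms) >= theta}

index, classes = build_index({"e1": ["ACGTACGT"], "e2": ["TTTTACGT"]})
```

Collapsing duplicate experiment sets is what keeps the real index small: across thousands of experiments, many k-mers share the same occurrence pattern.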

  11. The new European Hubble archive

    NASA Astrophysics Data System (ADS)

    De Marchi, Guido; Arevalo, Maria; Merin, Bruno

    2016-01-01

    The European Hubble Archive (hereafter eHST), hosted at ESA's European Space Astronomy Centre, was released for public use in October 2015. The eHST is now fully integrated with the other ESA science archives to ensure long-term preservation of the Hubble data, consisting of more than 1 million observations from 10 different scientific instruments. The public HST data, the Hubble Legacy Archive, and the high-level science data products are now all available to scientists through a single, carefully designed and user-friendly web interface. In this talk, I will show how the eHST can help boost archival research, including how to search for sources in the field of view thanks to precise footprints projected onto the sky, how to obtain enhanced previews of imaging data and interactive spectral plots, and how to directly link observations with already published papers. To maximise the scientific exploitation of Hubble's data, the eHST offers connectivity to virtual observatory tools, integrates easily with the recently released Hubble Source Catalog, and is fully accessible through ESA's archives multi-mission interface.

  12. Long-term data archiving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, David Steven

    2009-01-01

    Long term data archiving has much value for chemists, not only to retain access to research and product development records, but also to enable new developments and new discoveries. There are some recent regulatory requirements (e.g., FDA 21 CFR Part 11), but good science and good business both benefit regardless. A particular example of the benefits of and need for long term data archiving is the management of data from spectroscopic laboratory instruments. The sheer amount of spectroscopic data is increasing at a scary rate, and the pressures to archive come from the expense to create the data (or recreate it if it is lost) as well as its high information content. The goal of long-term data archiving is to save and organize instrument data files as well as any needed metadata (such as sample ID, LIMS information, operator, date, time, instrument conditions, sample type, excitation details, environmental parameters, etc.). This editorial explores the issues involved in long-term data archiving using the example of Raman spectral databases. There are at present several such databases, including common data format libraries and proprietary libraries. However, such databases and libraries should ultimately satisfy stringent criteria for long term data archiving, including readability for long times into the future, robustness to changes in computer hardware and operating systems, and use of public domain data formats. The latter criterion implies the data format should be platform independent and the tools to create the data format should be easily and publicly obtainable or developable. Several examples of attempts at spectral libraries exist, such as the ASTM ANDI format, and the JCAMP-DX format. On the other hand, proprietary library spectra can be exchanged and manipulated using proprietary tools. As the above examples have deficiencies according to the three long term data archiving criteria, Extensible Markup Language (XML; a product of the World Wide
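The editorial's criteria (platform independence, publicly obtainable tools) are easy to illustrate with plain XML from the Python standard library. The element names below are invented for the sketch, not a published schema such as JCAMP-DX:

```python
import xml.etree.ElementTree as ET

def spectrum_to_xml(meta, wavenumbers, counts):
    """Serialize one Raman spectrum plus its metadata as plain XML: a
    platform-independent text format readable with freely available
    parsers, which is the archiving property the editorial asks for."""
    root = ET.Element("spectrum")
    md = ET.SubElement(root, "metadata")
    for key, value in meta.items():
        ET.SubElement(md, key).text = str(value)
    data = ET.SubElement(root, "data", units="cm-1,counts")
    data.text = " ".join(f"{w}:{c}" for w, c in zip(wavenumbers, counts))
    return ET.tostring(root, encoding="unicode")

doc = spectrum_to_xml(
    {"sample_id": "S-001", "operator": "jdoe", "instrument": "Raman-1"},
    [520, 521, 522], [1100, 2300, 1150],
)
# A future reader needs only an XML parser to recover everything:
recovered = ET.fromstring(doc)
```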

  13. The Rosetta Science Archive: Status and Plans for Completing and Enhancing the Archive Content

    NASA Astrophysics Data System (ADS)

    Heather, D.; Barthelemy, M.; Fraga, D.; Grotheer, E.; O'Rourke, L.; Taylor, M.

    2017-09-01

    On 30 September 2016, Rosetta's signal flat-lined, confirming that the spacecraft had completed its incredible mission by landing on the surface of Comet 67P/Churyumov-Gerasimenko. Although this marked an end to the spacecraft's active operations, intensive work is still on-going with instrument teams preparing their final science data increments for delivery and ingestion into ESA's Planetary Science Archive (PSA). In addition to this, ESA is establishing contracts with a number of instrument teams to enhance and improve their data and documentation in an effort to provide the best long-term archive possible for the Rosetta mission. This presentation will outline the current status of the Rosetta archive, as well as highlighting some of the 'enhanced archiving' activities planned and underway with the various instrument teams on Rosetta to ensure the scientific legacy of the mission.

  14. Tracing cell lineages in videos of lens-free microscopy.

    PubMed

    Rempfler, Markus; Stierle, Valentin; Ditzel, Konstantin; Kumar, Sanjeev; Paulitschke, Philipp; Andres, Bjoern; Menze, Bjoern H

    2018-06-05

    In vitro experiments with cultured cells are essential for studying their growth and migration pattern and thus for gaining a better understanding of cancer progression and its treatment. Recent progress in lens-free microscopy (LFM) has rendered it an inexpensive tool for label-free, continuous live cell imaging, yet there is little work on analysing such time-lapse image sequences. We propose (1) a cell detector for LFM images based on fully convolutional networks and residual learning, and (2) a probabilistic model based on moral lineage tracing that explicitly handles multiple detections and temporal successor hypotheses by clustering and tracking simultaneously. (3) We benchmark our method in terms of detection and tracking scores on a dataset of three annotated sequences of several hours of LFM, where we demonstrate that it produces high quality lineages. (4) We evaluate its performance on a somewhat more challenging problem: estimating cell lineages from the LFM sequence as would be possible from a corresponding fluorescence microscopy sequence. We present experiments on 16 LFM sequences for which we acquired fluorescence microscopy in parallel and generated annotations from them. Finally, (5) we showcase our method's effectiveness for quantifying cell dynamics in an experiment with skin cancer cells. Copyright © 2018 Elsevier B.V. All rights reserved.
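Frame-to-frame linking, the simplest ingredient of lineage tracing, can be sketched as greedy nearest-neighbour matching (a deliberate simplification: unlike moral lineage tracing it ignores cell divisions and does no joint clustering; the distance gate is an invented parameter):

```python
import math

def link_frames(prev, curr, max_dist=20.0):
    """Greedily link each detection in the previous frame to its nearest
    unclaimed detection in the current frame, skipping matches farther
    than max_dist. Returns (prev_index, curr_index) pairs."""
    links, used = [], set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)          # Euclidean distance (Python 3.8+)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            used.add(best)
    return links
```

Chaining such links across all frames yields tracks; the paper's probabilistic model instead optimizes detections and successor hypotheses jointly, which is what makes the lineages robust.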

  15. Radio data archiving system

    NASA Astrophysics Data System (ADS)

    Knapic, C.; Zanichelli, A.; Dovgan, E.; Nanni, M.; Stagni, M.; Righini, S.; Sponza, M.; Bedosti, F.; Orlati, A.; Smareglia, R.

    2016-07-01

    Radio astronomical data models are becoming very complex owing to the huge range of instrumental configurations available with modern radio telescopes. What was in the past the last frontier of data formats in terms of efficiency and flexibility is now evolving, with new strategies and methodologies enabling the persistence of very complex, hierarchical and multi-purpose information. Such an evolution of data models and data formats requires new data archiving techniques in order to guarantee data preservation, following the directives of the Open Archival Information System and the International Virtual Observatory Alliance for data sharing and publication. Currently, various formats (FITS, MBFITS, VLBI XML description files and ancillary files) of data acquired with the Medicina and Noto radio telescopes can be stored and handled by a common Radio Archive, which is planned to be released to the (inter)national community by the end of 2016. This state-of-the-art archiving system for radio astronomical data aims at delegating as much as possible to the software the setting of how and where the descriptors (metadata) are saved, while users perform user-friendly queries that the web interface translates into complex interrogations of the database to retrieve data. In this way, the Archive is ready to be Virtual Observatory compliant and as user-friendly as possible.

  16. NCBI GEO: archive for functional genomics data sets—update

    PubMed Central

    Barrett, Tanya; Wilhite, Stephen E.; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F.; Tomashevsky, Maxim; Marshall, Kimberly A.; Phillippy, Katherine H.; Sherman, Patti M.; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L.; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra

    2013-01-01

    The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. PMID:23193258

  17. NCBI GEO: archive for functional genomics data sets--update.

    PubMed

    Barrett, Tanya; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra

    2013-01-01

    The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data.

  18. Astronomical Archive at Tartu Observatory

    NASA Astrophysics Data System (ADS)

    Annuk, K.

    2007-10-01

    Archiving astronomical data is an important task not only at large observatories but also at small ones. Here we describe the astronomical archive at Tartu Observatory. The archive consists of old photographic plate images, photographic spectrograms, CCD direct images and CCD spectroscopic data. The photographic plate digitizing project was started in 2005. An on-line database (based on MySQL) was created; it includes CCD data as well as photographic data. A PHP-MySQL interface was written to provide access to all data.
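
    A mixed plate/CCD archive of this kind boils down to one metadata table that both media share. The sketch below uses SQLite so it is self-contained (the abstract says MySQL); the table layout, column names, and sample rows are hypothetical, not Tartu Observatory's actual schema.

```python
import sqlite3

# Illustrative schema for a mixed photographic/CCD archive index.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation (
        id INTEGER PRIMARY KEY,
        obs_date TEXT,   -- ISO date of exposure
        target TEXT,     -- object name
        medium TEXT,     -- 'plate' or 'ccd'
        kind TEXT,       -- 'image' or 'spectrum'
        file_path TEXT   -- scanned plate or raw CCD frame location
    )""")
rows = [
    ("1972-03-14", "AG Peg", "plate", "spectrum", "/scans/plate_0123.fits"),
    ("2006-09-02", "AG Peg", "ccd", "spectrum", "/ccd/2006/agpeg_001.fits"),
    ("2006-09-02", "M31", "ccd", "image", "/ccd/2006/m31_042.fits"),
]
conn.executemany(
    "INSERT INTO observation (obs_date, target, medium, kind, file_path) "
    "VALUES (?, ?, ?, ?, ?)", rows)

# A PHP front end would translate a user query like "all spectra of AG Peg"
# into something along these lines:
spectra = conn.execute(
    "SELECT obs_date, medium, file_path FROM observation "
    "WHERE target = ? AND kind = 'spectrum' ORDER BY obs_date",
    ("AG Peg",)).fetchall()
```

    The point of the single-table design is that a query by target returns photographic and CCD material side by side, which is exactly what such an archive interface needs.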

  19. Transportation plan repository and archive.

    DOT National Transportation Integrated Search

    2011-04-01

    This project created a repository and archive for transportation planning documents in Texas within the established Texas A&M Repository (http://digital.library.tamu.edu). This transportation planning archive and repository provides ready access ...

  20. Dynamics of drug resistance-associated mutations in HIV-1 DNA reverse transcriptase sequence during effective ART.

    PubMed

    Nouchi, A; Nguyen, T; Valantin, M A; Simon, A; Sayon, S; Agher, R; Calvez, V; Katlama, C; Marcelin, A G; Soulie, C

    2018-05-29

    To investigate the dynamics of HIV-1 variants archived in cells harbouring drug resistance-associated mutations (DRAMs) to lamivudine/emtricitabine, etravirine and rilpivirine in patients under effective ART free from selective pressure on these DRAMs, in order to assess the possibility of recycling molecules with resistance history. We studied 25 patients with at least one DRAM to lamivudine/emtricitabine, etravirine and/or rilpivirine identified on an RNA sequence in their history and with virological control for at least 5 years under a regimen excluding all drugs from the resistant class. Longitudinal ultra-deep sequencing (UDS) and Sanger sequencing of the reverse transcriptase region were performed on cell-associated HIV-1 DNA samples taken over the 5 years of follow-up. Viral variants harbouring the analysed DRAMs were no longer detected by UDS over the 5 years in 72% of patients, with viruses susceptible to the molecules of interest found after 5 years in 80% of patients with UDS and in 88% of patients with Sanger. Residual viraemia with <50 copies/mL was detected in 52% of patients. The median HIV DNA level remained stable (2.4 at baseline versus 2.1 log10 copies/106 cells 5 years later). These results show a clear trend towards clearance of archived DRAMs to reverse transcriptase inhibitors in cell-associated HIV-1 DNA after a long period of virological control, free from therapeutic selective pressure on these DRAMs, reflecting probable residual replication in some reservoirs of the fittest viruses and leading to persistent evolution of the archived HIV-1 DNA resistance profile.

  1. Archive of Digital Boomer Seismic Reflection Data Collected During USGS Field Activity 08LCA04 in Lakes Cherry, Helen, Hiawassee, Louisa, and Prevatt, Central Florida, September 2008

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Davis, Jeffrey B.; Flocks, James G.; Wiese, Dana S.

    2009-01-01

    From September 2 through 4, 2008, the U.S. Geological Survey and St. Johns River Water Management District (SJRWMD) conducted geophysical surveys in Lakes Cherry, Helen, Hiawassee, Louisa, and Prevatt, central Florida. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, GIS information, FACS logs, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
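
    SEG-Y files such as these archived trace data begin with a 3200-byte textual file header of forty 80-character "card images", traditionally stored in EBCDIC. The sketch below decodes that header from an in-memory synthetic stand-in; the card contents are invented, and cp037 is one common EBCDIC code page, not the only possibility.

```python
import codecs

def read_textual_header(buf):
    """Decode the 3200-byte SEG-Y textual file header: forty 'card
    images' of 80 characters each. Classic SEG-Y stores it in EBCDIC;
    code page cp037 is commonly used to decode it."""
    text = codecs.decode(buf[:3200], "cp037")
    return [text[i:i + 80] for i in range(0, 3200, 80)]

# Synthetic stand-in for the first 3200 bytes of a SEG-Y file
# (card text is invented for illustration).
cards = ["C{:2d} CLIENT: USGS  AREA: LAKE LOUISA".format(i + 1).ljust(80)
         for i in range(40)]
fake_segy = codecs.encode("".join(cards), "cp037")
header = read_textual_header(fake_segy)
```

    Tools like Seismic Unix, mentioned in the report, perform this same decoding step (plus binary-header and trace parsing) when ingesting SEG-Y data.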

  2. Archaeological Feature Detection from Archive Aerial Photography with a Sfm-Mvs and Image Enhancement Pipeline

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Fieber, K. D.; Haynes, I.; Turner, S.; Turner, A.; Douglas, M.; Bryan, P. G.

    2018-05-01

    Understanding and protecting cultural heritage involves the detection and long-term documentation of archaeological remains alongside the spatio-temporal analysis of their landscape evolution. Archive aerial photography can illuminate traces of ancient features which typically appear with different brightness values from their surrounding environment, but are not always well defined. This research investigates the implementation of the Structure-from-Motion - Multi-View Stereo image matching approach with an image enhancement algorithm to derive three epochs of orthomosaics and digital surface models from visible and near infrared historic aerial photography. The enhancement algorithm uses decorrelation stretching to improve the contrast of the orthomosaics so that archaeological features are better detected. Results include 2D / 3D locations of detected archaeological traces stored in a geodatabase for further archaeological interpretation and correlation with benchmark observations. The study also discusses the merits and difficulties of the process involved. This research is based on a European-wide project, entitled "Cultural Heritage Through Time", and the case study research was carried out as a component of the project in the UK.

  3. The French Astronomical Archives Alidade Project

    NASA Astrophysics Data System (ADS)

    Debarbat, S.; Bobis, L.

    2004-12-01

    The present state of Alidade, an archival project of Paris Observatory covering not only archival papers but also instruments, documents, iconography, paintings, etc., of various institutions, is described. Documents and collections, e.g. from donations or purchases, are still being integrated into the archives, and selected material is displayed in temporary exhibits at the Observatory. Modern uses of old material are briefly mentioned.

  4. From the archives of scientific diplomacy: science and the shared interests of Samuel Hartlib's London and Frederick Clodius's Gottorf.

    PubMed

    Keller, Vera; Penman, Leigh T I

    2015-03-01

    Many historians have traced the accumulation of scientific archives via communication networks. Engines for communication in early modernity have included trade, the extrapolitical Republic of Letters, religious enthusiasm, and the centralization of large emerging information states. The communication between Samuel Hartlib, John Dury, Duke Friedrich III of Gottorf-Holstein, and his key agent in England, Frederick Clodius, points to a less obvious but no less important impetus--the international negotiations of smaller states. Smaller states shaped communication networks in an international (albeit politically and religiously slanted) direction. Their networks of negotiation contributed to the internationalization of emerging science through a political and religious concept of shared interest. While interest has been central to social studies of science, interest itself has not often been historicized within the history of science. This case study demonstrates the co-production of science and society by tracing how period concepts of interest made science international.

  5. Vaginal microbial flora analysis by next generation sequencing and microarrays; can microbes indicate vaginal origin in a forensic context?

    PubMed

    Benschop, Corina C G; Quaak, Frederike C A; Boon, Mathilde E; Sijen, Titia; Kuiper, Irene

    2012-03-01

    Forensic analysis of biological traces generally encompasses the investigation of both the person who contributed to the trace and the body site(s) from which the trace originates. For instance, for sexual assault cases, it can be beneficial to distinguish vaginal samples from skin or saliva samples. In this study, we explored the use of microbial flora to indicate vaginal origin. First, we explored the vaginal microbiome for a large set of clinical vaginal samples (n = 240) by next generation sequencing (n = 338,184 sequence reads) and found 1,619 different sequences. Next, we selected 389 candidate probes targeting genera or species and designed a microarray, with which we analysed a diverse set of samples; 43 DNA extracts from vaginal samples and 25 DNA extracts from samples from other body sites, including sites in close proximity of or in contact with the vagina. Finally, we used the microarray results and next generation sequencing dataset to assess the potential for a future approach that uses microbial markers to indicate vaginal origin. Since no candidate genera/species were found to positively identify all vaginal DNA extracts on their own, while excluding all non-vaginal DNA extracts, we deduce that a reliable statement about the cellular origin of a biological trace should be based on the detection of multiple species within various genera. Microarray analysis of a sample will then render a microbial flora pattern that is probably best analysed in a probabilistic approach.
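
    The abstract's conclusion, that a statement of vaginal origin should rest on detecting multiple species across several genera rather than any single marker, can be sketched as a simple decision rule. The marker names, panel composition, and thresholds below are invented for illustration; the authors themselves argue for a probabilistic treatment rather than a hard cutoff like this.

```python
def indicates_vaginal_origin(detected, panel, min_genera=3, min_species=4):
    """Toy decision rule in the spirit of the abstract's conclusion:
    require hits on multiple species spread over several genera before
    calling a trace 'vaginal'. `panel` maps genus -> set of marker
    species probed on the array; names and thresholds are invented."""
    hits_by_genus = {g: species & detected for g, species in panel.items()}
    genera_hit = sum(1 for s in hits_by_genus.values() if s)
    species_hit = sum(len(s) for s in hits_by_genus.values())
    return genera_hit >= min_genera and species_hit >= min_species

# Hypothetical microarray panel and one sample's detections.
panel = {
    "Lactobacillus": {"L. crispatus", "L. iners", "L. jensenii"},
    "Gardnerella": {"G. vaginalis"},
    "Atopobium": {"A. vaginae"},
}
sample = {"L. crispatus", "L. iners", "G. vaginalis", "A. vaginae"}
```

    A single-marker hit fails this rule by design, mirroring the paper's finding that no genus or species on its own separated vaginal from non-vaginal extracts.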

  6. A Vision of Archival Education at the Millennium.

    ERIC Educational Resources Information Center

    Tibbo, Helen R.

    1997-01-01

    Issues critical to the development of an archival education degree program are discussed, including number of credit hours and courses. Archival educators continue to revise the Society of American Archivists (SAA) Master's of Archival Studies (M.A.S.) guidelines as higher education and the world change. Archival educators must cooperate with…

  7. Tracing the Evolutionary History of the CAP Superfamily of Proteins Using Amino Acid Sequence Homology and Conservation of Splice Sites.

    PubMed

    Abraham, Anup; Chandler, Douglas E

    2017-10-01

    Proteins of the CAP superfamily play numerous roles in reproduction, innate immune responses, cancer biology, and venom toxicology. Here we document the breadth of the CAP (Cysteine-Rich Secretory Protein (CRISP), Antigen 5, and Pathogenesis-Related) protein superfamily and trace the major events in its evolution using amino acid sequence homology and the positions of exon/intron borders within their genes. Seldom acknowledged in the literature, we find that many of the CAP subfamilies present in mammals, where they were originally characterized, have distinct homologues in the invertebrate phyla. Early eukaryotic CAP genes contained only one exon inherited from prokaryotic predecessors, and as evolution progressed an increasing number of introns were inserted, reaching 2-5 in the invertebrate world and 5-15 in the vertebrate world. Focusing on the CRISP subfamily, we propose that these proteins evolved in three major steps: (1) origination of the CAP/PR/SCP domain in bacteria, (2) addition of a small Hinge domain to produce the two-domain SCP-like proteins found in roundworms and arthropods, and (3) addition of an Ion Channel Regulatory domain, borrowed from invertebrate peptide toxins, to produce full length, three-domain CRISP proteins, first seen in insects and later to diversify into multiple subtypes in the vertebrate world.

  8. ASP archiving solution of regional HUSpacs.

    PubMed

    Pohjonen, Hanna; Kauppinen, Tomi; Ahovuo, Juhani

    2004-09-01

    The application service provider (ASP) model is not novel, but widely used in several non-health care-related business areas. In this article, ASP is described as a potential solution for long-term and back-up archiving of the picture archiving and communication system (PACS) of the Hospital District of Helsinki and Uusimaa (HUS). HUSpacs is a regional PACS for 21 HUS hospitals serving altogether 1.4 million citizens. The ultimate goal of this study was to define the specifications for the ASP archiving service and to compare different commercial options for archiving solutions (costs derived by unofficial requests for proposal): in-house PACS components, the regional ASP concept and the hospital-based ASP concept. In conclusion, the large scale of the HUS installation enables a cost-effective regional ASP archiving, resulting in a four to five times more economical solution than hospital-based ASP.

  9. STScI Archive Manual, Version 7.0

    NASA Astrophysics Data System (ADS)

    Padovani, Paolo

    1999-06-01

    The STScI Archive Manual provides the information a user needs to know to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It provides descriptions of the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. Using StarView one can also find calibration reference files and perform detailed association searches. With the WWW interface archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets which include, amongst others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver which simplifies searches based on target name.

  10. The ``One Archive'' for JWST

    NASA Astrophysics Data System (ADS)

    Greene, G.; Kyprianou, M.; Levay, K.; Sienkewicz, M.; Donaldson, T.; Dower, T.; Swam, M.; Bushouse, H.; Greenfield, P.; Kidwell, R.; Wolfe, D.; Gardner, L.; Nieto-Santisteban, M.; Swade, D.; McLean, B.; Abney, F.; Alexov, A.; Binegar, S.; Aloisi, A.; Slowinski, S.; Gousoulin, J.

    2015-09-01

    The next generation for the Space Telescope Science Institute data management system is gearing up to provide a suite of archive system services supporting the operation of the James Webb Space Telescope. We are now completing the initial stage of integration and testing for the preliminary ground system builds of the JWST Science Operations Center which includes multiple components of the Data Management Subsystem (DMS). The vision for astronomical science and research with the JWST archive introduces both solutions to formal mission requirements and innovation derived from our existing mission systems along with the collective shared experience of our global user community. We are building upon the success of the Hubble Space Telescope archive systems, standards developed by the International Virtual Observatory Alliance, and collaborations with our archive data center partners. In proceeding forward, the “one archive” architectural model presented here is designed to balance the objectives for this new and exciting mission. The STScI JWST archive will deliver high quality calibrated science data products, support multi-mission data discovery and analysis, and provide an infrastructure which supports bridges to highly valued community tools and services.

  11. The Hubble Spectroscopic Legacy Archive

    NASA Astrophysics Data System (ADS)

    Peeples, Molly S.; Tumlinson, Jason; Fox, Andrew; Aloisi, Alessandra; Ayres, Thomas R.; Danforth, Charles; Fleming, Scott W.; Jenkins, Edward B.; Jedrzejewski, Robert I.; Keeney, Brian A.; Oliveira, Cristina M.

    2016-01-01

    With no future space ultraviolet instruments currently planned, the data from the UV spectrographs aboard the Hubble Space Telescope have a legacy value beyond their initial science goals. The Hubble Spectroscopic Legacy Archive will provide to the community new science-grade combined spectra for all publicly available data obtained by the Cosmic Origins Spectrograph (COS) and the Space Telescope Imaging Spectrograph (STIS). These data will be packaged into "smart archives" according to target type and scientific themes to facilitate the construction of archival samples for common science uses. A new "quick look" capability will make the data easy for users to quickly access, assess the quality of, and download for archival science starting in Cycle 24, with the first generation of these products for the FUV modes of COS available online via MAST in early 2016.

  12. Data archive for NO(y) from observations and construction and testing of airborne instrument for simultaneous measurement of NO, NO2, NO(y), and O3

    NASA Technical Reports Server (NTRS)

    Carroll, Mary Anne; Emmons, Louisa

    1995-01-01

    The compilation and archiving of NO(x) and NO(y) measurements began in mid-March 1994. Since the submission of the first report, data summaries have been obtained for the TROPOZ 2, STRATOZ 3, OCTA and TOR/Schauinsland campaigns, and the full data sets will become a part of this archive in the near future. Climatologies of NO(x) and NO(y) have been developed from these and previously archived data sets, including the available GTE campaigns (ABLE-2A, B, -3A, B, CITE-2, -3, TRACE-A, PEM WEST-A) and AASE 1 and 2. The data have been grouped by season and altitude (boundary layer and 3 km ranges in the free troposphere). Maps showing median values of midday NO, NO(x) and NO(y) have been produced for each season for the boundary layer and 3 km ranges of the free troposphere. The statistics of the data (median, mean, and standard deviation, central 67% and 90%) have also been determined, and are shown in representative figures included in this report.

  13. Persistence of Mycoplasma hyopneumoniae sequence types in spite of a control program for enzootic pneumonia in pigs.

    PubMed

    Overesch, Gudrun; Kuhnert, Peter

    2017-09-15

    Enzootic pneumonia (EP) in pigs caused by Mycoplasma (M.) hyopneumoniae has successfully been combatted in Switzerland. A control program, based on total depopulation of affected fattening farms and partial depopulation of breeding farms, was fully implemented in 2004. Thereby, the number of cases dropped drastically from more than 200 in 2003 to two cases in 2013. Currently, monitoring is based on clinical observation and subsequent diagnostics of coughing pigs. Moreover, in the case of more than 10% gross pathological lesions per slaughter batch, laboratory confirmation of EP is compulsory. Despite these strict measures it was not possible to eliminate M. hyopneumoniae from Swiss pig production. In fact, during the last few years the number of EP cases has slightly increased. Therefore, genotyping of the involved M. hyopneumoniae strains was conducted in order to elucidate possible sources and routes of infection. All available and typeable samples from a total of 22 cases during the period 2014-2016 were investigated by extended multilocus sequence typing (MLST). A total of 16 cases, including eight from 2014, five from 2015 and three from 2016, could thereby be included in the study. MLST revealed that the majority of cases in 2014/2015 were due to two major spread scenarios, i.e. two M. hyopneumoniae sequence types, each scenario involving six individual production farms in five to six different Cantons (states). Moreover, by comparison with archived sequences, some sequence types were observed over ten years, demonstrating their persistence over a long time and the possible partial failure of elimination measures in Switzerland. Insufficient sanitation on affected farms and subsequent transport of symptomless infected pigs could lead to recurrent cases. Wild boar harbor strains identical to those found with EP, but solid data to assign this wild animal a role as reservoir are missing. Implementing a monitoring scheme for M
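
    MLST, as used above, reduces each isolate to a profile of allele numbers at a fixed set of loci; two isolates with the same profile share a sequence type (ST), which is how persistence across years is recognized. The loci, allele numbers, and ST labels below are invented; the real extended MLST scheme for M. hyopneumoniae defines its own loci.

```python
def assign_sequence_type(allele_profile, known_types):
    """Return the sequence type (ST) matching a locus -> allele-number
    profile, or None for a previously unseen combination. Loci, allele
    numbers, and ST labels here are invented for illustration."""
    key = tuple(sorted(allele_profile.items()))
    return known_types.get(key)

# Hypothetical ST definitions (profile -> ST label).
known_types = {
    tuple(sorted({"locusA": 1, "locusB": 4, "locusC": 2}.items())): "ST-10",
    tuple(sorted({"locusA": 1, "locusB": 5, "locusC": 2}.items())): "ST-11",
}
isolate_2014 = {"locusA": 1, "locusB": 4, "locusC": 2}
isolate_2016 = {"locusA": 1, "locusB": 4, "locusC": 2}
# Identical profiles years apart -> same ST, i.e. a persistent strain.
same_strain = (assign_sequence_type(isolate_2014, known_types)
               == assign_sequence_type(isolate_2016, known_types))
```

    Comparing new isolates against archived profiles in exactly this way is what let the study link 2014-2016 cases to sequence types seen a decade earlier.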

  14. In-depth investigation of archival and prospectively collected samples reveals no evidence for XMRV infection in prostate cancer.

    PubMed

    Lee, Deanna; Das Gupta, Jaydip; Gaughan, Christina; Steffen, Imke; Tang, Ning; Luk, Ka-Cheung; Qiu, Xiaoxing; Urisman, Anatoly; Fischer, Nicole; Molinaro, Ross; Broz, Miranda; Schochetman, Gerald; Klein, Eric A; Ganem, Don; Derisi, Joseph L; Simmons, Graham; Hackett, John; Silverman, Robert H; Chiu, Charles Y

    2012-01-01

    XMRV, or xenotropic murine leukemia virus (MLV)-related virus, is a novel gammaretrovirus originally identified in studies that analyzed tissue from prostate cancer patients in 2006 and blood from patients with chronic fatigue syndrome (CFS) in 2009. However, a large number of subsequent studies failed to confirm a link between XMRV infection and CFS or prostate cancer. On the contrary, recent evidence indicates that XMRV is a contaminant originating from the recombination of two mouse endogenous retroviruses during passaging of a prostate tumor xenograft (CWR22) in mice, generating laboratory-derived cell lines that are XMRV-infected. To confirm or refute an association between XMRV and prostate cancer, we analyzed prostate cancer tissues and plasma from a prospectively collected cohort of 39 patients as well as archival RNA and prostate tissue from the original 2006 study. Despite comprehensive microarray, PCR, FISH, and serological testing, XMRV was not detected in any of the newly collected samples or in archival tissue, although archival RNA remained XMRV-positive. Notably, archival VP62 prostate tissue, from which the prototype XMRV strain was derived, tested negative for XMRV on re-analysis. Analysis of viral genomic and human mitochondrial sequences revealed that all previously characterized XMRV strains are identical and that the archival RNA had been contaminated by an XMRV-infected laboratory cell line. These findings reveal no association between XMRV and prostate cancer, and underscore the conclusion that XMRV is not a naturally acquired human infection.

  15. Autonomous Real Time Requirements Tracing

    NASA Technical Reports Server (NTRS)

    Plattsmier, George I.; Stetson, Howard K.

    2014-01-01

    One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX(sup TM) Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739) and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX(sup TM) Language for development of autonomous command and control software. The Timeliner-TLX(sup TM) system has the unique feature of providing the current line of the statement in execution during real-time execution of the software. This execution line number reporting unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX(sup TM) sequence, as the line number reporting is embedded inside the Timeliner-TLX(sup TM) execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference.
This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are being met during real-time execution of the

  16. Autonomous Real Time Requirements Tracing

    NASA Technical Reports Server (NTRS)

    Plattsmier, George; Stetson, Howard

    2014-01-01

    One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX(sup TM) Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739) and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX(sup TM) Language for development of autonomous command and control software. The Timeliner-TLX(sup TM) system has the unique feature of providing the current line of the statement in execution during real-time execution of the software. This execution line number reporting unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX(sup TM) sequence, as the line number reporting is embedded inside the Timeliner-TLX(sup TM) execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference.
This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are being met during real-time execution of the
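
    The core idea, watching which line of a sequence is executing and logging a requirement when a registered line runs, has a close analogue in Python's tracing hook. The sketch below maps executed line numbers to requirement IDs; the function, the mapping, and the SRS IDs are invented stand-ins, not the Timeliner-TLX or AFTS implementation.

```python
import sys

def make_tracer(requirement_map, log):
    """Return a trace function that logs a requirement ID the first time
    the interpreter executes a line registered in requirement_map
    (line number -> requirement ID; the mapping here is invented)."""
    def tracer(frame, event, arg):
        if event == "line":
            req = requirement_map.get(frame.f_lineno)
            if req is not None and req not in log:
                log.append(req)
        return tracer  # keep tracing subsequent events in this frame
    return tracer

def transfer_fluid():
    valve_open = True      # suppose this line satisfies SRS-101
    flow_started = True    # ...and this one SRS-102
    return valve_open and flow_started

# Register requirement IDs against the function's own line numbers.
base = transfer_fluid.__code__.co_firstlineno
requirement_map = {base + 1: "SRS-101", base + 2: "SRS-102"}
log = []
sys.settrace(make_tracer(requirement_map, log))
transfer_fluid()
sys.settrace(None)
```

    As in the Timeliner-TLX design, the line-number reporting lives inside the execution engine (here, the interpreter), so the monitored procedure needs no instrumentation of its own.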

  17. Expansion of the On-line Archive "Statistically Downscaled WCRP CMIP3 Climate Projections"

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Pruitt, T.; Maurer, E. P.; Das, T.; Duffy, P.; White, K.

    2009-12-01

    response, archive developers are adding content in 2010, teaming with Scripps Institution of Oceanography (through their NOAA-RISA California-Nevada Applications Program and the California Climate Change Center) to apply a new daily downscaling technique to a sub-ensemble of the archive’s CMIP3 projections. The new technique, Bias-Corrected Constructed Analogs, combines the BC part of BCSD with a recently developed technique that preserves the daily sequencing structure of CMIP3 projections (Constructed Analogs, or CA). Such data will more easily serve hydrologic and ecological impacts assessments, and offer an opportunity to evaluate projection uncertainty associated with downscaling technique. Looking ahead to the arrival of CMIP5 projections, archive collaborators plan to apply both BCSD and BCCA over the contiguous U.S., consistent with the CMIP3 applications above, and also to apply BCSD globally at a 0.5 degree spatial resolution. The latter effort involves collaboration with the U.S. Army Corps of Engineers (USACE) and Climate Central.
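
    The "BC" step shared by BCSD and BCCA is empirical quantile mapping: each model value is ranked within the model's historical distribution and replaced by the observed value at the same quantile. The sketch below shows that core idea on synthetic data, with simplifications a real implementation would not make (no seasonal windows, no care with out-of-range extrapolation).

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile-mapping bias correction (simplified sketch).
    Each future model value is ranked within the historical model
    distribution, then mapped to the observed value at that quantile."""
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

# Synthetic example: a model with a uniform +3 degree warm bias.
rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, 1000)      # "observed" historical climate
model = obs + 3.0                      # model runs 3 degrees too warm
future = np.array([13.0, 15.0, 17.0])  # biased future projections
corrected = quantile_map(model, obs, future)
```

    For this constructed bias the corrected values come back close to `future - 3`; with real data the correction varies across the distribution, which is precisely why quantile mapping is preferred to subtracting a single mean bias.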

  18. HEASARC Software Archive

    NASA Technical Reports Server (NTRS)

    White, Nicholas (Technical Monitor); Murray, Stephen S.

    2003-01-01

    (1) Chandra Archive: SAO has maintained the interfaces through which HEASARC gains access to the Chandra Data Archive. At HEASARC's request, we have implemented an anonymous ftp copy of a major part of the public archive and we keep that archive up-to-date. SAO has participated in the ADEC interoperability working group, establishing guidelines for interoperability standards and prototyping such interfaces. We have provided an NVO-based prototype interface, intended to serve the HEASARC-led NVO demo project. HEASARC's Astrobrowse interface was maintained and updated. In addition, we have participated in design discussions surrounding HEASARC's Caldb project. We have attended the HEASARC Users Group meeting and presented CDA status and developments. (2) Chandra CALDB: SAO has maintained and expanded the Chandra CALDB by including four new data file types, defining the corresponding CALDB keyword/identification structures. We have provided CALDB upgrades for the public (CIAO) and for Standard Data Processing. Approximately 40 new files have been added to the CALDB in these version releases. In the past year there have been ten of these CALDB upgrades, each with unique index configurations. In addition, with the inputs from software, archive, and calibration scientists, as well as CIAO/SDP software developers, we have defined a generalized expansion of the existing CALDB interface and indexing structure. The purpose of this is to make the CALDB more generally applicable and useful in new and future missions that will be supported archivally by HEASARC. The generalized interface will identify additional configurational keywords and permit more extensive calibration parameter and boundary condition specifications for unique file selection. HEASARC scientists and developers from SAO and GSFC have become involved in this work, which is expected to produce a new interface for general use within the current year. (3) DS9: One of the decisions that came from last year

  19. Archiving Microgravity Flight Data and Samples

    NASA Technical Reports Server (NTRS)

    1996-01-01

    To obtain help in evaluating its current strategy for archiving data and samples obtained in microgravity research, NASA's Microgravity Science and Applications Division (MSAD) asked the Space Studies Board's Committee on Microgravity Research for guidance on the following questions: What data should be archived and where should it be kept? In what form should the data be maintained (electronic files, photographs, hard copy, samples)? What should the general format of the database be? To what extent should it be universally accessible and through what mechanisms? Should there be a period of time for which principal investigators have proprietary access? If so, how long should proprietary data be stored? What provisions should be made for data obtained from ground-based experiments? What should the deadline be for investigators placing their data in the archive? How long should data be saved? How long should data be easily accessible? As a prelude to making recommendations for optimum selection and storage of microgravity data and samples, the committee in this report briefly describes NASA's past archiving practices and outlines MSAD's current archiving strategy. Although the committee found that only a limited number of experiments have thus far been archived, it concluded that the general archiving strategy, characterized by MSAD as minimalist, appears viable. A central focus of attention is the Experiment Data Management Plan (EDMP), MSAD's recently instituted data management and archiving framework for flight experiments. Many of the report's recommendations are aimed at enhancing the effectiveness of the EDMP approach, which the committee regards as an appropriate data management method for MSAD. Other recommendations provide guidance on broader issues related to the questions listed above. This report does not address statutory or regulatory records retention requirements.

  20. 36 CFR 1253.1 - National Archives Building.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National Archives Building... PUBLIC AVAILABILITY AND USE LOCATION OF RECORDS AND HOURS OF USE § 1253.1 National Archives Building. (a) The National Archives Building is located at 700 Pennsylvania Avenue, NW., Washington, DC 20408...

  1. A Generic Archive Protocol and an Implementation

    NASA Astrophysics Data System (ADS)

    Jordan, J. M.; Jennings, D. G.; McGlynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.

    1993-01-01

    Archiving vast amounts of data has become a major part of every scientific space mission today. GRASP, the Generic Retrieval/Archive Services Protocol, addresses the question of how to archive the data collected in an environment where the underlying hardware archives and computer hosts may be rapidly changing.
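    The core of a generic archive protocol like the one this abstract describes is decoupling client code from the storage hardware behind a stable interface, so backends can change without touching clients. A minimal sketch of that idea (hypothetical names; not GRASP's actual API):

```python
from abc import ABC, abstractmethod

class ArchiveBackend(ABC):
    """Stable protocol clients program against; backends may change freely."""
    @abstractmethod
    def store(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def retrieve(self, key: str) -> bytes: ...

class InMemoryBackend(ArchiveBackend):
    """Stand-in for whatever hardware archive is current."""
    def __init__(self):
        self._blobs = {}
    def store(self, key, data):
        self._blobs[key] = data
    def retrieve(self, key):
        return self._blobs[key]

def archive_dataset(backend: ArchiveBackend, key: str, data: bytes) -> bytes:
    # Client code sees only the protocol, never the storage hardware.
    backend.store(key, data)
    return backend.retrieve(key)
```

    Swapping tape, optical, or cloud storage in for `InMemoryBackend` would leave `archive_dataset` unchanged, which is the point of a generic protocol.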

  2. A multi-archive coherent chronology: from Greenland to the Mediterranean sea

    NASA Astrophysics Data System (ADS)

    Bazin, Lucie; Landais, Amaelle; Lemieux-Dudon, Bénédicte; Siani, Giuseppe; Michel, Elisabeth; Combourieu-Nebout, Nathalie; Blamart, Dominique; Genty, Dominique

    2015-04-01

    Understanding climate mechanisms requires precise knowledge of the sequence of events during major climate changes. In order to establish precise relationships between changes in orbital and/or greenhouse gas forcing, sea level changes, and high- versus low-latitude temperatures, a common chronological framework for different paleoclimatic archives is required. Coherent chronologies for ice cores have recently been produced using a Bayesian dating tool, DATICE (Lemieux-Dudon et al., 2010; Bazin et al., 2013; Veres et al., 2013). This tool has recently been extended to include marine cores and speleothems in addition to ice cores. The new development should make it possible to test the coherency of different chronologies using absolute and stratigraphic links, as well as to establish relationships between climatic changes recorded in different archives. We present here a first application of multi-archive coherent dating including paleoclimatic archives from (1) Greenland (NGRIP ice core), (2) the Mediterranean Sea (marine core MD90-917, 41°N, 17°E, 1010 m) and (3) speleothems from the south of France and northern Tunisia (Chauvet, Villars and La Mine speleothems; Genty et al., 2006). Thanks to the good absolute chronological constraints from annual layer counting in NGRIP, 14C and tephra layers in MD90-917, and U-Th dating in speleothems, we can provide a precise chronological framework for the last 50 ka (i.e. thousand years before present). We then present different tests on how to combine the records from the different archives and give the most plausible scenario for the sequence of events at different latitudes over the last deglaciation. Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Kele, H. T. M.; Veres, D.; Parrenin, F.; Martinerie, P.; Ritz, C.; Capron, E.; Lipenkov, V.; Loutre, M.-F.; Raynaud, D.; Vinther, B.; Svensson, A.; Rasmussen, S.; Severi, M.; Blunier, T.; Leuenberger, M.; Fischer, H.; Masson-Delmotte, V.; Chappellaz, J

  3. Tracing Males From Different Continents by Genotyping JC Polyomavirus in DNA From Semen Samples.

    PubMed

    Rotondo, John Charles; Candian, Tommaso; Selvatici, Rita; Mazzoni, Elisa; Bonaccorsi, Gloria; Greco, Pantaleo; Tognon, Mauro; Martini, Fernanda

    2017-05-01

    The human JC polyomavirus (JCPyV) is a ubiquitous viral agent infecting approximately 60% of humans. Recently, JCPyV sequences have been detected in semen samples. The aim of this investigation was to test whether semen JCPyV genotyping can be employed to trace the continent of origin of males. Semen DNA samples (n = 170) from males of different continents were investigated by PCR for the polymorphic JCPyV viral capsid protein 1 (VP1) sequences, followed by DNA sequencing. JCPyV sequences were detected with an overall prevalence of 27.6% (47/170). DNA sequencing revealed that European males carried JCPyV types 1A (71.4%), 4 (11.4%), 2B (2.9%), 2D1 (2.9%), and 3A (2.9%); Asian males carried JCPyV type 2D1 (66.7%); and African males carried JCPyV types 3A (33.3%) and 1A (33.3%). In 10.6% of males, two different JCPyV genotypes were detected, suggesting that the second JCPyV genotype was acquired in the destination country. This study indicates that the majority of semen samples found to be JCPyV-positive were infected with the JCPyV genotype found in the geographic area of male origin. Therefore, semen JCPyV genotyping could be employed to trace the continent of origin of males. Our findings could be applied to forensic investigations, for instance in cases of sexual crime. Indeed, JCPyV genotyping should enable investigators to build an additional, detailed profile of the offender. J. Cell. Physiol. 232: 982-985, 2017. © 2016 Wiley Periodicals, Inc.

  4. Is there still a TRACE of trace?

    NASA Astrophysics Data System (ADS)

    McClelland, James; Mirman, Daniel; Holt, Lori

    2003-04-01

    According to the TRACE model [McClelland and Elman, Cogn. Psychol. 18, 1-86 (1986)], speech recognition is an interactive activation process involving the integrated use of top-down (lexical) and bottom-up (acoustic) information. Although it is widely accepted that there are lexical influences on speech perception, there has been disagreement over their exact nature. Two contested predictions of TRACE are that (a) lexical influences should delay or inhibit recognition of phonemes not consistent with lexical information and (b) a lexical influence on the identification of one phoneme can trigger compensation for co-articulation, affecting the identification of other phonemes. Others [Norris, McQueen, and Cutler, BBS 23, 299-370 (2000)] have argued that the predicted effects do not occur, taking this to support an alternative to the TRACE model in which lexical influences do not affect perception, but only a post-perceptual identification process. We re-examine the evidence on these points along with the recent finding that lexical information may lead to a lasting adjustment of category boundaries [McQueen, Norris, and Cutler, Psychonomics Abstract 255 (2001)]. Our analysis indicates that the existing evidence is completely consistent with TRACE, and we suggest additional research that will be necessary to resolve unanswered questions.

  5. The Preservation of Paper Collections in Archives.

    ERIC Educational Resources Information Center

    Adams, Cynthia Ann

    The preservation methods used for paper collections in archives were studied through a survey of archives in the metropolitan Atlanta (Georgia) area. The preservation policy or program was studied, and the implications for conservators and preservation officers were noted. Twelve of 15 archives responded (response rate of 80 percent). Basic…

  6. Records & Information Management Services | Alaska State Archives

    Science.gov Websites

    Records and Information Management Services (RIMS), a program of the Alaska State Archives within Libraries, Archives, & Museums, provides records and information management for the State of Alaska. The site also covers collections, imaging services (IMS), ASHRAB, researcher resources, and frequently asked questions.

  7. Picture archiving and communication in radiology.

    PubMed

    Napoli, Marzia; Nanni, Marinella; Cimarra, Stefania; Crisafulli, Letizia; Campioni, Paolo; Marano, Pasquale

    2003-01-01

    After more than 80 years of exclusive archiving of radiologic films, digital archiving is now increasingly gaining ground in radiology. Digital archiving allows a considerable reduction in costs and space; most importantly, it makes immediate or remote consultation of all examinations and reports feasible in the hospital's clinical wards. The RIS system, in this case, is the starting point of the electronic archiving process, which however is the task of the PACS. The latter can be used as a legally valid radiologic archive provided that it conforms to certain specifications, such as the use of long-term optical storage media or an electronic audit trail of changes. The PACS archives, in a hierarchical system, all digital images produced by each diagnostic imaging modality. Images and patient data can be retrieved and used for consultation or remote consultation by the reporting radiologist, who requires images and reports of previous radiologic examinations, or by the referring physician on the ward. Owing to their WEB servers, modern PACS allow greatly simplified remote access to images and data while still ensuring the required regulatory safeguards and access protections. Since the PACS enables simpler data communication within the hospital, security and patient privacy must be protected. A secure and reliable PACS should be able to minimize the risk of accidental data destruction, and should prevent unauthorized access to the archive with security measures adequate to current knowledge and technological advances. Archiving the data produced by modern digital imaging is a problem now present even in small radiology services. The technology can readily solve problems that were extremely complex until a few years ago, such as the connection between equipment and archiving system, owing also to the universal adoption of the DICOM 3.0 standard. The evolution of communication networks and the use of standard protocols such as TCP/IP can minimize

  8. The Design of Archives Buildings.

    ERIC Educational Resources Information Center

    Faye, Bernard

    1982-01-01

    Studies specific problems arising from design of archives buildings and examines three main purposes of this type of building, namely conservation, classification and restoration of archives, and the provision of access to them by administrators and research workers. Three references are listed. (Author/EJS)

  9. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    Though they are many, one of the main inhibitors to the widespread diffusion of PACS systems has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. Price and performance comparisons are made at different archive capacities, and the effect of file size on storage system throughput is analyzed. The concept of automated migration of images from high-performance, high-cost storage devices to high-capacity, low-cost storage devices is introduced as a viable way to minimize overall storage costs for an archive. The concept of access density is also introduced and applied to the selection of the most cost-effective archive solution.
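    The automated-migration idea described above can be sketched as a simple policy that demotes rarely accessed studies to a cheaper tier. The 90-day cutoff, tier names, and record fields below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class StudyRecord:
    study_id: str
    days_since_access: int
    tier: str = "disk"   # high-performance, high-cost tier

def migrate_cold_studies(records, threshold_days=90):
    """Demote studies not accessed recently to the high-capacity,
    low-cost tier; return the IDs that were moved."""
    moved = []
    for r in records:
        if r.tier == "disk" and r.days_since_access > threshold_days:
            r.tier = "tape"   # cheap Nearline storage
            moved.append(r.study_id)
    return moved
```

    A production policy would weigh access density (retrievals per GB per unit time) rather than a single age cutoff, but the structure is the same.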

  10. The bark of the branches of holm oak (Quercus ilex L.) for a retrospective study of trace elements in the atmosphere.

    PubMed

    Drava, Giuliana; Brignole, Daniele; Giordani, Paolo; Minganti, Vincenzo

    2017-04-01

    Tree bark has proved to be a useful bioindicator for trace elements in the atmosphere; however, it reflects exposure over an unidentified period of time, so while it provides spatial information about the distribution of contaminants in a given area, it cannot be used to detect the temporal changes or trends that are an important goal of environmental studies. To obtain information about a known period of time, the bark collected from the annual segments of tree branches can be used, allowing analyses going back 10-15 years with annual resolution. In the present study, the concentrations of As, Cd, Co, Cu, Fe, Mn, Ni, Pb, V and Zn were measured by atomic emission spectrometry in a series of samples covering the period from 2001 to 2013 in an urban environment. Downward time trends were significant for Cd, Pb and Zn. The only trace element showing an upward time trend was V. The concentrations of the remaining six trace elements were constant over time, showing that their presence in bark is not simply proportional to the duration of exposure. This approach, which is simple, reliable and widely applicable at low cost, allows the "a posteriori" reconstruction of atmospheric trace element deposition when or where no monitoring programme is in progress and no other natural archives are available. Copyright © 2017 Elsevier Inc. All rights reserved.
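    The downward and upward time trends mentioned above can be screened, in the simplest case, with an ordinary least-squares slope over the annual series. The abstract does not specify the paper's actual trend test, and the declining Pb-like series below is synthetic:

```python
def ols_slope(years, conc):
    """Least-squares slope of concentration vs. year
    (concentration units per year)."""
    n = len(years)
    my = sum(years) / n
    mc = sum(conc) / n
    num = sum((y - my) * (c - mc) for y, c in zip(years, conc))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(2001, 2014))                    # annual bark segments
pb = [20.0 - 0.8 * (y - 2001) for y in years]      # synthetic declining series
```

    A negative slope flags a downward trend (as reported for Cd, Pb and Zn), a positive one an upward trend (as for V); significance testing would then be applied to the slope.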

  11. Building an archives in a medical library.

    PubMed Central

    Sammis, S K

    1984-01-01

    In 1979 the University of Medicine and Dentistry of New Jersey established an archives to collect, preserve, and retrieve important documentation related to its history. This paper examines various steps in building an archives and the development of a coherent collection policy, including potential sources for archival material. Problems and possible solutions concerning what to preserve from the vast quantities of material generated by an institution are considered. The relationship between the archives and the medical library and the requirements of the physical plant are discussed, including the storage and preservation of materials. PMID:6743876

  12. Efficient use of single molecule time traces to resolve kinetic rates, models and uncertainties

    NASA Astrophysics Data System (ADS)

    Schmid, Sonja; Hugel, Thorsten

    2018-03-01

    Single molecule time traces reveal the time evolution of unsynchronized kinetic systems. Single molecule Förster resonance energy transfer (smFRET) in particular provides access to enzymatically important time scales, combined with molecular distance resolution and minimal interference with the sample. Yet the kinetic analysis of smFRET time traces is complicated by experimental shortcomings such as photo-bleaching and noise. Here we recapitulate the fundamental limits of single molecule fluorescence that render the classic, dwell-time based kinetic analysis unsuitable. In contrast, our Single Molecule Analysis of Complex Kinetic Sequences (SMACKS) considers every data point and combines the information of many short traces in one global kinetic rate model. We demonstrate the potential of SMACKS by resolving the small kinetic effects caused by different ionic strengths in the chaperone protein Hsp90. These results show an unexpected interrelation between conformational dynamics and ATPase activity in Hsp90.
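    One way photobleaching undermines classic dwell-time analysis, as the abstract notes, is that it truncates traces and censors long dwells. A minimal illustration (this is standard censored-data estimation, not SMACKS itself, and all numbers are synthetic): averaging only completed dwells overestimates the rate, while counting events over total observed time does not.

```python
import random

random.seed(42)
TRUE_RATE = 1.0    # transitions per second (assumed for the simulation)
TRACE_END = 0.5    # photobleaching cuts every trace after 0.5 s

dwells = [random.expovariate(TRUE_RATE) for _ in range(5000)]
observed = [min(d, TRACE_END) for d in dwells]       # what is actually seen
complete = [d for d in dwells if d <= TRACE_END]     # dwells that ended in time

# Naive dwell-time analysis: average only the completed dwells.
# Long dwells are censored away, so the mean dwell is too short
# and the inferred rate far too high.
k_naive = 1.0 / (sum(complete) / len(complete))

# Censoring-aware estimate: observed events / total observed time.
k_censored = len(complete) / sum(observed)
```

    With these settings `k_naive` comes out several-fold above the true rate while `k_censored` stays close to it, which is why an analysis that uses every data point across many short traces is preferable.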

  13. Case-Based Plan Recognition Using Action Sequence Graphs

    DTIC Science & Technology

    2014-10-01

    resized as necessary. Similarly, trace-based reasoning (Zarka et al., 2013) and episode-based reasoning (Sánchez-Marré, 2005) store fixed-length...is a goal state of Π, where satisfies has the same semantics as originally laid out in Ghallab, Nau & Traverso (2004). Action 0 is ...Although there are syntactic similarities between planning encoding graphs and action sequence graphs, important semantic differences exist because the

  14. Functional region prediction with a set of appropriate homologous sequences-an index for sequence selection by integrating structure and sequence information with spatial statistics

    PubMed Central

    2012-01-01

    Background The detection of conserved residue clusters on a protein structure is one of the effective strategies for the prediction of functional protein regions. Various methods, such as Evolutionary Trace, have been developed based on this strategy. In such approaches, the conserved residues are identified through comparisons of homologous amino acid sequences. Therefore, the selection of homologous sequences is a critical step. It is empirically known that a certain degree of sequence divergence in the set of homologous sequences is required for the identification of conserved residues. However, the development of a method to select homologous sequences appropriate for the identification of conserved residues has not been sufficiently addressed. An objective and general method to select appropriate homologous sequences is desired for the efficient prediction of functional regions. Results We have developed a novel index to select the sequences appropriate for the identification of conserved residues, and implemented the index within our method to predict the functional regions of a protein. The implementation of the index improved the performance of the functional region prediction. The index represents the degree of conserved residue clustering on the tertiary structure of the protein. For this purpose, the structure and sequence information were integrated within the index by the application of spatial statistics. Spatial statistics is a field of statistics in which not only the attributes but also the geometrical coordinates of the data are considered simultaneously. Higher degrees of clustering generate larger index scores. We adopted the set of homologous sequences with the highest index score, under the assumption that the best prediction accuracy is obtained when the degree of clustering is the maximum. The set of sequences selected by the index led to higher functional region prediction performance than the sets of sequences selected by other sequence
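    A simple stand-in for the kind of spatial-statistics index described above, assuming nothing about the authors' exact formula, is Moran's I over conservation scores with inverse-distance weights: higher values mean high-scoring (conserved) residues cluster together on the structure.

```python
def morans_i(coords, scores):
    """Moran's I with inverse-distance weights w_ij = 1/d_ij.
    Positive when similar scores sit close together in space."""
    n = len(scores)
    mean = sum(scores) / n
    dev = [s - mean for s in scores]
    num = wsum = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
            w = 1.0 / d
            num += w * dev[i] * dev[j]
            wsum += w
    den = sum(e * e for e in dev)
    return (n / wsum) * (num / den)

# Ten toy "residues" along a line: a conserved block vs. an
# alternating pattern of the same composition.
line = [(float(i), 0.0, 0.0) for i in range(10)]
clustered = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
scattered = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

    Selecting the homologous-sequence set that maximizes such a clustering index is the spirit of the method: only at an appropriate level of sequence divergence do conserved residues form tight spatial clusters.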

  15. Archives of Memory and Memories of Archive: CMS Women's Letters and Diaries 1823-35

    ERIC Educational Resources Information Center

    Fitzgerald, Tanya

    2005-01-01

    Searching for evidence written about or by women regarding past lives and experiences has raised challenges about what counts as an archive. Archives provide a form of connection between past and present and are a form of memory storing, memory-recording and memory-making. Records such as letters, diaries, and journals that may have been…

  16. Decrease in gamma-band activity tracks sequence learning

    PubMed Central

    Madhavan, Radhika; Millman, Daniel; Tang, Hanlin; Crone, Nathan E.; Lenz, Fredrick A.; Tierney, Travis S.; Madsen, Joseph R.; Kreiman, Gabriel; Anderson, William S.

    2015-01-01

    Learning novel sequences constitutes an example of declarative memory formation, involving conscious recall of temporal events. Performance in sequence learning tasks improves with repetition and involves forming temporal associations over scales of seconds to minutes. To further understand the neural circuits underlying declarative sequence learning over trials, we tracked changes in intracranial field potentials (IFPs) recorded from 1142 electrodes implanted throughout temporal and frontal cortical areas in 14 human subjects, while they learned the temporal-order of multiple sequences of images over trials through repeated recall. We observed an increase in power in the gamma frequency band (30–100 Hz) in the recall phase, particularly in areas within the temporal lobe including the parahippocampal gyrus. The degree of this gamma power enhancement decreased over trials with improved sequence recall. Modulation of gamma power was directly correlated with the improvement in recall performance. When presenting new sequences, gamma power was reset to high values and decreased again after learning. These observations suggest that signals in the gamma frequency band may play a more prominent role during the early steps of the learning process rather than during the maintenance of memory traces. PMID:25653598

  17. Archive interoperability in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Genova, Françoise

    2003-02-01

    The main goals of Virtual Observatory projects are to build interoperability between astronomical on-line services, observatory archives, databases and results published in journals, and to develop tools permitting the best scientific usage of the very large data sets stored in observatory archives and produced by large surveys. The different Virtual Observatory projects collaborate to define common exchange standards, which are the key to a truly International Virtual Observatory: for instance, their first common milestone has been a standard allowing the exchange of tabular data, called VOTable. The Interoperability Work Area of the European Astrophysical Virtual Observatory project aims at networking European archives, by building a prototype using the CDS VizieR and Aladin tools, and at defining basic rules to help archive providers implement interoperability. The prototype is accessible for scientific usage, to get user feedback (and science results!) at an early stage of the project. The ISO archive participates very actively in this endeavour, and more generally in information networking. The on-going inclusion of the ISO log in SIMBAD will allow higher-level links for users.
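    VOTable, the tabular-data exchange standard mentioned above, is an XML format, so a minimal table can be read with nothing but the standard library. The document below is a simplified sketch (real VOTables carry an XML namespace, typed columns, and much richer metadata):

```python
import xml.etree.ElementTree as ET

# A minimal VOTable-like document showing the FIELD / TABLEDATA layout
# (namespace omitted for brevity; values are illustrative).
doc = """<VOTABLE version="1.4">
  <RESOURCE>
    <TABLE>
      <FIELD name="id" datatype="char" arraysize="*"/>
      <FIELD name="ra" datatype="double"/>
      <DATA><TABLEDATA>
        <TR><TD>HD 1</TD><TD>1.25</TD></TR>
        <TR><TD>HD 2</TD><TD>2.50</TD></TR>
      </TABLEDATA></DATA>
    </TABLE>
  </RESOURCE>
</VOTABLE>"""

def read_votable(text):
    """Return (column names, rows of cell strings) from a bare VOTable."""
    root = ET.fromstring(text)
    names = [f.get("name") for f in root.iter("FIELD")]
    rows = [[td.text for td in tr] for tr in root.iter("TR")]
    return names, rows
```

    In practice one would use a dedicated VO library rather than hand-rolled parsing, but the example shows why a single agreed XML layout makes archives interoperable.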

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doug Blankenship

    Archive of ArcGIS data from the West Flank FORGE site located in Coso, California. The archive contains the following eight shapefiles: a polygon of the 3D geologic model (WestFlank3DGeologicModelExtent); polylines of the traces of the 3D modeled faults (WestFlank3DModeledFaultTraces); polylines of the fault traces from Duffield and Bacon, 1980 (WestFlankFaultsfromDuffieldandBacon); a polygon of the West Flank FORGE site (WestFlankFORGEsite); polylines of the traces of the geologic cross-sections (cross-sections in a separate archive in the GDR) (WestFlankGeologicCrossSections); polylines of the traces of the seismic reflection profiles through and adjacent to the West Flank site (seismic reflection profiles in a separate archive in the GDR) (WestFlankSiesmicReflectionProfiles); points of the well collars in and around the West Flank site (WestFlankWellCollars); and polylines of the surface expression of the West Flank well paths (WestFlankWellPaths).

  19. Interactions of trace metals with hydrogels and filter membranes used in DET and DGT techniques.

    PubMed

    Garmo, Oyvind A; Davison, William; Zhang, Hao

    2008-08-01

    Equilibrium partitioning of trace metals between bulk solution and hydrogels/filters was studied. Under some conditions, trace metal concentrations were higher in the hydrogels or filter membranes than in bulk solution (enrichment). In synthetic soft water, enrichment of cationic trace metals in polyacrylamide hydrogels decreased with increasing trace metal concentration. Enrichment was little affected by Ca and Mg in the concentration range typically encountered in natural freshwaters, indicating high-affinity but low-capacity binding of trace metals to the solid structure in polyacrylamide gels. The apparent binding strength decreased in the sequence Cu > Pb > Ni ≈ Cd ≈ Co, and a low concentration of cationic Cu eliminated enrichment of weakly binding trace metal cations. The polyacrylamide gels also had an affinity for fulvic acid and/or its trace metal complexes. Enrichment of cationic Cd in agarose gel and hydrophilic polyethersulfone filters was independent of concentration (10 nM to 5 μM) but decreased with increasing Ca/Mg concentration and ionic strength, suggesting that it is mainly due to electrostatic interactions. However, Cu and Pb were enriched even after equilibration in seawater, indicating that these metals additionally bind to sites within the agarose gel and filter. Compared to the polyacrylamide gels, agarose gel had a lower affinity for metal-fulvic complexes. Potential biases in measurements made with the diffusive equilibration in thin-films (DET) technique, identified by this work, are discussed.
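    The enrichment discussed throughout is simply the equilibrium ratio of gel-phase to bulk-solution concentration, with values above 1 indicating accumulation in the gel. A tiny sketch with hypothetical concentrations (not values from the paper):

```python
def enrichment(c_gel, c_bulk):
    """Equilibrium enrichment factor: >1 means the metal accumulates
    in the gel relative to bulk solution (same units for both)."""
    return c_gel / c_bulk

# Hypothetical equilibration data in nM, chosen only to illustrate
# a strongly binding metal (Cu) versus a weakly binding one (Cd).
bulk = {"Cu": 10.0, "Cd": 10.0}
gel  = {"Cu": 45.0, "Cd": 12.0}
factors = {m: enrichment(gel[m], bulk[m]) for m in bulk}
```

    Factors near 1 are consistent with purely diffusive equilibration; large factors signal additional binding in the gel, which is the bias the paper warns about for DET measurements.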

  20. Profiles of international archives: Les archives Jean Piaget, University of Geneva, Switzerland.

    PubMed

    Burman, Jeremy Trevelyan

    2013-05-01

    This research report provides a look behind closed doors at the Jean Piaget Archives in Geneva, Switzerland. It situates the potential visitor, contextualizes the Archives in its own history, and then describes what scholars can expect to find. New details about Piaget's views on Equal Rights and Equal Pay are also provided, including a look at how they affected the women who worked in his factory (esp. Bärbel Inhelder). (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  1. The Protein Data Bank: unifying the archive

    PubMed Central

    Westbrook, John; Feng, Zukang; Jain, Shri; Bhat, T. N.; Thanki, Narmada; Ravichandran, Veerasamy; Gilliland, Gary L.; Bluhm, Wolfgang F.; Weissig, Helge; Greer, Douglas S.; Bourne, Philip E.; Berman, Helen M.

    2002-01-01

    The Protein Data Bank (PDB; http://www.pdb.org/) is the single worldwide archive of structural data of biological macromolecules. This paper describes the progress that has been made in validating all data in the PDB archive and in releasing a uniform archive for the community. We have now produced a collection of mmCIF data files for the PDB archive (ftp://beta.rcsb.org/pub/pdb/uniformity/data/mmCIF/). A utility application that converts the mmCIF data files to the PDB format (called CIFTr) has also been released to provide support for existing software. PMID:11752306

  2. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  3. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  4. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  5. 36 CFR 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...

  6. Archives of Transformation: A Case Study of the International Women's Network against Militarism's Archival System

    ERIC Educational Resources Information Center

    Cachola, Ellen-Rae Cabebe

    2014-01-01

    This dissertation describes the International Women's Network Against Militarism's (IWNAM) political epistemology of security from an archival perspective, and how they create community archives to evidence this epistemology. This research examines records created by Women for Genuine Security (WGS) and Women's Voices Women Speak (WVWS), U.S. and…

  7. European distributed seismological data archives infrastructure: EIDA

    NASA Astrophysics Data System (ADS)

    Clinton, John; Hanka, Winfried; Mazza, Salvatore; Pederson, Helle; Sleeman, Reinoud; Stammler, Klaus; Strollo, Angelo

    2014-05-01

    The European Integrated waveform Data Archive (EIDA) is a distributed Data Center system within ORFEUS that (a) securely archives seismic waveform data and related metadata gathered by European research infrastructures, and (b) provides transparent access to the archives for the geosciences research communities. EIDA was founded in 2013 by ORFEUS Data Center, GFZ, RESIF, ETH, INGV and BGR to ensure sustainability of a distributed archive system and the implementation of standards (e.g. FDSN StationXML, FDSN webservices) and to coordinate new developments. Under the mandate of the ORFEUS Board of Directors and Executive Committee, the founding group is responsible for steering and maintaining the technical developments and organization of the European distributed seismic waveform data archive and its integration within broader multidisciplinary frameworks like EPOS. EIDA currently offers uniform data access to unrestricted data from 8 European archives (www.orfeus-eu.org/eida), linked by the Arclink protocol, hosting data from 75 permanent networks (1800+ stations) and 33 temporary networks (1200+ stations). Moreover, each archive may also provide unique, restricted datasets. A web interface, developed at GFZ, offers interactive access to different catalogues (EMSC, GFZ, USGS) and EIDA waveform data. Clients and toolboxes like arclink_fetch and ObsPy can connect directly to any EIDA node to collect data. Current developments are directed toward the implementation of quality parameters and strong motion parameters.
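    The FDSN web services mentioned above are plain HTTP: a dataselect request is a query URL carrying network/station/location/channel codes and a time window, which is what makes uniform access across archives possible. A sketch using the standard FDSN query parameters (the host below is illustrative, not a real EIDA node):

```python
from urllib.parse import urlencode

def fdsn_dataselect_url(base, net, sta, loc, cha, start, end):
    """Build an FDSN dataselect query URL (FDSN web service layout:
    <base>/fdsnws/dataselect/1/query?...)."""
    params = urlencode({
        "net": net, "sta": sta, "loc": loc, "cha": cha,
        "starttime": start, "endtime": end,
    })
    return f"{base}/fdsnws/dataselect/1/query?{params}"

# Example request for one hour of broadband vertical-component data.
url = fdsn_dataselect_url(
    "http://eida.example.org", "GE", "APE", "--", "BHZ",
    "2014-05-01T00:00:00", "2014-05-01T01:00:00")
```

    Because every compliant node exposes the same endpoint and parameters, a client like ObsPy can fetch waveforms from any EIDA archive by changing only the base URL.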

  8. NASA Data Archive Evaluation

    NASA Technical Reports Server (NTRS)

    Holley, Daniel C.; Haight, Kyle G.; Lindstrom, Ted

    1997-01-01

    The purpose of this study was to expose a range of naive individuals to the NASA Data Archive and to obtain feedback from them, with the goal of learning how useful people with varied backgrounds would find the Archive for research and other purposes. We processed 36 subjects in four experimental categories, designated in this report as C+R+, C+R-, C-R+ and C-R-, for computer experienced researchers, computer experienced non-researchers, non-computer experienced researchers, and non-computer experienced non-researchers, respectively. This report includes an assessment of general patterns of subject responses to the various aspects of the NASA Data Archive. Some of the aspects examined were interface-oriented, addressing such issues as whether the subject was able to locate information, figure out how to perform desired information retrieval tasks, etc. Other aspects were content-related. In doing these assessments, answers given to different questions were sometimes combined. This practice reflects the tendency of the subjects to provide answers expressing their experiences across question boundaries. Patterns of response are cross-examined by subject category in order to bring out deeper understandings of why subjects reacted the way they did to the archive. After the general assessment, there will be a more extensive summary of the replies received from the test subjects.

  9. The Alaska Arctic Vegetation Archive (AVA-AK)

    Treesearch

    Donald A. Walker; Amy L. Breen; Lisa A. Druckenmiller; Lisa W. Wirth; Will Fisher; Martha K. Raynolds; Jozef Šibík; Marilyn D. Walker; Stephan Hennekens; Keith Boggs; Tina Boucher; Marcel Buchhorn; Helga Bültmann; David J. Cooper; Fred J.A Daniëls; Scott J. Davidson; James J. Ebersole; Sara C. Elmendorf; Howard E. Epstein; William A. Gould; Robert D. Hollister; Colleen M. Iversen; M. Torre Jorgenson; Anja Kade; Michael T. Lee; William H. MacKenzie; Robert K. Peet; Jana L. Peirce; Udo Schickhoff; Victoria L. Sloan; Stephen S. Talbot; Craig E. Tweedie; Sandra Villarreal; Patrick J. Webber; Donatella Zona

    2016-01-01

    The Alaska Arctic Vegetation Archive (AVA-AK, GIVD-ID: NA-US-014) is a free, publicly available database archive of vegetation-plot data from the Arctic tundra region of northern Alaska. The archive currently contains 24 datasets with 3,026 non-overlapping plots. Of these, 74% have geolocation data with 25-m or better precision. Species cover data and header data are...

  10. A search for T Tauri stars and related objects: Archival photometry of candidate variables in V733 Cep field

    NASA Astrophysics Data System (ADS)

    Jurdana-Šepić, R.; Poljančić Beljan, I.

    Searching for T Tauri stars and related early-type variables, we carried out BVRI photometric measurements of five candidates with positions within the field of the pre-main sequence object V733 Cephei (Persson's star), located in the dark cloud L1216 near the Cepheus OB3 association: VES 946, VES 950, NSV 14333, NSV 25966 and V385 Cep. Their magnitudes were determined on plates from the Asiago Observatory historical photographic archive, exposed in 1971 - 1978. We provide finding charts for the program stars and comparison-sequence stars, magnitude estimates, mean magnitudes, and BVR_cI_c light curves of the program stars.

  11. Permafrost as palaeo-environmental archive - potentials and limitations

    NASA Astrophysics Data System (ADS)

    Schirrmeister, L.; Wetterich, S.; Meyer, H.; Grosse, G.; Schwamborn, G.; Siegert, C.

    2009-04-01

    temperatures, mean winter temperatures, mean July temperatures, precipitation, humidity, soil climate and chemistry, hydrology and hydrochemistry of waters). The general potential of permafrost archives includes spatial (circumarctic, high arctic to boreal zones) and temporal (Mid Pleistocene to modern) environmental gradients. Lateral cross sections contain information about permafrost degradation during interglacial periods, the aggradation of ice-rich sequences during stadial and interstadial periods, and extreme changes in periglacial hydrology during the late Quaternary. The spatial reconstruction of ancient landscapes is possible through detailed study of kilometer-long coastal exposures. Temporally, relatively high-resolution (about 50 years) isotope data from ice wedges reflect the Late Pleistocene to Holocene climate transition. Using transfer functions for pollen, plant macro-remains or chironomids, numerical estimation of palaeo-climate data (temperature and precipitation) is possible. The limitations of permafrost archives include the frequent lack of continuous sequences due to thermokarst or thermo-erosion events. Local stratigraphies are sometimes difficult to correlate on a regional scale because of permafrost degradation and neotectonic influence on the accumulative/erosive environment in some regions. There are still uncertainties in comparing different geochronological methods, some of them related to unknown influences of permafrost processes on chemical and physical parameters important to the age-determination technique. Due to strong cryoturbation patterns and sometimes challenging sampling situations on near-vertical frozen exposures, the geochronological resolution in permafrost sequences is usually lower than in lacustrine sequences or glacial ice cores. Finally, as for any other archive, we need to consider the effect of local versus regional signals in the palaeo-ecological interpretation of fossil records.

  12. Archive of digital and digitized analog boomer seismic reflection data collected during USGS cruise 96CCT02 in Copano, Corpus Christi, and Nueces Bays and Corpus Christi Bayou, Texas, July 1996

    USGS Publications Warehouse

    Harrison, Arnell S.; Dadisman, Shawn V.; Kindinger, Jack G.; Morton, Robert A.; Blum, Mike D.; Wiese, Dana S.; Subiño, Janice A.

    2007-01-01

    In June of 1996, the U.S. Geological Survey conducted geophysical surveys from Nueces to Copano Bays, Texas. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, GIS information, cruise log, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles and high resolution scanned TIFF images of the original paper printouts are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
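
    The record notes that the archived trace data are in standard SEG-Y format. As a rough illustration of what that entails (a sketch based on the published SEG-Y layout, not part of the USGS archive or its software), a few fixed-offset fields of the 400-byte binary file header can be parsed with the Python standard library alone:

```python
import struct

def read_segy_file_header(buf: bytes) -> dict:
    """Parse a few fields from a SEG-Y binary file header.

    SEG-Y files begin with a 3200-byte EBCDIC textual header followed by
    a 400-byte big-endian binary header; the fields below sit at fixed
    byte positions within that binary header.
    """
    if len(buf) < 3600:
        raise ValueError("buffer too small to hold SEG-Y file headers")
    binhdr = buf[3200:3600]
    sample_interval_us, = struct.unpack(">H", binhdr[16:18])  # file bytes 3217-3218
    samples_per_trace, = struct.unpack(">H", binhdr[20:22])   # file bytes 3221-3222
    format_code, = struct.unpack(">H", binhdr[24:26])         # file bytes 3225-3226
    return {"sample_interval_us": sample_interval_us,
            "samples_per_trace": samples_per_trace,
            "format_code": format_code}

# Synthetic header for illustration: 1 ms sampling, 1500 samples per trace,
# format code 1 (4-byte IBM float).
hdr = bytearray(3600)
hdr[3216:3218] = struct.pack(">H", 1000)
hdr[3220:3222] = struct.pack(">H", 1500)
hdr[3224:3226] = struct.pack(">H", 1)
print(read_segy_file_header(bytes(hdr)))
```

    Tools such as Seismic Unix, mentioned in the record, read these same headers; the sketch only shows why the format is convenient for long-term archiving: every field lives at a documented, fixed offset.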

  13. Placental fetal stem segmentation in a sequence of histology images

    NASA Astrophysics Data System (ADS)

    Athavale, Prashant; Vese, Luminita A.

    2012-02-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many problems in successfully achieving this goal. Among them are the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies in manual tracing, and the complicated texture patterns of the various tissue types, which lack clear distinguishing characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges, where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation. The segmentation thus obtained can then be used as an
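
    The pipeline above begins with a global registration based on displacement. One common way to estimate a pure translation between two images (a minimal sketch only; the authors' method also handles scaling and rotation) is FFT-based phase correlation:

```python
import numpy as np

def estimate_translation(fixed: np.ndarray, moving: np.ndarray):
    """Estimate the integer pixel shift between two images by phase
    correlation: the normalized cross-power spectrum of their FFTs
    inverse-transforms to a delta peak at the relative displacement."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image into negative offsets.
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)

# A toy image and a copy circularly shifted by (5, -3) recover that shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(estimate_translation(shifted, img))  # → (5, -3)
```

    In a real slide-alignment setting the displacement estimate would seed the full affine search (adding scaling and rotation) before the local non-affine refinement described in the abstract.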

  14. Software for Managing an Archive of Images

    NASA Technical Reports Server (NTRS)

    Hallai, Charles; Jones, Helene; Callac, Chris

    2003-01-01

    This is a revised draft by the innovators concerning the report on Software for Managing an Archive of Images. The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now, the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken both digitally and with a traditional film-based camera, along with metadata about each image.

  15. Life Sciences Data Archive (LSDA)

    NASA Technical Reports Server (NTRS)

    Fitts, M.; Johnson-Throop, Kathy; Thomas, D.; Shackelford, K.

    2008-01-01

    In the early days of spaceflight, space life sciences data were collected and stored in numerous databases, formats, media types and geographical locations. While serving the needs of individual research teams, these data were largely unknown or unavailable to the scientific community at large. As a result, the Space Act of 1958 and the Science Data Management Policy mandated that research data collected by the National Aeronautics and Space Administration be made available to the science community at large. The Biomedical Informatics and Health Care Systems Branch of the Space Life Sciences Directorate at JSC and the Data Archive Project at ARC, with funding from the Human Research Program through the Exploration Medical Capability Element, are fulfilling these requirements through the systematic population of the Life Sciences Data Archive. This program constitutes a formal system for the acquisition, archival and distribution of data for Life Sciences-sponsored experiments and investigations. The general goal of the archive is to acquire, preserve, and distribute these data using a variety of media which are accessible and responsive to inquiries from the science communities.

  16. Managing an Archive of Images

    NASA Technical Reports Server (NTRS)

    Andres, Vince; Walter, David; Hallal, Charles; Jones, Helene; Callac, Chris

    2004-01-01

    The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now, the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken both digitally and with a traditional, film-based camera, along with metadata about each image. After a "shoot," a photographer downloads the images into the database. Members of the PAO can use a Web-based application to search, view and retrieve images, approve images for publication, and view and edit metadata associated with the images. Approved images are archived and cross-referenced with appropriate descriptions and information. Security is provided by allowing administrators to explicitly grant personnel access only to the components of the system that they need (e.g., only photographers may upload images, and only designated PAO employees may approve images).

  17. Traces of ternary relations

    NASA Astrophysics Data System (ADS)

    Zedam, Lemnaouar; Barkat, Omar; De Baets, Bernard

    2018-05-01

    In this paper, we generalize the notion of traces of a binary relation to the setting of ternary relations. With a given ternary relation, we associate three binary relations: its left, middle and right trace. As in the binary case, these traces facilitate the study and characterization of properties of a ternary relation. Interestingly, the traces themselves turn out to be the greatest solutions of relational inequalities associated with newly introduced compositions of a ternary relation with a binary relation (and vice versa).

  18. Databases and archiving for cryoEM

    PubMed Central

    Patwardhan, Ardan; Lawson, Catherine L.

    2017-01-01

    Cryo-EM in structural biology is currently served by three public archives – EMDB for 3DEM reconstructions, PDB for models built from 3DEM reconstructions and EMPIAR for the raw 2D image data used to obtain the 3DEM reconstructions. These archives play a vital role for both the structural community and the wider biological community in making the data accessible so that results may be reused, reassessed and integrated with other structural and bioinformatics resources. The important role of the archives is underpinned by the fact that many journals mandate the deposition of data to PDB and EMDB on publication. The field is currently undergoing transformative changes where on the one hand high-resolution structures are becoming a routine occurrence while on the other hand electron tomography is enabling the study of macromolecules in the cellular context. Concomitantly the archives are evolving to best serve their stakeholder communities. In this chapter we describe the current state of the archives, resources available for depositing, accessing, searching, visualising and validating data, on-going community-wide initiatives and opportunities and challenges for the future. PMID:27572735

  19. Traces of Drosophila Memory

    PubMed Central

    Davis, Ronald L.

    2012-01-01

    Summary Studies using functional cellular imaging of living flies have identified six memory traces that form in the olfactory nervous system after conditioning with odors. These traces occur in distinct nodes of the olfactory nervous system, form and disappear across different windows of time, and are detected in the imaged neurons as increased calcium influx or synaptic release in response to the conditioned odor. Three traces form at or near acquisition and co-exist with short-term behavioral memory. One trace forms with a delay after learning and co-exists with intermediate-term behavioral memory. Two traces form many hours after acquisition and co-exist with long-term behavioral memory. The transient memory traces may support behavior across the time windows of their existence. The experimental approaches for dissecting memory formation in the fly, ranging from the molecular to the systems level, make it an ideal organism for dissecting the logic by which the nervous system organizes and stores different temporal forms of memory. PMID:21482352

  20. Archival Services and Technologies for Scientific Data

    NASA Astrophysics Data System (ADS)

    Meyer, Jörg; Hardt, Marcus; Streit, Achim; van Wezel, Jos

    2014-06-01

    After analysis and publication, there is no need to keep experimental data online on spinning disks. For reliability and cost reasons, inactive data are moved to tape and put into a data archive. The data archive must provide reliable access for at least ten years, following a recommendation of the German Science Foundation (DFG), but many scientific communities wish to keep data available much longer. Data archival is, on the one hand, purely a bit-preservation activity intended to ensure that the bits read are the same as those written years before. On the other hand, enough information must be archived to be able to use and interpret the content of the data. The latter depends on many, often community-specific, factors and remains an area of much debate among archival specialists. The paper describes the current practice of archival and bit preservation in use for different science communities at KIT, for which a combination of organizational services and technical tools is required. The special monitoring to detect tape-related errors, the software infrastructure in use, and the service certification are discussed. Plans and developments at KIT, also in the context of the Large Scale Data Management and Analysis (LSDMA) project, are presented. The technical advantages of the T10 SCSI Stream Commands (SSC-4) and the Linear Tape File System (LTFS) will have a profound impact on future long-term archival of large data sets.

  1. A Contextualized, Differential Sequence Mining Method to Derive Students' Learning Behavior Patterns

    ERIC Educational Resources Information Center

    Kinnebrew, John S.; Loretz, Kirk M.; Biswas, Gautam

    2013-01-01

    Computer-based learning environments can produce a wealth of data on student learning interactions. This paper presents an exploratory data mining methodology for assessing and comparing students' learning behaviors from these interaction traces. The core algorithm employs a novel combination of sequence mining techniques to identify differentially…

  2. Real-time data archiving for GTA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, R.A.; Atkins, W.H.

    1992-09-01

    The architecture of the GTA control system, the nature of a typical GTA commissioning activity, and the varied interests of those analyzing the data make it challenging to develop a general-purpose scheme for archiving data and making the data available to those who will use it. Addressing the needs of those who develop and trouble-shoot hardware and software increases the challenge. This paper describes the aspects of GTA that affect archiving operations and discusses how the features of the EPICS archiving module meet a variety of needs for storing and accessing data.

  3. Real-time data archiving for GTA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, R.A.; Atkins, W.H.

    1992-01-01

    The architecture of the GTA control system, the nature of a typical GTA commissioning activity, and the varied interests of those analyzing the data make it challenging to develop a general-purpose scheme for archiving data and making the data available to those who will use it. Addressing the needs of those who develop and trouble-shoot hardware and software increases the challenge. This paper describes the aspects of GTA that affect archiving operations and discusses how the features of the EPICS archiving module meet a variety of needs for storing and accessing data.

  4. The Role of Data Archives in Synoptic Solar Physics

    NASA Astrophysics Data System (ADS)

    Reardon, Kevin

    The detailed study of solar cycle variations requires analysis of recorded datasets spanning many years of observations, that is, a data archive. The use of digital data, combined with powerful database server software, gives such archives new capabilities to provide, quickly and flexibly, selected pieces of information to scientists. Use of standardized protocols will allow multiple databases, independently maintained, to be seamlessly joined, allowing complex searches spanning multiple archives. These data archives also benefit from being developed in parallel with the telescope itself, which helps to assure data integrity and to provide close integration between the telescope and archive. Development of archives that can guarantee long-term data availability and strong compatibility with other projects makes solar-cycle studies easier to plan and realize.

  5. The evolutionary time machine: forecasting how populations can adapt to changing environments using dormant propagules

    PubMed Central

    Orsini, Luisa; Schwenk, Klaus; De Meester, Luc; Colbourne, John K.; Pfrender, Michael E.; Weider, Lawrence J.

    2013-01-01

    Evolutionary changes are determined by a complex assortment of ecological, demographic and adaptive histories. Predicting how evolution will shape the genetic structures of populations coping with current (and future) environmental challenges has principally relied on investigations through space, in lieu of time, because long-term phenotypic and molecular data are scarce. Yet, dormant propagules in sediments, soils and permafrost are convenient natural archives of population-histories from which to trace adaptive trajectories along extended time periods. DNA sequence data obtained from these natural archives, combined with pioneering methods for analyzing both ecological and population genomic time-series data, are likely to provide predictive models to forecast evolutionary responses of natural populations to environmental changes resulting from natural and anthropogenic stressors, including climate change. PMID:23395434

  6. Sequencing degraded DNA from non-destructively sampled museum specimens for RAD-tagging and low-coverage shotgun phylogenetics.

    PubMed

    Tin, Mandy Man-Ying; Economo, Evan Philip; Mikheyev, Alexander Sergeyevich

    2014-01-01

    Ancient and archival DNA samples are valuable resources for the study of diverse historical processes. In particular, museum specimens provide access to biotas distant in time and space, and can provide insights into ecological and evolutionary changes over time. However, archival specimens are difficult to handle; they are often fragile and irreplaceable, and typically contain only short segments of denatured DNA. Here we present a set of tools for processing such samples for state-of-the-art genetic analysis. First, we report a protocol for minimally destructive DNA extraction of insect museum specimens, which produced sequenceable DNA from all of the samples assayed. The 11 specimens analyzed had fragmented DNA, rarely exceeding 100 bp in length, and could not be amplified by conventional PCR targeting the mitochondrial cytochrome oxidase I gene. Our approach made these samples amenable to analysis with commonly used next-generation sequencing-based molecular analytic tools, including RAD-tagging and shotgun genome re-sequencing. First, we used museum ant specimens from three species, each with its own reference genome, for RAD-tag mapping. We were able to use the degraded DNA sequences, which were sequenced in full, to identify duplicate reads and filter them prior to base calling. Second, we re-sequenced six Hawaiian Drosophila species, with millions of years of divergence, but with only a single available reference genome. Despite a shallow coverage of 0.37 ± 0.42 per base, we could recover a sufficient number of overlapping SNPs to fully resolve the species tree, which was consistent with earlier karyotypic studies, and previous molecular studies, at least in the regions of the tree that these studies could resolve. Although developed for use with degraded DNA, all of these techniques are readily applicable to more recent tissue, and are suitable for liquid handling automation.

  7. Opening the Landsat Archive

    USGS Publications Warehouse

    ,

    2008-01-01

    The USGS Landsat archive holds an unequaled 36-year record of the Earth's surface that is invaluable to climate change studies, forest and resource management activities, and emergency response operations. An aggressive effort is taking place to provide all Landsat imagery [scenes currently held in the USGS Earth Resources Observation and Science (EROS) Center archive, as well as newly acquired scenes daily] free of charge to users with electronic access via the Web by the end of December 2008. The entire Landsat 7 Enhanced Thematic Mapper Plus (ETM+) archive acquired since 1999 and any newly acquired Landsat 7 ETM+ images that have less than 40 percent cloud cover are currently available for download. When this endeavor is complete all Landsat 1-5 data will also be available for download. This includes Landsat 1-5 Multispectral Scanner (MSS) scenes, as well as Landsat 4 and 5 Thematic Mapper (TM) scenes.

  8. Technologically Enhanced Archival Collections: Using the Buddy System

    ERIC Educational Resources Information Center

    Holz, Dayna

    2006-01-01

    Based in the context of challenges faced by archives when managing digital projects, this article explores options of looking outside the existing expertise of archives staff to find collaborative partners. In teaming up with other departments and organizations, the potential scope of traditional archival digitization projects is expanded beyond…

  9. Cassini/Huygens Program Archive Plan for Science Data

    NASA Technical Reports Server (NTRS)

    Conners, D.

    2000-01-01

    The purpose of this document is to describe the Cassini/Huygens science data archive system which includes policy, roles and responsibilities, description of science and supplementary data products or data sets, metadata, documentation, software, and archive schedule and methods for archive transfer to the NASA Planetary Data System (PDS).

  10. A comprehensive cost model for NASA data archiving

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Klenk, K. F.; Treinish, L. A.

    1990-01-01

    A simple archive cost model has been developed to help predict NASA's archiving costs. The model covers data management activities from the beginning of the mission through launch, acquisition, and support of retrospective users by the long-term archive; it is capable of determining the life cycle costs for archived data depending on how the data need to be managed to meet user requirements. The model, which currently contains 48 equations with a menu-driven user interface, is available for use on an IBM PC or AT.
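
    The 48 equations of the model are not reproduced in this record; the toy function below only illustrates the general shape of such a life-cycle cost calculation, with entirely hypothetical rates and cost categories:

```python
def archive_life_cycle_cost(volume_tb: float,
                            years: int,
                            ingest_cost_per_tb: float = 50.0,
                            storage_cost_per_tb_year: float = 20.0,
                            annual_user_support_cost: float = 10_000.0) -> float:
    """Toy life-cycle cost: a one-time ingest charge plus yearly storage
    and retrospective-user support. All rates are made-up placeholders,
    not coefficients from the NASA model."""
    ingest = volume_tb * ingest_cost_per_tb
    storage = volume_tb * storage_cost_per_tb_year * years
    support = annual_user_support_cost * years
    return ingest + storage + support

# 100 TB archived for 10 years under the placeholder rates:
print(archive_life_cycle_cost(100, 10))  # → 125000.0
```

    The real model's value lay in tying such terms to mission phases (pre-launch, acquisition, long-term support) and to user requirements; the sketch only shows how a menu-driven tool can combine per-volume and per-year cost components.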

  11. Tracing the microbial biosphere into the Messinian Salinity Crisis

    NASA Astrophysics Data System (ADS)

    Natalicchio, Marcello; Dela Pierre, Francesco; Birgel, Daniel; Lozar, Francesca; Peckmann, Jörn

    2016-04-01

    The Messinian salinity crisis (MSC), one of the largest environmental crises in Earth history, occurred in the Mediterranean Basin about 6 Ma ago. The isolation of the Mediterranean from the Atlantic Ocean caused the transformation of the Mediterranean Sea into a giant salina. The establishment of harsh conditions (hypersalinity and anoxia) in the water mass had a strong impact on the aquatic biosphere, resulting in the apparent disappearance of many marine biota. This aspect is, however, controversial, mostly because of finds of fossils of biota that actually survived the onset of the MSC. To trace the response of life to this catastrophic event, we studied the microbial biosphere (both body fossils and molecular fossils) archived in the sediments straddling the MSC onset (shales, carbonates and sulphates) from marginal subbasins (Piedmont Basin, northern Italy, and Nijar Basin, southern Spain). Despite the significant reduction of calcareous plankton, the progressive rise of other microorganisms (prokaryotes and eukaryotes) is documented in the studied sediments at the MSC onset. These microorganisms include remains of euryhaline and stenohaline diatoms and filamentous microfossils interpreted as vacuolated sulphide-oxidizing bacteria. This fossil assemblage, which typifies both marginal (gypsum) and more distal (carbonate and shale) deposits, indicates conditions of high primary productivity in the surface waters, favoured by increased nutrient influx in the course of high riverine runoff. Molecular fossils allow tracing of the microbial biosphere into the geological past. The rise of algal compounds (e.g. dinosterol) in the basal MSC deposits (gypsum, carbonates and shales), accompanied by the simultaneous increase of terrigenous organic material (n-alkanes), agrees with the eutrophication of the basin. In addition, the MSC deposits show an abrupt and significant increase of archaeal biomarkers, including the archaeal membrane lipids archaeol and extended

  12. Investigating the Microscopic Location of Trace Elements in High-Alpine Glacier Ice

    NASA Astrophysics Data System (ADS)

    Avak, Sven Erik; Birrer, Mario; Laurent, Oscar; Guillong, Marcel; Wälle, Markus; Jenk, Theo Manuel; Bartels-Rausch, Thorsten; Schwikowski, Margit; Eichler, Anja

    2017-04-01

    Past changes in atmospheric pollution can be reconstructed from high-alpine ice core trace element records (Schwikowski et al., 2004). Percolation of meltwater alters the information originally stored in these environmental archives. Eichler et al. (2001) suggested that the preservation of major ions with respect to meltwater percolation depends on their location in the ice crystal lattice, i.e. grain boundaries versus grain interiors. Other studies have also focused on the effect of meltwater on organic pollutant concentrations as well as on stable isotope profiles in ice cores, whereas no such information exists for trace elements. Here, we investigate for the first time how the microscopic location of anthropogenic, dust-related and volcanic trace elements affects their behavior during meltwater percolation, using two different approaches. In the first approach, we assess the microscopic location of trace elements indirectly by analyzing trace element concentrations, using discrete inductively coupled plasma mass spectrometry (ICP-MS), in a high-alpine ice core that has been shown to be affected by an inflow of meltwater. Impurities located at grain boundaries are prone to be removed by meltwater and tend to be depleted in the affected section of the record, whereas those incorporated into the grain interiors are preserved and not disturbed in the record. In the second approach, we work towards a direct quantification of differences in trace element concentrations between ice grain boundaries and grain interiors in samples from both unaffected and affected sections of this ice core. To this end, we use cryocell laser ablation (LA) ICP-MS, the method of choice for direct in situ chemical analysis of trace elements at sub-millimeter resolution in glacier ice (Reinhardt et al., 2001; Della Lunga et al., 2014; Sneed et al., 2015). We will present first results of both approaches with regard to the evaluation of the potential of trace elements as environmental

  13. Data archiving for animal cognition research: report of an NIMH workshop.

    PubMed

    Kurtzman, Howard S; Church, Russell M; Crystal, Jonathon D

    2002-11-01

    In July 2001, the National Institute of Mental Health sponsored a workshop titled "Data Archiving for Animal Cognition Research." Participants included scientists as well as experts in archiving, publishing, policy, and law. As is described in this report, the workshop resulted in a set of conclusions and recommendations concerning (A) the impact of data archiving on research, (B) how to incorporate data archiving into research practice, (C) contents of data archives, (D) technical and archival standards, and (E) organizational, financing, and policy issues. The animal cognition research community is encouraged to begin now to establish archives, deposit data and related materials, and make use of archived materials in new scientific projects.

  14. A new archival infrastructure for highly-structured astronomical data

    NASA Astrophysics Data System (ADS)

    Dovgan, Erik; Knapic, Cristina; Sponza, Massimo; Smareglia, Riccardo

    2018-03-01

    With the advent of the 2020 era of radio astronomy telescopes, the amount and format of radio-astronomical data are becoming a massive, performance-critical challenge. This evolution of data models and data formats requires new data archiving techniques that allow massive and fast storage of data that can at the same time be processed efficiently. Useful expertise in efficient archiving has been obtained through data archiving for the Medicina and Noto Radio Telescopes. The presented archival infrastructure, named the Radio Archive, stores and handles various formats, such as FITS, MBFITS, and VLBI's XML, including description and ancillary files. The modeling and architecture of the archive fulfill the requirements of both data persistence and easy data discovery and exploitation. The archive already complies with the Virtual Observatory directives, so future service implementations will also be VO compliant. This article presents the Radio Archive services and tools, from data acquisition to end-user data utilization.

  15. Complete amino acid sequence of bovine colostrum low-Mr cysteine proteinase inhibitor.

    PubMed

    Hirado, M; Tsunasawa, S; Sakiyama, F; Niinobe, M; Fujii, S

    1985-07-01

    The complete amino acid sequence of bovine colostrum cysteine proteinase inhibitor was determined by sequencing native inhibitor and peptides obtained by cyanogen bromide degradation, Achromobacter lysylendopeptidase digestion and partial acid hydrolysis of reduced and S-carboxymethylated protein. Achromobacter peptidase digestion was successfully used to isolate two disulfide-containing peptides. The inhibitor consists of 112 amino acids with an Mr of 12787. Two disulfide bonds were established between Cys 66 and Cys 77 and between Cys 90 and Cys 110. A high degree of homology in the sequence was found between the colostrum inhibitor and human gamma-trace, human salivary acidic protein and chicken egg-white cystatin.

  16. IJS procedure for RELAP5 to TRACE input model conversion using SNAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prosek, A.; Berar, O. A.

    2012-07-01

    The TRAC/RELAP Advanced Computational Engine (TRACE), an advanced, best-estimate reactor systems code developed by the U.S. Nuclear Regulatory Commission, comes with a graphical user interface called the Symbolic Nuclear Analysis Package (SNAP). Much effort has been devoted in the past to developing RELAP5 input decks. The purpose of this study is to demonstrate the Institut 'Josef Stefan' (IJS) procedure for converting the RELAP5 input model of the BETHSY facility to TRACE. The IJS conversion procedure consists of eleven steps and is based on the use of SNAP. For calculations of the selected BETHSY 6.2TC test, RELAP5/MOD3.3 Patch 4 and TRACE V5.0 Patch 1 were used. The selected BETHSY 6.2TC test was a 15.24 cm equivalent diameter horizontal cold leg break in the reference pressurized water reactor without high pressure and low pressure safety injection. The application of the IJS procedure to the conversion of the BETHSY input model showed that it is important to perform the steps in the proper sequence. The overall calculated results obtained with TRACE using the converted RELAP5 model were close to the experimental data and comparable to the RELAP5/MOD3.3 calculations. It can therefore be concluded that the proposed IJS conversion procedure was successfully demonstrated on the BETHSY integral test facility input model. (authors)

  17. Finding "Science" in the Archives of the Spanish Monarchy.

    PubMed

    Portuondo, Maria M

    2016-03-01

    This essay explores the history of several archives that house the early modern records of Spanish imperial science. The modern "archival turn" urges us to think critically about archives and to recognize in the history of these collections an embedded, often implicit, history that--unless properly recognized, acknowledged, and understood--can distort the histories we are trying to tell. This essay uses a curious episode in the history of science to illustrate how Spanish archives relate to each other and shape the collections they house. During the late eighteenth century a young navy officer, Martín Fernández de Navarrete, was dispatched to all the principal archives of the Spanish monarchy with a peculiar mission: he was to search for evidence that the Spanish in fact had a scientific tradition. This essay uses his mission to explain how the original purpose of an archive--the archive's telos--may persist as a strong and potentially deterministic force in the work of historians of science. In the case of the archives discussed, this telos was shaped by issues as wide ranging as defending a nation's reputation against claims of colonial neglect and as idiosyncratic as an archivist's selection criteria.

  18. An enhanced archive facilitating climate impacts analysis

    USGS Publications Warehouse

    Maurer, E.P.; Brekke, L.; Pruitt, T.; Thrasher, B.; Long, J.; Duffy, P.; Dettinger, M.; Cayan, D.; Arnold, J.

    2014-01-01

    We describe the expansion of a publicly available archive of downscaled climate and hydrology projections for the United States. Those studying or planning to adapt to future climate impacts demand downscaled climate model output for local or regional use. The archive we describe attempts to fulfill this need by providing data in several formats, selectable to meet user needs. Our archive has served as a resource for climate impacts modelers, water managers, educators, and others. Over 1,400 individuals have transferred more than 50 TB of data from the archive. In response to user demands, the archive has expanded from monthly downscaled data to include daily data to facilitate investigations of phenomena sensitive to daily to monthly temperature and precipitation, including extremes in these quantities. New developments include downscaled output from the new Coupled Model Intercomparison Project phase 5 (CMIP5) climate model simulations at both the monthly and daily time scales, as well as simulations of surface hydrological variables. The web interface allows the extraction of individual projections or ensemble statistics for user-defined regions, promoting the rapid assessment of model consensus and uncertainty for future projections of precipitation, temperature, and hydrology. The archive is accessible online (http://gdo-dcp.ucllnl.org/downscaled_cmip_projections).

  19. Digital Archival Image Collections: Who Are the Users?

    ERIC Educational Resources Information Center

    Herold, Irene M. H.

    2010-01-01

    Archival digital image collections are a relatively new phenomenon in college library archives. Digitizing archival image collections may make them accessible to users worldwide. There has been no study to explore whether collections on the Internet lead to users who are beyond the institution or a comparison of users to a national or…

  20. Trace fossils in diatomaceous strata of Miocene Monterey Formation: their character and implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savdra, C.E.; Bottjer, D.J.

    Younger parts of the Miocene Monterey Formation are commonly characterized by relatively unaltered diatomaceous strata. A common characteristic of these deposits is the preservation of varvelike lamination, indicative of deposition under anoxic or nearly anoxic conditions. Although laminated rock types are volumetrically dominant, bioturbated intervals are by no means rare, but little attention has been paid to the trace fossils themselves. Our study of trace fossils in the Monterey Formation demonstrated the significance of these biogenic structures in paleoenvironmental and paleoecologic analyses. In particular, trace fossils provide a means for detailed reconstruction of paleo-oxygenation conditions during Monterey deposition. Trace fossils in several Monterey sections exposed in central and southern California were examined in detail. At all localities, three major ichnofossil assemblages or ichnofacies were recognized: (1) Chondrites, (2) Planolites, and (3) Thalassinoides. The size and diversity of the three major ichnofacies and their lithologic associations suggest that the distribution of these facies is controlled primarily by the level of paleo-bottom water oxygenation. The Chondrites ichnofacies represents very low paleo-oxygen levels just above the anoxic threshold. The Planolites ichnofacies, with a greater variety of larger burrow types, is indicative of slightly higher levels of oxygenation. Moderately to well-oxygenated conditions are suggested by the Thalassinoides ichnofacies. More detailed information on paleoenvironmental conditions can be gleaned by applying a refined trace-fossil tiering model. When used in detailed (centimeter-scale) vertical sequence analyses, this tiering model permits the translation of data on the composition, size parameters, and cross-cutting relationships of trace-fossil assemblages into relative paleo-oxygenation curves.

  1. Speeches Archive

    Science.gov Websites


  2. BAO plate archive digitization

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Nikoghosyan, E. H.; Gigoyan, K. S.; Paronyan, G. M.; Abrahamyan, H. V.; Andreasyan, H. R.; Azatyan, N. M.; Kostandyan, G. R.; Khachatryan, K. G.; Vardanyan, A. V.; Gyulzadyan, M. V.; Mikayelyan, G. A.; Farmanyan, S. V.; Knyazyan, A. V.

    Astronomical plate archives, created on the basis of numerous observations at many observatories, are an important part of the astronomical heritage. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,000 photographic plates and films obtained with the 2.6m telescope, the 1m and 0.5m Schmidt telescopes, and other smaller ones during 1947-1991. In 2015, we started a project on the digitization of the whole BAO Plate Archive, the creation of an electronic database, and its scientific use. A Science Program Board has been created to evaluate the observing material, to investigate new possibilities, and to propose new projects based on the combined use of these observations together with other world databases. The Executing Team consists of 11 astronomers and 2 computer scientists and will use two EPSON Perfection V750 Pro scanners for the digitization. The project will run for 3 years, during 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects.

  3. Current status of the international Halley Watch infrared net archive

    NASA Technical Reports Server (NTRS)

    Mcguinness, Brian B.

    1988-01-01

    The primary purposes of the Halley Watch have been to promote Halley observations, to coordinate and standardize the observing where useful, and to archive the results in a database readily accessible to cometary scientists. The intention of the IHW is to store the observations themselves, along with any information necessary to allow users to understand and use the data, but to exclude interpretations of these data. Each of the archives produced by the IHW will appear in two versions: a printed archive and a digital archive on CD-ROMs. The archive is expected to have a very long lifetime. The IHW has already produced an archive for P/Crommelin, consisting of one printed volume and two 1600 bpi tapes. The Halley archive will contain at least twenty gigabytes of information.

  4. Archiving for Rosetta: Lessons for IPDA

    NASA Astrophysics Data System (ADS)

    Heather, David

    The Rosetta Project is unusual, possibly unique, in that all data must be archived both in NASA's Planetary Data System (PDS) and in ESA's Planetary Science Archive (PSA), according to an inter-agency agreement that predates the existence of ESA's PSA. This requires that all data are formatted according to NASA's PDS3 Standards. Scientific peer reviews of the data content for Rosetta have been carried out both in the US and in Europe, and there was a very large overlap in the issues raised, illustrating the general scientific agreement, independent of geography, on what an archive must contain to be useful to the broader community of planetary scientists. However, validation of the data against the PDS Standards using both PSA- and PDS-developed software has led to the discovery that many of the items that are validated are unstated assumptions in the written PDS Standards and are related, at least in large part, to how the two archiving systems operate rather than to the actual content that a scientist needs to use the data. The talk will illustrate some of these discrepancies with examples and suggest how to avoid such issues in the future, optimizing the scientific return on the investment in archiving while minimizing the costs.

  5. Examining Activism in Practice: A Qualitative Study of Archival Activism

    ERIC Educational Resources Information Center

    Novak, Joy Rainbow

    2013-01-01

    While archival literature has increasingly discussed activism in the context of archives, there has been little examination of the extent to which archivists in the field have accepted or incorporated archival activism into practice. Scholarship that has explored the practical application of archival activism has predominately focused on case…

  6. The use of coded PCR primers enables high-throughput sequencing of multiple homolog amplification products by 454 parallel sequencing.

    PubMed

    Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P; Panitz, Frank; Bendixen, Christian; Nielsen, Rasmus; Willerslev, Eske

    2007-02-14

    The invention of the Genome Sequence 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot subsequently be assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing on the high-throughput Genome Sequence 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5' tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (mis-assignment rate < 0.4%). The method therefore enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5' primer tagging is a useful method by which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products. The new approach will be of value to a broad range of research areas, such as comparative genomics, complete mitochondrial analyses
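
    The tag-based source assignment described above can be sketched in a few lines: each read is grouped by its 5' tag and the tag is stripped before downstream analysis. The tag sequences and reads below are invented for illustration; real GS20 pipelines also handle sequencing errors and primer trimming.

    ```python
    # Minimal sketch of 5'-tag demultiplexing. TAGS and the reads are
    # hypothetical; they do not reproduce the primers used in the study.
    TAGS = {
        "CC": "specimen_A",   # hypothetical dinucleotide tags
        "TG": "specimen_B",
        "GA": "specimen_C",
    }
    TAG_LEN = 2

    def demultiplex(reads):
        """Assign each read to its source by its 5' tag; unmatched reads map to None."""
        assigned = {}
        for read in reads:
            source = TAGS.get(read[:TAG_LEN])
            assigned.setdefault(source, []).append(read[TAG_LEN:])
        return assigned

    reads = ["CCATGGCA", "TGATGGCA", "GAATGGCT", "AAATGGCA"]
    result = demultiplex(reads)
    ```

    The same grouping step would precede any per-specimen analysis; counting reads per source also exposes the tag-dependent bias the abstract reports.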

  7. Airborne Measurements of Formaldehyde Employing a Tunable Diode Laser Absorption Spectrometer During TRACE-P

    NASA Technical Reports Server (NTRS)

    Fried, Alan; Drummond, James

    2003-01-01

    This final report summarizes the progress achieved over the entire 3-year proposal period, including two extensions spanning 1 year. These activities include: 1) preparation for and participation in the NASA 2001 TRACE-P campaign using our airborne tunable diode laser system to acquire measurements of formaldehyde (CH2O); 2) comprehensive data analysis and data submittal to the NASA archive; 3) follow-up data interpretation, working with NASA modelers to place our ambient CH2O measurements into a broader photochemical context; 4) publication of numerous JGR papers using these data; 5) extensive follow-up laboratory tests on the selectivity and efficiency of our CH2O scrubbing system; and 6) an extensive follow-up effort to assess and study the mechanical stability of our entire optical system, particularly the multipass absorption cell, with aircraft changes in cabin pressure.

  8. Fluvial deposits as an archive of early human activity: Progress during the 20 years of the Fluvial Archives Group

    NASA Astrophysics Data System (ADS)

    Chauhan, Parth R.; Bridgland, David R.; Moncel, Marie-Hélène; Antoine, Pierre; Bahain, Jean-Jacques; Briant, Rebecca; Cunha, Pedro P.; Despriée, Jackie; Limondin-Lozouet, Nicole; Locht, Jean-Luc; Martins, Antonio A.; Schreve, Danielle C.; Shaw, Andrew D.; Voinchet, Pierre; Westaway, Rob; White, Mark J.; White, Tom S.

    2017-06-01

    Fluvial sedimentary archives are important repositories for Lower and Middle Palaeolithic artefacts throughout the 'Old World', especially in Europe, where the beginning of their study coincided with the realisation that early humans were of great antiquity. Now that many river terrace sequences can be reliably dated and correlated with the globally valid marine isotope record, potentially useful patterns can be recognized in the distribution of the find-spots of the artefacts that constitute the large collections that were assembled during the years of manual gravel extraction. This paper reviews the advances during the past two decades in knowledge of hominin occupation based on artefact occurrences in fluvial contexts, in Europe, Asia and Africa. As such it is an update of a comparable review in 2007, at the end of IGCP Project no. 449, which had instigated the compilation of fluvial records from around the world during 2000-2004, under the auspices of the Fluvial Archives Group. An overarching finding is the confirmation of the well-established view that in Europe there is a demarcation between handaxe making in the west and flake-core industries in the east, although on a wider scale that pattern is undermined by the increased numbers of Lower Palaeolithic bifaces now recognized in East Asia. It is also apparent that, although it seems to have appeared at different places and at different times in the later Lower Palaeolithic, the arrival of Levallois technology as a global phenomenon was similarly timed across the area occupied by Middle Pleistocene hominins, at around 0.3 Ma.

  9. Archiving California’s historical duck nesting data

    USGS Publications Warehouse

    Ackerman, Joshua T.; Herzog, Mark P.; Brady, Caroline; Eadie, John M.; Yarris, Greg S.

    2015-07-14

    With the conclusion of this project, most duck nest data have been entered, but all nest-captured hen data and other breeding waterfowl data that were outside the scope of this project have still not been entered and electronically archived. Maintaining an up-to-date archive will require additional resources to archive and enter the new duck nest data each year in an iterative process. Further, data proofing should be conducted whenever possible and should also be considered an iterative process, as there were sometimes missing data that could not be filled in without more direct knowledge of specific projects. Despite these disclaimers, this duck data archive represents a massive and useful dataset to inform future research and management questions.

  10. Medical image digital archive: a comparison of storage technologies

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy; Hutchings, Matt

    1998-07-01

    A cost-effective, high-capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever-increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. New technologies will be discussed, such as DVD and high-performance tape. Price and performance comparisons will be made at different archive capacities, and the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high-performance RAID disk storage devices to high-capacity Nearline storage devices will be introduced as a viable way to minimize overall storage costs for an archive.

  11. The bark of the branches of holm oak (Quercus ilex L.) for a retrospective study of trace elements in the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drava, Giuliana, E-mail: drava@difar.unige.it; Bri

    Tree bark has proved to be a useful bioindicator for trace elements in the atmosphere; however, it reflects an exposure occurring over an unidentified period of time. It therefore provides spatial information about the distribution of contaminants in a certain area, but it cannot be used to detect temporal changes or trends, which are important goals in environmental studies. In order to obtain information about a known period of time, the bark collected from the annual segments of tree branches can be used, allowing analyses going back 10–15 years with annual resolution. In the present study, the concentrations of As, Cd, Co, Cu, Fe, Mn, Ni, Pb, V and Zn were measured by atomic emission spectrometry in a series of samples covering the period from 2001 to 2013 in an urban environment. Downward time trends were significant for Cd, Pb and Zn. The only trace element showing an upward time trend was V. The concentrations of the remaining six trace elements were constant over time, showing that their presence in bark is not simply proportional to the duration of exposure. This approach, which is simple, reliable and widely applicable at a low cost, allows the "a posteriori" reconstruction of atmospheric trace element deposition when or where no monitoring programme is in progress and no other natural archives are available. - Highlights: • Branch bark allows the historical reconstruction of atmospheric trace elements. • This approach is simple, reliable, widely applicable and "a posteriori". • Downward time trends were found for Cd, Pb and Zn; upward trend for V.

  12. 50 CFR 635.33 - Archival tags.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Archival tags. 635.33 Section 635.33 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE ATLANTIC HIGHLY MIGRATORY SPECIES Management Measures § 635.33 Archival tags. (a...

  13. Trace Elements in River Waters

    NASA Astrophysics Data System (ADS)

    Gaillardet, J.; Viers, J.; Dupré, B.

    2003-12-01

    Trace elements are characterized by concentrations lower than 1 mg L⁻¹ in natural waters. This means that trace elements are not considered when "total dissolved solids" are calculated in rivers, lakes, or groundwaters, because their combined mass is not significant compared to the sum of Na⁺, K⁺, Ca²⁺, Mg²⁺, H₄SiO₄, HCO₃⁻, CO₃²⁻, SO₄²⁻, Cl⁻, and NO₃⁻. Therefore, most elements, except about ten of them, occur at trace levels in natural waters. Being a trace element in natural waters does not necessarily qualify an element as a trace element in rocks. For example, aluminum, iron, and titanium are major elements in rocks but occur as trace elements in waters, due to their low mobility at the Earth's surface. Conversely, trace elements in rocks such as chlorine and carbon are major elements in waters. The geochemistry of trace elements in river waters, like that of groundwater and seawater, is receiving increasing attention. This growing interest is clearly triggered by the technical advances made in the determination of concentrations at lower levels in water. In particular, the development of inductively coupled plasma mass spectrometry (ICP-MS) has considerably improved our knowledge of trace-element levels in waters since the early 1990s. ICP-MS provides the capability of determining trace elements having isotopes of interest for geochemical dating or tracing, even where their dissolved concentrations are extremely low. The determination of trace elements in natural waters is motivated by a number of issues. Although rare, trace elements in natural systems can play a major role in hydrosystems. This is particularly evident for toxic elements such as aluminum, whose concentrations are related to the abundance of fish in rivers. Many trace elements have been exploited from natural accumulation sites and used over thousands of years by human activities. Trace elements are therefore highly sensitive indexes of human impact from local to global scale. Pollution

  14. The new Gemini Observatory archive: a fast and low cost observatory data archive running in the cloud

    NASA Astrophysics Data System (ADS)

    Hirst, Paul; Cardenes, Ricardo

    2016-08-01

    We have developed and deployed a new data archive for the Gemini Observatory. Focused on simplicity and ease of use, the archive provides a number of powerful and novel features including automatic association of calibration data with the science data, and the ability to bookmark searches. A simple but powerful API allows programmatic search and download of data. The archive is hosted on Amazon Web Services, which provides us excellent internet connectivity and significant cost savings in both operations and development over more traditional deployment options. The code is written in python, utilizing a PostgreSQL database and Apache web server.
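
    As a rough illustration of the kind of programmatic search such an API enables, the sketch below builds a search URL from simple criteria. The endpoint and parameter names here are assumptions for illustration only, not the documented Gemini Observatory archive interface.

    ```python
    # Hypothetical archive-search URL builder; BASE and the parameter
    # names are invented, not the real Gemini archive API.
    from urllib.parse import urlencode

    BASE = "https://archive.example.org/searchform"  # placeholder endpoint

    def build_search_url(instrument, obs_date, obj_name):
        """Construct a search URL from simple query criteria."""
        params = {"instrument": instrument, "date": obs_date, "object": obj_name}
        return BASE + "?" + urlencode(params)

    url = build_search_url("GMOS-N", "20160801", "M31")
    ```

    A client would fetch such a URL, parse the returned results, and download the matching files; bookmarking a search then amounts to saving the query URL itself.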

  15. 36 CFR § 1280.66 - May I use the National Archives Library?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Archives Library? § 1280.66 Section § 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND... Facilities in the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park...

  16. Deep sequencing in library selection projects: what insight does it bring?

    PubMed Central

    Glanville, J; D’Angelo, S; Khan, T.A.; Reddy, S. T.; Naranjo, L.; Ferrara, F.; Bradbury, A.R.M.

    2015-01-01

    High throughput sequencing is poised to change all aspects of the way antibodies and other binders are discovered and engineered. Millions of available sequence reads provide an unprecedented sampling depth able to guide the design and construction of effective, high quality naïve libraries containing tens of billions of unique molecules. Furthermore, during selections, high throughput sequencing enables quantitative tracing of enriched clones and position-specific guidance to amino acid variation under positive selection during antibody engineering. Successful application of the technologies relies on specific PCR reagent design, correct sequencing platform selection, and effective use of computational tools and statistical measures to remove error, identify antibodies, estimate diversity, and extract signatures of selection from the clone down to individual structural positions. Here we review these considerations and discuss some of the remaining challenges to the widespread adoption of the technology. PMID:26451649

  17. Reference Model for an Open Archival Information System

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This document is a technical report for use in developing a consensus on what is required to operate a permanent, or indefinite long-term, archive of digital information. It may be useful as a starting point for a similar document addressing the indefinite long-term preservation of non-digital information. This report establishes a common framework of terms and concepts which comprise an Open Archival Information System (OAIS). It allows existing and future archives to be more meaningfully compared and contrasted. It provides a basis for further standardization within an archival context, and it should promote greater vendor awareness of, and support for, archival requirements. Through the process of normal evolution, it is expected that expansion, deletion, or modification of this document may occur. This report is therefore subject to CCSDS document management and change control procedures.

  18. The abundance and relative volatility of refractory trace elements in Allende Ca,Al-rich inclusions - Implications for chemical and physical processes in the solar nebula

    NASA Technical Reports Server (NTRS)

    Kornacki, Alan S.; Fegley, Bruce, Jr.

    1986-01-01

    The relative volatilities of lithophile refractory trace elements (LRTE) were determined using calculated 50-percent condensation temperatures. Then, the refractory trace-element abundances were measured in about 100 Allende inclusions. The abundance patterns found in Allende Ca,Al-rich inclusions (CAIs) and ultrarefractory inclusions were used to empirically modify the calculated LRTE volatility sequence. In addition, the importance of crystal-chemical effects, diffusion constraints, and grain transport for the origin of the trace-element chemistry of Allende CAIs (which have important implications for chemical and physical processes in the solar nebula) is discussed.

  19. Trace Elements Affect Methanogenic Activity and Diversity in Enrichments from Subsurface Coal Bed Produced Water

    PubMed Central

    Ünal, Burcu; Perry, Verlin Ryan; Sheth, Mili; Gomez-Alvarez, Vicente; Chin, Kuk-Jeong; Nüsslein, Klaus

    2012-01-01

    Microbial methane from coal beds accounts for a significant and growing percentage of natural gas worldwide. Our knowledge of the physical and geochemical factors regulating methanogenesis is still in its infancy. We hypothesized that in these closed systems, trace elements (as micronutrients) are a limiting factor for methanogenic growth and activity. Trace elements are essential components of enzymes or cofactors of metabolic pathways associated with methanogenesis. This study examined the effects of eight trace elements (iron, nickel, cobalt, molybdenum, zinc, manganese, boron, and copper) on methane production, on mcrA transcript levels, and on methanogenic community structure in enrichment cultures obtained from coal bed methane (CBM) well produced water samples from the Powder River Basin, Wyoming. Methane production was shown to be limited both by a lack of additional trace elements and by the addition of an overly concentrated trace element mixture. Addition of trace elements at concentrations optimized for standard media enhanced methane production by 37%. After 7 days of incubation, the levels of mcrA transcripts in enrichment cultures with trace element amendment were much higher than in cultures without amendment. Transcript levels of mcrA correlated positively with elevated rates of methane production in supplemented enrichments (R² = 0.95). Metabolically active methanogens, identified by clone sequences of mcrA mRNA retrieved from enrichment cultures, were closely related to Methanobacterium subterraneum and Methanobacterium formicicum. Enrichment cultures were dominated by M. subterraneum and had slightly higher predicted methanogenic richness, but less diversity, than enrichment cultures without amendments. These results suggest that varying concentrations of trace elements in produced water from different subsurface coal wells may cause changing levels of CBM production and alter the composition of the active methanogenic community. PMID
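
    For reference, the R² quoted in such correlation statements is the squared Pearson correlation between the two measured series. A minimal sketch with invented data points (not the study's measurements):

    ```python
    # R² as the squared Pearson correlation coefficient.
    # The transcript and methane values below are hypothetical.
    def r_squared(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy * sxy / (sxx * syy)

    transcripts = [1.0, 2.0, 3.0, 4.0]   # hypothetical mcrA transcript levels
    methane = [0.9, 2.1, 2.9, 4.1]       # hypothetical CH4 production rates
    r2 = r_squared(transcripts, methane)
    ```

    Values near 1 (such as the 0.95 reported) indicate that one series is nearly a linear function of the other.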

  20. ModelArchiver—A program for facilitating the creation of groundwater model archives

    USGS Publications Warehouse

    Winston, Richard B.

    2018-03-01

    ModelArchiver is a program designed to facilitate the creation of groundwater model archives that meet the requirements of the U.S. Geological Survey (USGS) policy (Office of Groundwater Technical Memorandum 2016.02, https://water.usgs.gov/admin/memo/GW/gw2016.02.pdf, https://water.usgs.gov/ogw/policy/gw-model/). ModelArchiver version 1.0 leads the user step-by-step through the process of creating a USGS groundwater model archive. The user specifies the contents of each of the subdirectories within the archive and provides descriptions of the archive contents. Descriptions of some files can be specified automatically using file extensions. Descriptions also can be specified individually. Those descriptions are added to a readme.txt file provided by the user. ModelArchiver moves the content of the archive to the archive folder and compresses some folders into .zip files. As part of the archive, the modeler must create a metadata file describing the archive. The program has a built-in metadata editor and provides links to websites that can aid in creation of the metadata. The built-in metadata editor is also available as a stand-alone program named FgdcMetaEditor version 1.0, which also is described in this report. ModelArchiver updates the metadata file provided by the user with descriptions of the files in the archive. An optional archive list file generated automatically by ModelMuse can streamline the creation of archives by identifying input files, output files, model programs, and ancillary files for inclusion in the archive.
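
    The core archiving steps described above (copying files into an archive folder and recording a description of each in readme.txt) can be sketched as follows. The function and file names are illustrative, not ModelArchiver's actual implementation.

    ```python
    # Sketch of the copy-and-describe step of an archiving workflow.
    # Names like "model.nam" are placeholders for illustration.
    import shutil
    import tempfile
    from pathlib import Path

    def archive_files(files_with_desc, archive_dir):
        """Copy each (path, description) pair into archive_dir and write readme.txt."""
        archive_dir = Path(archive_dir)
        archive_dir.mkdir(parents=True, exist_ok=True)
        lines = []
        for src, desc in files_with_desc:
            dest = archive_dir / Path(src).name
            shutil.copy(src, dest)
            lines.append(f"{dest.name}: {desc}")
        readme = archive_dir / "readme.txt"
        readme.write_text("\n".join(lines))
        return readme

    # Demo with a throwaway workspace and a dummy input file.
    work = Path(tempfile.mkdtemp())
    model_file = work / "model.nam"
    model_file.write_text("# dummy model input file")
    readme = archive_files([(model_file, "model name file")], work / "archive")
    text = readme.read_text()
    ```

    Compressing a finished subfolder, as ModelArchiver does for some directories, could then be a single `shutil.make_archive(str(folder), "zip", folder)` call.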

  1. An Invitation to the ALA Archives.

    ERIC Educational Resources Information Center

    Beckel, Deborah; Brichford, Maynard

    1984-01-01

    Description of materials found in American Library Association Archives located at University of Illinois highlights 1905 letter defending Melvil Dewey, the 1900 Saguenay River Trip, children's librarians, library education, 1926 visit to President Coolidge by foreign librarians, and the American Library in Mexico. Notes on using the archives are…

  2. Broadsides & Posters from the National Archives.

    ERIC Educational Resources Information Center

    National Archives and Records Administration, Washington, DC.

    This booklet evolved from research for the exhibition "Uncle Sam Speaks: Broadsides and Posters from the National Archives," which opened at the National Archives in February 1986. The booklet is presented chronologically, beginning with broadsides from the American Revolution and ending with posters of the 1980's. Accompanying text…

  3. Seasonally-resolved trace element concentrations in stalagmites from a shallow cave in New Mexico

    NASA Astrophysics Data System (ADS)

    Sekhon, N.; Banner, J.; Miller, N. R.; Carlson, P. E.; Breecker, D.

    2017-12-01

    High-resolution (sub-annual/seasonal) paleoclimate records extending beyond the instrumental period are required to test climate models and better understand how climate warming/cooling and wetting/drying are manifested seasonally. This is particularly the case for areas such as the southwest United States where precipitation and temperature seasonality dictate the regional climate. Study of a 20th century stalagmite (Carlson et al., in prep) (1) documented seasonal variation in trace element compositions of a stalagmite from a shallow, well-ventilated cave and (2) demonstrated the seasonal variation in stalagmite Mg to be in agreement with predicted temperature-dependent fractionation between water and calcite. The seasonal nature of variability was constrained by monitoring the cave on a monthly basis (Casteel and Banner, 2015; Carlson et al., in prep). Here we expand the use of stalagmites from shallow, well-ventilated caves as seasonally resolved climate archives by studying trace element variations in two coeval modern stalagmites (SBFC-1 and SBFC-2) cored from Sitting Bull Falls, southern New Mexico. Seasonal cycles will be confirmed by analyzing Mg, Ba, and Sr in in-situ calcite precipitated on artificial substrates as available (July, Sept., and Nov. 2017). The chronology is constrained by semi-automated peak counting and the 14C bomb peak. In addition, principal component analyses of trace element data identify two primary underlying modes of trace element variability, for soil-derived elements (Cu, Zn, and Fe) and bedrock-derived elements (Mg, Sr, and Ba). We hypothesize that the soil-derived elements are transported by seasonal infiltration of organic colloids and that the bedrock-derived elements are controlled by variability in cave air temperature, drip water, and calcite growth rate. The two modes of variability will be calibrated against instrumental data over the 20th century. When complete, these new seasonally resolved proxy records will

  4. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.
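    The instrumented-program category above can be illustrated with a toy Python sketch: a wrapped array appends a record to a shared trace on every element access, so running a small kernel yields an ordered address trace for later analysis. The base addresses, record format, and kernel are invented for illustration; real tracers instrument machine-level loads and stores, not Python objects.

    ```python
    class TracedArray:
        """Toy instrumented data structure: every read or write appends a
        (operation, address) record to a shared trace list."""

        def __init__(self, data, base, trace):
            self.data = list(data)
            self.base = base          # pretend base address of the array
            self.trace = trace        # shared list collecting the trace

        def __getitem__(self, i):
            self.trace.append(("R", self.base + i))
            return self.data[i]

        def __setitem__(self, i, v):
            self.trace.append(("W", self.base + i))
            self.data[i] = v

    def dot(trace):
        """A small kernel whose memory references are captured in `trace`."""
        a = TracedArray([1, 2, 3], base=0x1000, trace=trace)
        b = TracedArray([4, 5, 6], base=0x2000, trace=trace)
        return sum(a[i] * b[i] for i in range(3))
    ```

    The resulting trace preserves reference order, which is what cache and memory-system simulators consume; the same idea is what binary instrumentation does at much lower level and cost.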

  5. Flexible server-side processing of climate archives

    NASA Astrophysics Data System (ADS)

    Juckes, Martin; Stephens, Ag; Damasio da Costa, Eduardo

    2014-05-01

    The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
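    The dispatch behavior described above, short jobs run in near real-time while longer jobs are queued, can be sketched as follows. The threshold, class names, and command builder are assumptions for illustration, not the CEDA WPS implementation; `fldmean` is a standard CDO operator used here only as an example.

    ```python
    from collections import deque

    def build_cdo_command(operator, infile, outfile):
        """Assemble a CDO invocation, e.g. a horizontal field mean."""
        return ["cdo", operator, infile, outfile]

    class Dispatcher:
        """Run short jobs immediately; queue longer ones for later.

        The 10-second threshold is an assumed tuning parameter standing in
        for whatever the real load-balancing system uses.
        """

        def __init__(self, short_threshold_s=10.0):
            self.short_threshold_s = short_threshold_s
            self.queue = deque()   # queued job ids, oldest first

        def submit(self, job_id, estimated_seconds):
            if estimated_seconds <= self.short_threshold_s:
                return "run-now"
            self.queue.append(job_id)
            return "queued"
    ```

    A queued job's status would then be surfaced to the user through the web dashboard mentioned in the abstract.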

  6. Flexible server-side processing of climate archives

    NASA Astrophysics Data System (ADS)

    Juckes, M. N.; Stephens, A.; da Costa, E. D.

    2013-12-01

    The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.

  7. SolTrace | Concentrating Solar Power | NREL

    Science.gov Websites

    SolTrace is available as an NREL packaged distribution or from source code at the SolTrace open source project website. The code uses Monte-Carlo ray-tracing methodology.

  8. Compendium of NASA data base for the global tropospheric experiment's Transport and Atmospheric Chemistry Near the Equator-Atlantic (TRACE-A)

    NASA Technical Reports Server (NTRS)

    Gregory, Gerald L.; Scott, A. Donald, Jr.

    1995-01-01

    This compendium describes aircraft data that are available from NASA's Transport and Atmospheric Chemistry near the Equator - Atlantic (TRACE-A) mission conducted in September/October 1992. The broad objectives of TRACE-A were to study chemical processes and long-range transport associated with South American and African continental outflow during periods of widespread vegetation burning, and to understand the ozone enhancements observed from satellite data measured over the southern tropical Atlantic Ocean during the September/October time period. Flight experiments were conducted from Brazil, South Africa, Namibia, and Ascension Island. This document provides a representation of aircraft data that are available from NASA Langley's Distributed Active Archive Center (DAAC). The data format of time series and altitude profile plots is not intended to support original analyses, but to assist the reader in identifying data that are of interest. This compendium covers only the NASA aircraft data. The DAAC data base includes numerous supporting data: meteorological products, results from surface studies, satellite observations, and data from sonde releases.

  9. Contents of the JPL Distributed Active Archive Center (DAAC) archive, version 2-91

    NASA Technical Reports Server (NTRS)

    Smith, Elizabeth A. (Editor); Lassanyi, Ruby A. (Editor)

    1991-01-01

    The Distributed Active Archive Center (DAAC) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea surface height, surface wind vector, sea surface temperature, atmospheric liquid water, and surface pigment concentration. The Jet Propulsion Laboratory DAAC is an element of the Earth Observing System Data and Information System (EOSDIS) and will be the United States distribution site for the Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  10. A system approach to archival storage

    NASA Technical Reports Server (NTRS)

    Corcoran, John W.

    1991-01-01

    The introduction and viewgraphs of a discussion on a system approach to archival storage, presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop, are included. The use of D-2 iron particles for archival storage is discussed, along with how acceleration factors relating short-term tests to archival lifetimes can be justified. Ampex Recording Systems is transferring D-2 video technology to data storage applications and encountering concerns about corrosion. To protect the D-2 standard, Battelle tests were done on all four tapes in the Class 2 environment. Error rates were measured before and after the test on both exposed and control groups.

  11. A Robust, Low-Cost Virtual Archive for Science Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Vollmer, Bruce

    2005-01-01

    Despite their expense, tape silos are still often the only affordable option for petabyte-scale science data archives, particularly when other factors such as data reliability, floor space, power and cooling load are accounted for. However, the complexity, management software, hardware reliability and access latency of tape silos make online data storage ever more attractive. Drastic reductions in the cost of mass-market PC disk drives (approx. $1/GB) help to make this more affordable, but such systems are challenging to scale to the petabyte range and of questionable reliability for archival use. On the other hand, if much of the science archive could be "virtualized", i.e., produced on demand when requested by users, we would need to store only a fraction of the data online, perhaps bringing an online-only system into an affordable range. Radiance data from the satellite-borne Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provide a good opportunity for such a virtual archive: the raw data amount to 140 GB/day, which is small relative to the 550 GB/day making up the radiance products. These data are routinely processed as inputs for geophysical parameter products and then archived on tape at the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) for distribution to users. Virtualizing them would immediately and significantly reduce the amount of data being stored in the tape archives and allow more customizable products. A prototype of such a virtual archive is being developed to prove the concept and develop ways of incorporating the robustness that a science data archive requires.
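    The produce-on-demand idea can be sketched as a cache-backed getter: products are regenerated from raw inputs when requested, and only a bounded number of recently requested products stay "online". The cache policy, sizes, and names here are hypothetical; the real system operates on petabyte-scale MODIS processing chains, not in-memory dictionaries.

    ```python
    def make_virtual_archive(produce, cache_limit=2):
        """Return a getter that regenerates products on demand.

        `produce` maps a product id to its (expensive) regenerated data;
        only the `cache_limit` most recently requested products are kept,
        standing in for storing a fraction of the archive online.
        """
        cache = {}   # product id -> regenerated data
        order = []   # least recently used product id at the front

        def get(product_id):
            if product_id in cache:
                order.remove(product_id)
                order.append(product_id)
                return cache[product_id]
            data = produce(product_id)   # "virtualized": computed, not stored
            cache[product_id] = data
            order.append(product_id)
            if len(order) > cache_limit:
                del cache[order.pop(0)]
            return data

        return get
    ```

    The trade-off is the one the abstract describes: storage shrinks, while repeated requests for evicted products pay the regeneration cost again.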

  12. Evaluation of positive Rift Valley fever virus formalin-fixed paraffin embedded samples as a source of sequence data for retrospective phylogenetic analysis.

    PubMed

    Mubemba, B; Thompson, P N; Odendaal, L; Coetzee, P; Venter, E H

    2017-05-01

    Rift Valley fever (RVF), caused by an arthropod-borne Phlebovirus in the family Bunyaviridae, is a haemorrhagic disease that affects ruminants and humans. Due to the zoonotic nature of the virus, a biosafety level 3 laboratory is required for isolation of the virus. Fresh and frozen samples are the preferred sample type for isolation and acquisition of sequence data. However, these samples are scarce, in addition to posing a health risk to laboratory personnel. Archived formalin-fixed, paraffin-embedded (FFPE) tissue samples are safe and readily available; however, FFPE-derived RNA is in most cases degraded and cross-linked, and it is unknown whether this sample type would be suitable as reference material for retrospective phylogenetic studies. A RT-PCR assay targeting a 490 nt portion of the structural Gn glycoprotein-encoding gene of the RVFV M-segment was applied to total RNA extracted from archived RVFV-positive FFPE samples. Several attempts to obtain target amplicons were unsuccessful. FFPE samples were then analysed using next generation sequencing (NGS), i.e. TruSeq® (Illumina) libraries sequenced on the MiSeq® genome analyser (Illumina). Using reference mapping, gapped virus sequence data of varying degrees of shallow depth was aligned to a reference sequence. However, the NGS did not yield contigs long enough to consistently cover the same genome regions in all samples to allow phylogenetic analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Digitized Archival Primary Sources in STEM: A Selected Webliography

    ERIC Educational Resources Information Center

    Jankowski, Amy

    2017-01-01

    Accessibility and findability of digitized archival resources can be a challenge, particularly for students or researchers not familiar with archival formats and digital interfaces, which adhere to different descriptive standards than more widely familiar library resources. Numerous aggregate archival collection databases exist, which provide a…

  14. 36 CFR 1253.1 - National Archives Building.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Hours for the Research Center and the Central Research Room are posted at http://www.archives.gov. The exhibit areas' hours of operation are also posted at http://www.archives.gov. Last admission to the...

  15. 36 CFR 1253.1 - National Archives Building.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Hours for the Research Center and the Central Research Room are posted at http://www.archives.gov. The exhibit areas' hours of operation are also posted at http://www.archives.gov. Last admission to the...

  16. 36 CFR 1253.1 - National Archives Building.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Hours for the Research Center and the Central Research Room are posted at http://www.archives.gov. The exhibit areas' hours of operation are also posted at http://www.archives.gov. Last admission to the...

  17. Lunar Ferroan Anorthosite Petrogenesis: Clues from Trace Element Distributions in FAN Subgroups

    NASA Astrophysics Data System (ADS)

    Floss, Christine; James, Odette B.; McGee, James J.; Crozaz, Ghislaine

    1998-04-01

    The rare earth elements (REE) and selected other trace elements were measured in plagioclase and pyroxene from nine samples of the lunar ferroan anorthosite (FAN) suite of rocks. Samples were selected from each of four FAN subgroups previously defined by James et al. (1989). Plagioclase compositions are homogeneous within each sample, but high- and low-Ca pyroxenes from lithic clasts typically have different REE abundances from their counterparts in the surrounding granulated matrices. Measured plagioclase/low-Ca pyroxene concentration ratios for the REE have steeper patterns than experimentally determined plagioclase/low-Ca pyroxene partition coefficients in most samples. Textural and trace element evidence suggest that, although subsolidus equilibration may be responsible for some of the discrepancy, plagioclase compositions in most samples have been largely unaffected by intermineral redistribution of the REE. The REE systematics of plagioclase from the four subgroups are broadly consistent with their derivation through crystallization from a single evolving magma. However, samples from some of the subgroups exhibit a decoupling of plagioclase and pyroxene compositions that probably reflects the complexities inherent in crystallization from a large-scale magmatic system. For example, two anorthosites with very magnesian mafic minerals have highly evolved trace element compositions; major element compositions in plagioclase also do not reflect the evolutionary sequence recorded by their REE compositions. Finally, a noritic anorthosite breccia with relatively ferroan mafic minerals contains several clasts with high and variable REE and other trace element abundances. Although plagioclase REE compositions are consistent with their derivation from a magma with a KREEPy trace element signature, very shallow REE patterns in the pyroxenes suggest the addition of a component enriched in the light REE.

  18. Life Sciences Data Archive Scientific Development

    NASA Technical Reports Server (NTRS)

    Buckey, Jay C., Jr.

    1995-01-01

    The Life Sciences Data Archive will provide scientists, managers and the general public with access to biomedical data collected before, during and after spaceflight. These data are often irreplaceable and represent a major resource from the space program. For these data to be useful, however, they must be presented with enough supporting information, description and detail so that an interested scientist can understand how, when and why the data were collected. The goal of this contract was to provide a scientific consultant to the archival effort at the NASA-Johnson Space Center. This consultant (Jay C. Buckey, Jr., M.D.) is a scientist, who was a co-investigator on both the Spacelab Life Sciences-1 and Spacelab Life Sciences-2 flights. In addition he was an alternate payload specialist for the Spacelab Life Sciences-2 flight. In this role he trained on all the experiments on the flight and so was familiar with the protocols, hardware and goals of all the experiments on the flight. Many of these experiments were flown on both SLS-1 and SLS-2. This background was useful for the archive, since the first mission to be archived was Spacelab Life Sciences-1. Dr. Buckey worked directly with the archive effort to ensure that the parameters, scientific descriptions, protocols and data sets were accurate and useful.

  19. Resources for Archives: Developing Collections, Constituents, Colleagues, and Capital

    ERIC Educational Resources Information Center

    Primer, Ben

    2009-01-01

    The essential element for archival success is to be found in the quality of management decisions made and public services provided. Archivists can develop first-class archives operations through understanding the organizational context; planning; hiring, retaining, and developing staff; meeting archival standards for storage and access; and…

  20. The Alaska Arctic Vegetation Archive (AVA-AK)

    DOE PAGES

    Walker, Donald; Breen, Amy; Druckenmiller, Lisa; ...

    2016-05-17

    The Alaska Arctic Vegetation Archive (AVA-AK, GIVD-ID: NA-US-014) is a free, publicly available database archive of vegetation-plot data from the Arctic tundra region of northern Alaska. The archive currently contains 24 datasets with 3,026 non-overlapping plots. Of these, 74% have geolocation data with 25-m or better precision. Species cover data and header data are stored in a Turboveg database. A standardized Pan Arctic Species List provides a consistent nomenclature for vascular plants, bryophytes, and lichens in the archive. A web-based online Alaska Arctic Geoecological Atlas (AGA-AK) allows viewing and downloading the species data in a variety of formats, and provides access to a wide variety of ancillary data. We conducted a preliminary cluster analysis of the first 16 datasets (1,613 plots) to examine how the spectrum of derived clusters is related to the suite of datasets, habitat types, and environmental gradients. Here, we present the contents of the archive, assess its strengths and weaknesses, and provide supplementary files that include the data dictionary, a list of habitat types, an overview of the datasets, and details of the cluster analysis.

  1. NADIR: A Flexible Archiving System Current Development

    NASA Astrophysics Data System (ADS)

    Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.

    2014-05-01

    The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software usually offers simple and robust solutions for data archiving and distribution but is awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and metadata content are the main threats to reuse. NADIR, using stable and mature framework features, answers these very challenging issues. Its main characteristics are a configuration database, a multi-threading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, and error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multicast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible with default configurations for every project, first of all for LBT and other IA2 projects.

  2. The Alaska Arctic Vegetation Archive (AVA-AK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Donald; Breen, Amy; Druckenmiller, Lisa

    The Alaska Arctic Vegetation Archive (AVA-AK, GIVD-ID: NA-US-014) is a free, publicly available database archive of vegetation-plot data from the Arctic tundra region of northern Alaska. The archive currently contains 24 datasets with 3,026 non-overlapping plots. Of these, 74% have geolocation data with 25-m or better precision. Species cover data and header data are stored in a Turboveg database. A standardized Pan Arctic Species List provides a consistent nomenclature for vascular plants, bryophytes, and lichens in the archive. A web-based online Alaska Arctic Geoecological Atlas (AGA-AK) allows viewing and downloading the species data in a variety of formats, and provides access to a wide variety of ancillary data. We conducted a preliminary cluster analysis of the first 16 datasets (1,613 plots) to examine how the spectrum of derived clusters is related to the suite of datasets, habitat types, and environmental gradients. Here, we present the contents of the archive, assess its strengths and weaknesses, and provide supplementary files that include the data dictionary, a list of habitat types, an overview of the datasets, and details of the cluster analysis.

  3. Trace elements in dialysis.

    PubMed

    Filler, Guido; Felder, Sarah

    2014-08-01

    In end-stage chronic kidney disease (CKD), pediatric nephrologists must consider the homeostasis of the multiple water-soluble ions that are influenced by renal replacement therapy (RRT). While certain ions such as potassium and calcium are closely monitored, little is known about the handling of trace elements in pediatric dialysis. RRT may lead to accumulation of toxic trace elements, either due to insufficient elimination or due to contamination, or to excessive removal of essential trace elements. However, trace elements are not routinely monitored in dialysis patients and no mechanism for these deficits or toxicities has been established. This review summarizes the handling of trace elements, with particular attention to pediatric data. The best data describe lead and indicate that there is a higher prevalence of elevated lead (Pb, atomic number 82) levels in children on RRT when compared to adults. Lead is particularly toxic in neurodevelopment and lead levels should therefore be monitored. Monitoring of zinc (Zn, atomic number 30) and selenium (Se, atomic number 34) may be indicated in the monitoring of all pediatric dialysis patients to reduce morbidity from deficiency. Prospective studies evaluating the impact of abnormal trace elements and the possible therapeutic value of intervention are required.

  4. ROSETTA: How to archive more than 10 years of mission

    NASA Astrophysics Data System (ADS)

    Barthelemy, Maud; Heather, D.; Grotheer, E.; Besse, S.; Andres, R.; Vallejo, F.; Barnes, T.; Kolokolova, L.; O'Rourke, L.; Fraga, D.; A'Hearn, M. F.; Martin, P.; Taylor, M. G. G. T.

    2018-01-01

    The Rosetta spacecraft was launched in 2004 and, after several planetary and two asteroid fly-bys, arrived at comet 67P/Churyumov-Gerasimenko in August 2014. After escorting the comet for two years and executing its scientific observations, the mission ended on 30 September 2016 with a touchdown on the comet surface. This paper describes how the Planetary Science Archive (PSA) and the Planetary Data System - Small Bodies Node (PDS-SBN) worked with the Rosetta instrument teams to prepare the science data collected over the course of the Rosetta mission for inclusion in the science archive. As Rosetta is an international mission in collaboration between ESA and NASA, all science data from the mission are fully archived within both the PSA and the PDS. The Rosetta archiving process, supporting tools, archiving systems, and their evolution throughout the mission are described, along with a discussion of a number of the challenges faced during the Rosetta implementation. The paper then presents the current status of the archive for each of the science instruments, before looking to the improvements planned both for the archive itself and for the Rosetta data content. The lessons learned from the first 13 years of archiving on Rosetta are finally discussed with an aim to help future missions plan and implement their science archives.

  5. A MYSQL-BASED DATA ARCHIVER: PRELIMINARY RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Bickley; Christopher Slominski

    2008-01-23

    Following an evaluation of the archival requirements of the Jefferson Laboratory accelerator’s user community, a prototyping effort was executed to determine if an archiver based on MySQL had sufficient functionality to meet those requirements. This approach was chosen because an archiver based on a relational database enables the development effort to focus on data acquisition and management, letting the database take care of storage, indexing and data consistency. It was clear from the prototype effort that there were no performance impediments to successful implementation of a final system. With our performance concerns addressed, the lab undertook the design and development of an operational system. The system is in its operational testing phase now. This paper discusses the archiver system requirements, some of the design choices and their rationale, and presents the acquisition, storage and retrieval performance.
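    The division of labor described above, the application handles acquisition while the relational database handles storage, indexing, and consistency, can be sketched with Python's built-in sqlite3 module standing in for MySQL. The schema and channel names are illustrative assumptions, not Jefferson Lab's actual design.

    ```python
    import sqlite3

    def open_archive(path=":memory:"):
        """Create the sample store; the database owns layout and indexing."""
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS samples (
                          channel TEXT NOT NULL,
                          t       REAL NOT NULL,
                          value   REAL NOT NULL)""")
        # The composite index is what lets the database, rather than the
        # application, make time-range retrieval by channel fast.
        db.execute("CREATE INDEX IF NOT EXISTS idx_chan_t ON samples(channel, t)")
        return db

    def store(db, channel, t, value):
        """Acquisition side: append one (channel, time, value) sample."""
        db.execute("INSERT INTO samples VALUES (?, ?, ?)", (channel, t, value))

    def retrieve(db, channel, t0, t1):
        """Retrieval side: all samples for a channel in a time window."""
        cur = db.execute(
            "SELECT t, value FROM samples "
            "WHERE channel = ? AND t BETWEEN ? AND ? ORDER BY t",
            (channel, t0, t1))
        return cur.fetchall()
    ```

    Swapping SQLite for MySQL changes only the connection call, which is exactly the portability argument for building an archiver on a relational database.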

  6. 76 FR 19147 - Advisory Committee on the Electronic Records Archives (ACERA)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... Electronic Records Archives (ACERA). The meeting has been consolidated into one day. This meeting will be... number of individuals planning to attend must be submitted to the Electronic Records Archives Program at...

  7. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In order to further the state-of-the-art in computational aerosciences (CAS) technology, researchers must be able to gather and understand existing work in the field. One aspect of this information gathering is studying published work available in scientific journals and conference proceedings. However, current scientific publications are very limited in the type and amount of information that they can disseminate. Information is typically restricted to text, a few images, and a bibliography list. Additional information that might be useful to the researcher, such as additional visual results, referenced papers, and datasets, are not available. New forms of electronic publication, such as the World Wide Web (WWW), limit publication size only by available disk space and data transmission bandwidth, both of which are improving rapidly. The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is in the process of creating an archive of CAS information on the WWW. This archive will be based on the large amount of information produced by researchers associated with the NAS facility. The archive will contain technical summaries and reports of research performed on NAS supercomputers, visual results (images, animations, visualization system scripts), datasets, and any other supporting meta-information. This information will be available via the WWW through the NAS homepage, located at http://www.nas.nasa.gov/, fully indexed for searching. The main components of the archive are technical summaries and reports, visual results, and datasets. Technical summaries are gathered every year by researchers who have been allotted resources on NAS supercomputers. These summaries, together with supporting visual results and references, are browsable by interested researchers. Referenced papers made available by researchers can be accessed through hypertext links. Technical reports are in-depth accounts of tools and applications research projects

  8. The Kanzelhöhe Online Data Archive

    NASA Astrophysics Data System (ADS)

    Pötzi, W.; Hirtenfellner-Polanec, W.; Temmer, M.

    The Kanzelhöhe Observatory provides high-cadence full-disk observations of solar activity phenomena like sunspots, flares and prominence eruptions on a regular basis. The data are available for download from the KODA (Kanzelhöhe Observatory Data Archive) which is freely accessible. The archive offers sunspot drawings back to 1950 and high cadence H-α data back to 1973. Images from other instruments, like white-light and CaIIK, are available since 2007 and 2010, respectively. In the following we describe how to access the archive and the format of the data.

  9. 36 CFR 1253.2 - National Archives at College Park.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... College Park. (a) The National Archives at College Park is located at 8601 Adelphi Road, College Park, MD...

  10. 36 CFR 1253.2 - National Archives at College Park.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...

  11. 36 CFR 1253.2 - National Archives at College Park.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...

  12. 36 CFR 1253.2 - National Archives at College Park.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...

  13. Radiation Embrittlement Archive Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klasky, Hilda B; Bass, Bennett Richard; Williams, Paul T

    2013-01-01

    The Radiation Embrittlement Archive Project (REAP), which is being conducted by the Probabilistic Integrity Safety Assessment (PISA) Program at Oak Ridge National Laboratory under funding from the U.S. Nuclear Regulatory Commission's (NRC) Office of Nuclear Regulatory Research, aims to provide an archival source of information about the effect of neutron radiation on the properties of reactor pressure vessel (RPV) steels. Specifically, this project is an effort to create an Internet-accessible RPV steel embrittlement database. The project's website, https://reap.ornl.gov, provides information in two forms: (1) a document archive with surveillance capsule reports and related technical reports, in PDF format, for the 104 commercial nuclear power plants (NPPs) in the United States, along with similar reports from other countries; and (2) a relational database archive with detailed information extracted from the reports. The REAP project focuses on data collected from surveillance capsule programs for light-water moderated nuclear power reactor vessels operated in the United States, including data on Charpy V-notch energy testing results, tensile properties, composition, exposure temperatures, neutron flux (rate of irradiation damage), and fluence (fast neutron fluence, a cumulative measure of irradiation for E > 1 MeV). Additionally, REAP contains data from surveillance programs conducted in other countries. REAP is presently being extended to focus on embrittlement data analysis as well. This paper summarizes the current status of the REAP database and highlights opportunities to access the data and to participate in the project.

  14. BAO Plate Archive Project

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Gigoyan, K. S.; Gyulzadyan, M. V.; Paronyan, G. M.; Abrahamyan, H. V.; Andreasyan, H. R.; Azatyan, N. M.; Kostandyan, G. R.; Samsonyan, A. L.; Mikayelyan, G. A.; Farmanyan, S. V.; Harutyunyan, V. L.

    2017-12-01

    We present the Byurakan Astrophysical Observatory (BAO) Plate Archive Project, which is aimed at digitization, extraction and analysis of archival data and at building an electronic database and interactive sky map. The BAO Plate Archive consists of 37,500 photographic plates and films obtained with the 2.6m telescope, the 1m and 0.5m Schmidt telescopes and other smaller ones during 1947-1991. The 2000 plates of the famous Markarian Survey (or the First Byurakan Survey, FBS) were digitized in 2002-2005 and the Digitized FBS (DFBS, www.aras.am/Dfbs/dfbs.html) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. Several other smaller digitization projects have been carried out as well, such as part of the Second Byurakan Survey (SBS) plates, photographic chain plates in Coma, where the blazar ON 231 is located, and 2.6m film spectra of FBS Blue Stellar Objects. However, most of the plates and films are not digitized. In 2015, we started a project to digitize the whole BAO Plate Archive, create an electronic database and put it to scientific use. The Armenian Virtual Observatory (ArVO, www.aras.am/Arvo/arvo.htm) database will accommodate all new data. The project runs in collaboration with the Armenian Institute of Informatics and Automation Problems (IIAP) and will continue for four years, 2015-2018. The final result will be an electronic database and an online interactive sky map to be used for further research projects. ArVO will provide all standards and tools for efficient usage of the scientific output and its integration in international databases.

  15. Deep sequencing in library selection projects: what insight does it bring?

    PubMed

    Glanville, J; D'Angelo, S; Khan, T A; Reddy, S T; Naranjo, L; Ferrara, F; Bradbury, A R M

    2015-08-01

    High throughput sequencing is poised to change all aspects of the way antibodies and other binders are discovered and engineered. Millions of available sequence reads provide an unprecedented sampling depth able to guide the design and construction of effective, high quality naïve libraries containing tens of billions of unique molecules. Furthermore, during selections, high throughput sequencing enables quantitative tracing of enriched clones and position-specific guidance to amino acid variation under positive selection during antibody engineering. Successful application of the technologies relies on specific PCR reagent design, correct sequencing platform selection, and effective use of computational tools and statistical measures to remove error, identify antibodies, estimate diversity, and extract signatures of selection from the clone down to individual structural positions. Here we review these considerations and discuss some of the remaining challenges to the widespread adoption of the technology. Copyright © 2015 Elsevier Ltd. All rights reserved.
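The quantitative tracing of enriched clones mentioned above amounts, at its simplest, to comparing clone frequencies between selection rounds. The following is a minimal illustrative sketch, not the authors' pipeline; the clone names and read counts are hypothetical, and a simple pseudocount stands in for the error modeling and diversity estimation a real analysis would require:

```python
from collections import Counter

def enrichment_ratios(pre_counts, post_counts):
    """Per-clone enrichment ratios between two selection rounds.

    Frequencies are normalized within each round so the ratios are
    independent of sequencing depth; a pseudocount of 1 avoids
    division by zero for clones unseen before selection.
    """
    pre_total = sum(pre_counts.values()) + len(post_counts)  # includes pseudocounts
    post_total = sum(post_counts.values())
    ratios = {}
    for clone, n_post in post_counts.items():
        n_pre = pre_counts.get(clone, 0) + 1  # pseudocount
        ratios[clone] = (n_post / post_total) / (n_pre / pre_total)
    return ratios

# Hypothetical read counts for three clones before and after one selection round
pre = Counter({"cloneA": 50, "cloneB": 50, "cloneC": 2})
post = Counter({"cloneA": 10, "cloneB": 400, "cloneC": 90})
ratios = enrichment_ratios(pre, post)
```

Here cloneC, rare before selection but abundant after, receives the highest ratio, which is the signature of positive selection the abstract refers to.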

  16. Troubleshooting Public Data Archiving: Suggestions to Increase Participation

    PubMed Central

    Roche, Dominique G.; Lanfear, Robert; Binning, Sandra A.; Haff, Tonya M.; Schwanz, Lisa E.; Cain, Kristal E.; Kokko, Hanna; Jennions, Michael D.; Kruuk, Loeske E. B.

    2014-01-01

    An increasing number of publishers and funding agencies require public data archiving (PDA) in open-access databases. PDA has obvious group benefits for the scientific community, but many researchers are reluctant to share their data publicly because of real or perceived individual costs. Improving participation in PDA will require lowering costs and/or increasing benefits for primary data collectors. Small, simple changes can enhance existing measures to ensure that more scientific data are properly archived and made publicly available: (1) facilitate more flexible embargoes on archived data, (2) encourage communication between data generators and re-users, (3) disclose data re-use ethics, and (4) encourage increased recognition of publicly archived data. PMID:24492920

  17. Guide to the Seattle Archives Branch.

    ERIC Educational Resources Information Center

    Hobbs, Richard, Comp.

    The guide presents an overview of the textual and microfilmed records located at the Seattle Branch of the National Archives of the United States. Established in 1969, the Seattle Archives Branch is one of 11 branches which preserve and make available for research those U.S. Government records of permanent value created and maintained by Federal…

  18. The European Classical Swine Fever Virus Database: Blueprint for a Pathogen-Specific Sequence Database with Integrated Sequence Analysis Tools

    PubMed Central

    Postel, Alexander; Schmeiser, Stefanie; Zimmermann, Bernd; Becher, Paul

    2016-01-01

    Molecular epidemiology has become an indispensable tool in the diagnosis of diseases and in tracing the infection routes of pathogens. Due to advances in conventional sequencing and the development of high throughput technologies, the field of sequence determination is in the process of being revolutionized. Platforms for sharing sequence information and providing standardized tools for phylogenetic analyses are becoming increasingly important. The database (DB) of the European Union (EU) and World Organisation for Animal Health (OIE) Reference Laboratory for classical swine fever offers one of the world’s largest semi-public virus-specific sequence collections combined with a module for phylogenetic analysis. The classical swine fever (CSF) DB (CSF-DB) became a valuable tool for supporting diagnosis and epidemiological investigations of this highly contagious disease in pigs with high socio-economic impacts worldwide. The DB has been re-designed and now allows for the storage and analysis of traditionally used, well established genomic regions and of larger genomic regions including complete viral genomes. We present an application example for the analysis of highly similar viral sequences obtained in an endemic disease situation and introduce the new geographic “CSF Maps” tool. The concept of this standardized and easy-to-use DB with an integrated genetic typing module is suited to serve as a blueprint for similar platforms for other human or animal viruses. PMID:27827988

  19. EOSDIS: Archive and Distribution Systems in the Year 2000

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lake, Alla

    2000-01-01

    Earth Science Enterprise (ESE) is a long-term NASA research mission to study the processes leading to global climate change. The Earth Observing System (EOS) is a NASA campaign of satellite observatories that are a major component of ESE. The EOS Data and Information System (EOSDIS) is another component of ESE that will provide the Earth science community with easy, affordable, and reliable access to Earth science data. EOSDIS is a distributed system, with major facilities at seven Distributed Active Archive Centers (DAACs) located throughout the United States. The EOSDIS software architecture is being designed to receive, process, and archive several terabytes of science data on a daily basis. Thousands of science users and perhaps several hundred thousand non-science users are expected to access the system. The first major set of data to be archived in the EOSDIS is from Landsat-7. Another EOS satellite, Terra, was launched on December 18, 1999. With the Terra launch, the EOSDIS will be required to support approximately one terabyte of data into and out of the archives per day. Since EOS is a multi-mission program, including the launch of more satellites and many other missions, the role of the archive systems becomes larger and more critical. In 1995, at the fourth convening of the NASA Mass Storage Systems and Technologies Conference, the development plans for the EOSDIS information system and archive were described. Five years later, many changes have occurred in the effort to field an operational system. It is interesting to reflect on some of the changes driving the archive technology and system development for EOSDIS. This paper principally describes the Data Server subsystem, including how the other subsystems access the archive, the nature of the data repository, and the mass-storage I/O management. The paper reviews the system architecture (both hardware and software) of the basic components of the archive. It discusses the operations concept, code

  20. The Post-Soviet Archives: Organization, Access, and Declassification

    DTIC Science & Technology

    1993-01-01

    attempting to place these files under Poskeirkhlv but has had limited success. The successors to the 503-the Ministry of Security and the Foreign...their transfer to Roakomazkbiv. Pikhoia was able to take over these archives with some success; yet, complete control over the RO= archives has eluded...key (Mironenko interview, May 27, 1992) players are involved in the management of the Russian Presidential Archive. First, the director of the

  1. Use of whole-genome sequencing to trace, control and characterize the regional expansion of extended-spectrum β-lactamase producing ST15 Klebsiella pneumoniae.

    PubMed

    Zhou, Kai; Lokate, Mariette; Deurenberg, Ruud H; Tepper, Marga; Arends, Jan P; Raangs, Erwin G C; Lo-Ten-Foe, Jerome; Grundmann, Hajo; Rossen, John W A; Friedrich, Alexander W

    2016-02-11

    The study describes the transmission of a CTX-M-15-producing ST15 Klebsiella pneumoniae between patients treated in a single center and the subsequent inter-institutional spread by patient referral occurring between May 2012 and September 2013. A suspected epidemiological link between clinical K. pneumoniae isolates was supported by patient contact tracing and genomic phylogenetic analysis from May to November 2012. By May 2013, a patient treated in three institutions in two cities was involved in an expanding cluster caused by this high-risk clone (HiRiC) (locally expanding, CTX-M-15 producing, and containing hypervirulence factors). A clone-specific multiplex PCR was developed for patient screening, by which another patient was identified in September 2013. Genomic phylogenetic analysis including published ST15 genomes revealed a close homology with isolates previously found in the USA. Environmental contamination and lack of consistent patient screening were identified as being responsible for the clone dissemination. The investigation demonstrates the advantages of whole-genome sequencing in the early detection of HiRiCs with a high propensity for nosocomial transmission and prolonged circulation in the regional patient population. Our study suggests the necessity of inter-institutional/regional collaboration for infection/outbreak management of K. pneumoniae HiRiCs.
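The genomic support for a transmission link in studies like this rests, at its core, on small pairwise sequence distances between isolates. The following is a minimal sketch under stated assumptions, not the study's actual pipeline: the aligned fragments and the SNP threshold are invented for illustration (real analyses use whole core genomes and clone-specific cutoffs):

```python
def snp_distance(seq1, seq2):
    """Count single-nucleotide differences between two aligned sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a != b for a, b in zip(seq1, seq2))

# Toy aligned core-genome fragments (hypothetical); isolates from the same
# transmission chain typically differ by only a handful of SNPs.
patient1 = "ATGGCTAAGTCC"
patient2 = "ATGGCTAAGTCA"   # one SNP away from patient1
unrelated = "ATCACTGAGAGA"  # many SNPs away

THRESHOLD = 3  # illustrative cutoff only; real cutoffs are study-specific
linked = snp_distance(patient1, patient2) <= THRESHOLD
```

A distance at or below the threshold (as for patient1 and patient2 here) is what would flag a putative epidemiological link for follow-up by contact tracing.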

  2. Recommendations for a service framework to access astronomical archives

    NASA Technical Reports Server (NTRS)

    Travisano, J. J.; Pollizzi, J.

    1992-01-01

    There are a large number of astronomical archives and catalogs on-line for network access, with many different user interfaces and features. Some systems are moving towards distributed access, supplying users with client software for their home sites which connects to servers at the archive site. Many of the issues involved in defining a standard framework of services that archive/catalog suppliers can use to achieve a basic level of interoperability are described. Such a framework would simplify the development of client and server programs to access the wide variety of astronomical archive systems. The primary services that are supplied by current systems include: catalog browsing, dataset retrieval, name resolution, and data analysis. The following issues (and probably more) need to be considered in establishing a standard set of client/server interfaces and protocols: Archive Access - dataset retrieval, delivery, file formats, data browsing, analysis, etc.; Catalog Access - database management systems, query languages, data formats, synchronous/asynchronous mode of operation, etc.; Interoperability - transaction/message protocols, distributed processing mechanisms (DCE, ONC/SunRPC, etc), networking protocols, etc.; Security - user registration, authorization/authentication mechanisms, etc.; Service Directory - service registration, lookup, port/task mapping, parameters, etc.; Software - public vs proprietary, client/server software, standard interfaces to client/server functions, software distribution, operating system portability, data portability, etc. Several archive/catalog groups, notably the Astrophysics Data System (ADS), are already working in many of these areas. In the process of developing StarView, which is the user interface to the Space Telescope Data Archive and Distribution Service (ST-DADS), these issues and the work of others were analyzed. A framework of standard interfaces for accessing services on any archive system which would benefit

  3. Fermilab History and Archives Project | Announcement of Renaming NAL

    Science.gov Websites

    NAL TO BECOME ENRICO FERMI LABORATORY IN 1972: Dr. Glenn T. Seaborg, Chairman of the Atomic Energy Commission, announced

  4. The challenge of archiving and preserving remotely sensed data

    USGS Publications Warehouse

    Faundeen, John L.

    2003-01-01

    Few would question the need to archive the scientific and technical (S&T) data generated by researchers. At a minimum, the data are needed for change analysis. Likewise, most people would value efforts to ensure the preservation of the archived S&T data. Future generations will use analysis techniques not even considered today. Until recently, archiving and preserving these data were usually accomplished within existing infrastructures and budgets. As the volume of archived data increases, however, organizations charged with archiving S&T data will be increasingly challenged (U.S. General Accounting Office, 2002). The U.S. Geological Survey has had experience in this area and has developed strategies to deal with the mountain of land remote sensing data currently being managed and the tidal wave of expected new data. The Agency has dealt with archiving issues, such as selection criteria, purging, advisory panels, and data access, and has met with preservation challenges involving photographic and digital media. That experience has allowed the USGS to develop management approaches, which this paper outlines.

  5. Simultaneous activation of parallel sensory pathways promotes a grooming sequence in Drosophila

    PubMed Central

    Hampel, Stefanie; McKellar, Claire E

    2017-01-01

    A central model that describes how behavioral sequences are produced features a neural architecture that readies different movements simultaneously, and a mechanism where prioritized suppression between the movements determines their sequential performance. We previously described a model whereby suppression drives a Drosophila grooming sequence that is induced by simultaneous activation of different sensory pathways that each elicit a distinct movement (Seeds et al., 2014). Here, we confirm this model using transgenic expression to identify and optogenetically activate sensory neurons that elicit specific grooming movements. Simultaneous activation of different sensory pathways elicits a grooming sequence that resembles the naturally induced sequence. Moreover, the sequence proceeds after the sensory excitation is terminated, indicating that a persistent trace of this excitation induces the next grooming movement once the previous one is performed. This reveals a mechanism whereby parallel sensory inputs can be integrated and stored to elicit a delayed and sequential grooming response. PMID:28887878
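The prioritized-suppression model described in this abstract can be sketched in a few lines: all sensory traces are activated simultaneously, and at each step the highest-priority movement suppresses the rest until it has been performed. This is an illustrative toy, not the authors' computational model; the movement names and the priority order are assumptions for the example:

```python
def grooming_sequence(activations, priority):
    """Sketch of a prioritized-suppression sequence model.

    All activated movements compete; at each step the highest-priority
    movement with a remaining (persistent) sensory trace is performed,
    and its trace is then consumed, letting the next movement in the
    hierarchy proceed.
    """
    active = dict(activations)  # persistent sensory traces
    sequence = []
    while any(v > 0 for v in active.values()):
        # the highest-priority active movement suppresses all others
        winner = min((m for m, v in active.items() if v > 0), key=priority.index)
        sequence.append(winner)
        active[winner] = 0  # trace consumed once the movement is performed
    return sequence

# Assumed anterior-to-posterior priority order for fly grooming movements
priority = ["eye", "antenna", "abdomen", "wing"]
# Simultaneous activation of all sensory pathways (e.g., dust over the whole body)
seq = grooming_sequence({"wing": 1, "eye": 1, "abdomen": 1, "antenna": 1}, priority)
```

Even though every pathway is excited at once, the output is strictly sequential, which is the key behavioral signature the experiments test.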

  6. The Operation and Architecture of the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Gelino, C. R.; Laity, A.; Kong, M.; Swain, M.; Holt, J.; Goodrich, R.; Mader, J.; Tran, H. D.

    2014-05-01

    The Infrared Processing and Analysis Center (IPAC) and the W. M. Keck Observatory (WMKO) are collaborating to build an archive for the twin 10-m Keck Telescopes, located near the summit of Mauna Kea. The Keck Observatory Archive (KOA) takes advantage of IPAC's long experience with managing and archiving large and complex data sets from active missions and serving them to the community, and of the Observatory's knowledge of the operation of its sophisticated instrumentation and the organization of the data products. By the end of 2013, KOA will contain data from all eight active observatory instruments, with an anticipated volume of 28 TB. The data include raw science observations, quick-look products, weather information, and, for some instruments, reduced and calibrated products. The goal of including data from all instruments has driven a rapid expansion of the archive's holdings: data from four new instruments have already been added since October 2012, and one more active instrument, the integral field spectrograph OSIRIS, is scheduled for ingestion in December 2013. After preparation for ingestion into the archive, the data are transmitted electronically from WMKO to IPAC for curation in the physical archive. This process includes validation of the science content of the data and verification that the data were not corrupted in transmission. The archived data include both newly acquired observations and all previously acquired observations. The older data extend back to the date of instrument commissioning; for some instruments, such as HIRES, as far back as 1994. KOA will continue to ingest all newly obtained observations, at an anticipated volume of 4 TB per year, and plans to ingest data from two decommissioned instruments. Access to these data is governed by a data use policy that guarantees Principal Investigators (PIs) exclusive access to their data for at least 18 months, and allows for extensions as granted by

  7. Continuous, Large-Scale Processing of Seismic Archives for High-Resolution Monitoring of Seismic Activity and Seismogenic Properties

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2012-12-01

    the computational framework for double-difference processing the combined parametric and waveform archives of the ISC, NEIC, and IRIS with over three million recorded earthquakes worldwide. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of such archives may thus become a routine procedure to continuously improve resolution in existing global earthquake catalogs. Results from subduction zones and aftershock sequences of recent great earthquakes demonstrate the considerable social and economic impact that high-resolution images of active faults, when available in real-time, will have in the prompt evaluation and mitigation of seismic hazards. These results also highlight the need for consistent long-term seismic monitoring and archiving of records.

  8. Identifying Learning Behaviors by Contextualizing Differential Sequence Mining with Action Features and Performance Evolution

    ERIC Educational Resources Information Center

    Kinnebrew, John S.; Biswas, Gautam

    2012-01-01

    Our learning-by-teaching environment, Betty's Brain, captures a wealth of data on students' learning interactions as they teach a virtual agent. This paper extends an exploratory data mining methodology for assessing and comparing students' learning behaviors from these interaction traces. The core algorithm employs sequence mining techniques to…

  9. Interoperability at ESA Heliophysics Science Archives: IVOA, HAPI and other implementations

    NASA Astrophysics Data System (ADS)

    Martinez-Garcia, B.; Cook, J. P.; Perez, H.; Fernandez, M.; De Teodoro, P.; Osuna, P.; Arnaud, M.; Arviset, C.

    2017-12-01

    The data of ESA heliophysics science missions are preserved at the ESAC Science Data Centre (ESDC). The ESDC aims for the long-term preservation of those data, which include missions such as Ulysses, SOHO, Proba-2, Cluster, Double Star and, in the future, Solar Orbiter. Scientists have access to these data through web services, command-line and graphical user interfaces for each of the corresponding science mission archives. The International Virtual Observatory Alliance (IVOA) provides technical standards that allow interoperability among the different systems that implement them. By adopting some IVOA standards, the ESA heliophysics archives are able to share their data with those tools and services that are VO-compatible. Implementations of those standards can be found in the existing archives: the Ulysses Final Archive (UFA) and the SOHO Science Archive (SSA) already make use of the VOTable format definition and the Simple Application Messaging Protocol (SAMP). For re-engineered or new archives, the implementation of services through the Table Access Protocol (TAP) or the Universal Worker Service (UWS) will leverage this interoperability; this will be the case for the Proba-2 Science Archive (P2SA) and the Solar Orbiter Archive (SOAR). We present here the IVOA standards already adopted by the ESA heliophysics archives and the work on-going.

  10. [Psychiatric Archives: Archive for which History? The Writings and Drawings of René L.

    PubMed

    Artières, Philippe

    The archives of psychiatric institutions are often mobilized to investigate the history of the treatment of mental disorders and its modalities in our societies. Based on the record of a patient interned in the Hôpital du Bon Sauveur, in the department of Manche in France, in 1963, this article shows how these archives take part in writing a different story: that of decolonization, and particularly of the independence of Algeria and the end of French colonization. In particular, it studies the drawings made by this patient and the way they reflect this collective history.

  11. Computer ray tracing speeds.

    PubMed

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to supercomputers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark, which allows ray trace speed to be estimated from LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast as or faster than mainframe computers in compute-bound situations.
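The reported correlation implies a simple predictive model: fit ray-trace speed against LINPACK performance, then interpolate for an unmeasured machine. The following least-squares sketch uses invented benchmark pairs (the paper's actual measurements are not reproduced here):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b, with no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical benchmark pairs: (LINPACK MFLOPS, rays traced per second)
linpack = [1.0, 5.0, 10.0, 25.0]
rays_per_s = [200.0, 1000.0, 2000.0, 5000.0]

a, b = fit_line(linpack, rays_per_s)
estimate = a * 15.0 + b  # predicted ray-trace speed for a 15-MFLOPS machine
```

With the toy data above the relationship is exactly linear, so the fit recovers the slope and the prediction falls on the line; real benchmark data would of course scatter around it.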

  12. Planetary Data Archiving Plan at JAXA

    NASA Astrophysics Data System (ADS)

    Shinohara, Iku; Kasaba, Yasumasa; Yamamoto, Yukio; Abe, Masanao; Okada, Tatsuaki; Imamura, Takeshi; Sobue, Shinichi; Takashima, Takeshi; Terazono, Jun-Ya

    After the successful rendezvous of Hayabusa with the asteroid Itokawa, and the successful launch of Kaguya to the Moon, the Japanese planetary science community has obtained its own full-scale datasets. At the moment, however, these datasets are only available from the data sites managed by each mission team. The databases are individually constructed in different formats, and the user interfaces of these data sites are not compatible with foreign databases. To improve the usability of the planetary archives at JAXA and to enable smooth international data exchange, we are investigating a new planetary database. Within the coming decade, Japan will have fruitful datasets in the planetary science field: Venus (Planet-C), Mercury (BepiColombo), and several missions in the planning phase (small bodies). In order to strongly assist international scientific collaboration using these mission archive data, the planned planetary data archive at JAXA should be managed in a unified manner, and the database should be constructed in the international planetary database standard style. In this presentation, we show the current status and future plans of planetary data archiving at JAXA.

  13. LBT Distributed Archive: Status and Features

    NASA Astrophysics Data System (ADS)

    Knapic, C.; Smareglia, R.; Thompson, D.; Grede, G.

    2011-07-01

    After the first release of the LBT Distributed Archive, this successful collaboration is continuing within the LBT Corporation. The IA2 (Italian Center for Astronomical Archive) team has updated the LBT DA with new features in order to facilitate user data retrieval while abiding by VO standards. To facilitate the integration of data from any new instruments, we have migrated to a new database, developed new data distribution software, and enhanced features in the LBT User Interface. The DBMS engine has been changed to MySQL. Consequently, the data handling software now uses Java thread technology to update and synchronize the main storage archives on Mt. Graham and in Tucson, as well as archives in Trieste and Heidelberg, with all metadata and proprietary data. The LBT UI has been updated with additional features allowing users to search by instrument and by some of the more important characteristics of the images. Finally, instead of a simple cone search service over all LBT image data, new instrument-specific SIAP and cone search services have been developed. They will be published in the IVOA framework later this fall.

  14. EBI metagenomics--a new resource for the analysis and archiving of metagenomic data.

    PubMed

    Hunter, Sarah; Corbett, Matthew; Denise, Hubert; Fraser, Matthew; Gonzalez-Beltran, Alejandra; Hunter, Christopher; Jones, Philip; Leinonen, Rasko; McAnulla, Craig; Maguire, Eamonn; Maslen, John; Mitchell, Alex; Nuka, Gift; Oisel, Arnaud; Pesseat, Sebastien; Radhakrishnan, Rajesh; Rocca-Serra, Philippe; Scheremetjew, Maxim; Sterk, Peter; Vaughan, Daniel; Cochrane, Guy; Field, Dawn; Sansone, Susanna-Assunta

    2014-01-01

    Metagenomics is a relatively recently established but rapidly expanding field that uses high-throughput next-generation sequencing technologies to characterize the microbial communities inhabiting different ecosystems (including oceans, lakes, soil, tundra, plants and body sites). Metagenomics brings with it a number of challenges, including the management, analysis, storage and sharing of data. In response to these challenges, we have developed a new metagenomics resource (http://www.ebi.ac.uk/metagenomics/) that allows users to easily submit raw nucleotide reads for functional and taxonomic analysis by a state-of-the-art pipeline, and have them automatically stored (together with descriptive, standards-compliant metadata) in the European Nucleotide Archive.

  15. EBI metagenomics—a new resource for the analysis and archiving of metagenomic data

    PubMed Central

    Hunter, Sarah; Corbett, Matthew; Denise, Hubert; Fraser, Matthew; Gonzalez-Beltran, Alejandra; Hunter, Christopher; Jones, Philip; Leinonen, Rasko; McAnulla, Craig; Maguire, Eamonn; Maslen, John; Mitchell, Alex; Nuka, Gift; Oisel, Arnaud; Pesseat, Sebastien; Radhakrishnan, Rajesh; Rocca-Serra, Philippe; Scheremetjew, Maxim; Sterk, Peter; Vaughan, Daniel; Cochrane, Guy; Field, Dawn; Sansone, Susanna-Assunta

    2014-01-01

    Metagenomics is a relatively recently established but rapidly expanding field that uses high-throughput next-generation sequencing technologies to characterize the microbial communities inhabiting different ecosystems (including oceans, lakes, soil, tundra, plants and body sites). Metagenomics brings with it a number of challenges, including the management, analysis, storage and sharing of data. In response to these challenges, we have developed a new metagenomics resource (http://www.ebi.ac.uk/metagenomics/) that allows users to easily submit raw nucleotide reads for functional and taxonomic analysis by a state-of-the-art pipeline, and have them automatically stored (together with descriptive, standards-compliant metadata) in the European Nucleotide Archive. PMID:24165880

  16. Archiving of HEAO-1 data products and the creation of a general user's guide to the archive

    NASA Technical Reports Server (NTRS)

    Nousek, John A.

    1993-01-01

    The activities at Penn State University are described. Initiated at Penn State in Jan. 1989, the goal of this program was to preserve the results of the HEAO-1 mission by transforming the obsolete and disorganized data products into modern and documented forms. The result of this effort was an archive of top level data products, totalling 70 Mbytes; a general User's Guide to the archive, which is attached; and a hardcopy archive containing standardized plots and output of fits made to all the pointing data taken by the HEAO-1 A-2 LED experiment. A more detailed description of these activities is found in the following sections. Accompanying this document is a copy of the User's Guide which may provide additional detail.

  17. Molecular Tracing of Hepatitis C Virus Genotype 1 Isolates in Iran: A NS5B Phylogenetic Analysis with Systematic Review.

    PubMed

    Hesamizadeh, Khashayar; Alavian, Seyed Moayed; Najafi Tireh Shabankareh, Azar; Sharafi, Heidar

    2016-12-01

Hepatitis C virus (HCV) is characterized by a high degree of genetic heterogeneity and is classified into 7 genotypes and numerous subtypes. It is heterogeneously distributed across risk groups and geographical regions. A well-established phylogenetic relationship can simplify the tracing of HCV strata across geographical regions. The current study aimed to determine the genetic phylogeny of subtypes 1a and 1b of HCV isolates, based on NS5B nucleotide sequences, in Iran and other members of the Eastern Mediterranean Regional Office of the World Health Organization, as well as other Middle Eastern countries, with a systematic review of available published and unpublished studies. The phylogenetic analyses were performed based on the nucleotide sequences of the NS5B gene of HCV genotype 1 (HCV-1) registered in the GenBank database. The literature review was performed in two steps: 1) searching studies evaluating the NS5B sequences of HCV-1 in PubMed, Scopus, and Web of Science, and 2) searching sequences of unpublished studies registered in the GenBank database. In this study, 442 sequences from HCV-1a and 232 from HCV-1b underwent phylogenetic analysis. Phylogenetic analysis of all sequences revealed different clusters in the phylogenetic trees. The results showed that the HCV-1a and -1b isolates from Iranian patients probably originated from domestic sources. Moreover, the HCV-1b isolates from Iranian patients may share similarities with European ones. In this study, phylogenetic reconstruction of HCV-1 sequences allowed molecular tracing and inference of ancestral relationships of the HCV genotypes in Iran, and showed a likely domestic origin for HCV-1a and various origins for HCV-1b.

  18. 76 FR 15349 - Advisory Committee on the Electronic Records Archives (ACERA); Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-21

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA); Meeting AGENCY: National Archives and Records Administration. ACTION: Notice of Meeting. SUMMARY... Archives and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...

  19. Anisotropic ray trace

    NASA Astrophysics Data System (ADS)

    Lam, Wai Sze Tiffany

Optical components made of anisotropic materials, such as crystal polarizers and crystal waveplates, are widely used in many complex optical systems, including display systems, microlithography and biomedical imaging, and induce more complex aberrations than optical components made of isotropic materials. The goal of this dissertation is to accurately simulate the performance of optical systems with anisotropic materials using polarization ray tracing. This work extends the polarization ray tracing calculus to incorporate ray tracing through anisotropic materials, including uniaxial, biaxial and optically active materials. The 3D polarization ray tracing calculus is an invaluable tool for analyzing polarization properties of an optical system. The 3x3 polarization ray tracing P matrix developed for the anisotropic ray trace assists in tracking the 3D polarization transformations along a ray path through a series of surfaces in an optical system. To better represent anisotropic light-matter interactions, the definition of the P matrix is generalized to incorporate not only the polarization change at a refraction/reflection interface, but also the optical phase accumulation induced as light propagates through the anisotropic medium. This enables realistic modeling of crystalline polarization elements, such as crystal waveplates and crystal polarizers. The wavefront and polarization aberrations of these anisotropic components are more complex than those of isotropic optical components and can be evaluated from the resultant P matrix for each eigen-wavefront as well as for the overall image. One incident ray refracting or reflecting into an anisotropic medium produces two eigenpolarizations, or eigenmodes, propagating in different directions. The associated ray parameters of these modes necessary for the anisotropic ray trace are described in Chapter 2. The algorithms to calculate the P matrix from these ray parameters are described in Chapter 3 for

  20. Frequency of the first feature in action sequences influences feature binding.

    PubMed

    Mattson, Paul S; Fournier, Lisa R; Behmer, Lawrence P

    2012-10-01

    We investigated whether binding among perception and action feature codes is a preliminary step toward creating a more durable memory trace of an action event. If so, increasing the frequency of a particular event (e.g., a stimulus requiring a movement with the left or right hand in an up or down direction) should increase the strength and speed of feature binding for this event. The results from two experiments, using a partial-repetition paradigm, confirmed that feature binding increased in strength and/or occurred earlier for a high-frequency (e.g., left hand moving up) than for a low-frequency (e.g., right hand moving down) event. Moreover, increasing the frequency of the first-specified feature in the action sequence alone (e.g., "left" hand) increased the strength and/or speed of action feature binding (e.g., between the "left" hand and movement in an "up" or "down" direction). The latter finding suggests an update to the theory of event coding, as not all features in the action sequence equally determine binding strength. We conclude that action planning involves serial binding of features in the order of action feature execution (i.e., associations among features are not bidirectional but are directional), which can lead to a more durable memory trace. This is consistent with physiological evidence suggesting that serial order is preserved in an action plan executed from memory and that the first feature in the action sequence may be critical in preserving this serial order.

  1. PACS archive upgrade and data migration: clinical experiences

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Documet, Luis; Sarti, Dennis A.; Huang, H. K.; Donnelly, John

    2002-05-01

Saint John's Health Center PACS data volumes have increased dramatically since the hospital became filmless in April of 1999. This is due in part to continuous image accumulation and the integration of a new multi-slice detector CT scanner into PACS. The original PACS archive would not be able to handle the distribution and archiving load and capacity in the near future. Furthermore, there is no secondary copy backup of all the archived PACS image data for disaster recovery purposes. The purpose of this paper is to present a clinical and technical process template to upgrade and expand the PACS archive, migrate existing PACS image data to the new archive, and provide a backup and disaster recovery function not currently available. Discussion of the technical and clinical pitfalls and challenges involved in this process will be presented as well. The server hardware configuration was upgraded and a secondary backup implemented for disaster recovery. The upgrade includes new software versions, database reconfiguration, and installation of a new tape jukebox to replace the current MOD jukebox. Upon completion, all PACS image data from the original MOD jukebox was migrated to the new tape jukebox and verified. The migration was performed during clinical operation continuously in the background. Once the data migration was completed, the MOD jukebox was removed. All newly acquired PACS exams are now archived to the new tape jukebox. All PACS image data residing on the original MOD jukebox have been successfully migrated into the new archive. In addition, a secondary backup of all PACS image data has been implemented for disaster recovery and has been verified using disaster scenario testing. No PACS image data was lost during the entire process and there was very little clinical impact during the entire upgrade and data migration. 
Some of the pitfalls and challenges during this upgrade process included hardware reconfiguration for the original archive server, clinical

  2. PRESENTATION TYPE: Round Table Discussion (80 minutes) TITLE: Unlocking the ‘Omics Archive: Enabling Toxicogenomic/Proteomic Investigation from Archival Samples

    EPA Science Inventory

    Formalin fixation and paraffin embedding (FFPE) is a cross-industry gold standard for preparing nonclinical and clinical samples for histopathological assessment which preserves tissue architecture and enables storage of tissue in archival banks. These archival banks are an untap...

  3. 36 CFR § 1253.1 - National Archives Building.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., DC 20408. Hours for the Research Center and the Central Research Room are posted at http://www.archives.gov. The exhibit areas' hours of operation are also posted at http://www.archives.gov. Last...

  4. On the Information Content of Program Traces

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

Program traces are used for analysis of program performance, memory utilization, and communications, as well as for program debugging. The trace contains records of execution events generated by monitoring units inserted into the program. The trace size limits the resolution of execution events and restricts the user's ability to analyze the program execution. We present a study of the information content of program traces and develop a coding scheme which reduces the trace size to the limit given by the trace entropy. We apply the coding to the traces of AIMS-instrumented programs executed on the IBM SP2 and the SGI Power Challenge and compare it with other coding methods. Our technique shows that the size of the trace can be reduced by more than a factor of 5.
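The entropy limit referred to in this abstract can be illustrated with a minimal sketch: the empirical Shannon entropy of a trace's event symbols lower-bounds the average record length (in bits) achievable by any lossless code under an i.i.d. symbol model. The event names and frequencies below are hypothetical, not taken from the paper:

```python
import math
from collections import Counter

def trace_entropy_bits(events):
    """Empirical Shannon entropy (bits per record) of a trace's event symbols.

    Under an i.i.d. model of the symbols, this is the lower bound on the
    average code length of any lossless encoding of the records.
    """
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A toy trace: skewed event frequencies compress below a fixed-width code.
trace = ["send"] * 50 + ["recv"] * 30 + ["barrier"] * 20
h = trace_entropy_bits(trace)   # about 1.49 bits/record
naive = math.log2(3)            # about 1.58 bits/record for a fixed-width code
```

A real trace coder would also exploit correlations between successive records (timestamps, repeated call sites), which can push the achievable size well below this symbol-level bound.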

  5. NASA's astrophysics archives at the National Space Science Data Center

    NASA Technical Reports Server (NTRS)

    Vansteenberg, M. E.

    1992-01-01

NASA maintains an archive facility for astronomical science data collected from NASA's missions at the National Space Science Data Center (NSSDC) at Goddard Space Flight Center. This archive was created to ensure that the science data collected by NASA would be preserved and usable in the future by the science community. Through 25 years of operation there are many lessons learned, from data collection procedures to archive preservation methods and distribution to the community. This document presents some of the more important lessons, for example: KISS (Keep It Simple, Stupid) in system development. Also addressed are some of the myths of archiving, such as 'scientists always know everything about everything', or 'it cannot possibly be that hard; after all, simple data techs do it'. There are indeed good reasons that a proper archive capability is needed by the astronomical community; the important question is how to use existing expertise as well as new innovative ideas to do the best job of archiving this valuable science data.

  6. Remediation of the protein data bank archive.

    PubMed

    Henrick, Kim; Feng, Zukang; Bluhm, Wolfgang F; Dimitropoulos, Dimitris; Doreleijers, Jurgen F; Dutta, Shuchismita; Flippen-Anderson, Judith L; Ionides, John; Kamada, Chisa; Krissinel, Eugene; Lawson, Catherine L; Markley, John L; Nakamura, Haruki; Newman, Richard; Shimizu, Yukiko; Swaminathan, Jawahar; Velankar, Sameer; Ory, Jeramia; Ulrich, Eldon L; Vranken, Wim; Westbrook, John; Yamashita, Reiko; Yang, Huanwang; Young, Jasmine; Yousufuddin, Muhammed; Berman, Helen M

    2008-01-01

The Worldwide Protein Data Bank (wwPDB; wwpdb.org) is the international collaboration that manages the deposition, processing and distribution of the PDB archive. The online PDB archive at ftp://ftp.wwpdb.org is the repository for the coordinates and related information for more than 47 000 structures, including proteins, nucleic acids and large macromolecular complexes that have been determined using X-ray crystallography, NMR and electron microscopy techniques. The members of the wwPDB, namely RCSB PDB (USA), MSD-EBI (Europe), PDBj (Japan) and BMRB (USA), have remediated this archive to address inconsistencies that have been introduced over the years. The scope and methods used in this project are presented.

  7. Serendipitous discovery of Wolbachia genomes in multiple Drosophila species.

    PubMed

    Salzberg, Steven L; Dunning Hotopp, Julie C; Delcher, Arthur L; Pop, Mihai; Smith, Douglas R; Eisen, Michael B; Nelson, William C

    2005-01-01

    The Trace Archive is a repository for the raw, unanalyzed data generated by large-scale genome sequencing projects. The existence of this data offers scientists the possibility of discovering additional genomic sequences beyond those originally sequenced. In particular, if the source DNA for a sequencing project came from a species that was colonized by another organism, then the project may yield substantial amounts of genomic DNA, including near-complete genomes, from the symbiotic or parasitic organism. By searching the publicly available repository of DNA sequencing trace data, we discovered three new species of the bacterial endosymbiont Wolbachia pipientis in three different species of fruit fly: Drosophila ananassae, D. simulans, and D. mojavensis. We extracted all sequences with partial matches to a previously sequenced Wolbachia strain and assembled those sequences using customized software. For one of the three new species, the data recovered were sufficient to produce an assembly that covers more than 95% of the genome; for a second species the data produce the equivalent of a 'light shotgun' sampling of the genome, covering an estimated 75-80% of the genome; and for the third species the data cover approximately 6-7% of the genome. The results of this study reveal an unexpected benefit of depositing raw data in a central genome sequence repository: new species can be discovered within this data. The differences between these three new Wolbachia genomes and the previously sequenced strain revealed numerous rearrangements and insertions within each lineage and hundreds of novel genes. The three new genomes, with annotation, have been deposited in GenBank.

  8. Enlivening Dance History Pedagogy through Archival Projects

    ERIC Educational Resources Information Center

    Randall, Tresa

    2012-01-01

    Dance archives can bring students into contact with historical subjects through artifacts of the past. This article advocates the use of archival projects in undergraduate dance history courses, arguing that such hands-on learning activities give students dynamic and interactive experiences of history. The author describes a creative project she…

  9. Tropical wetlands - problems and potentials as paleo-monsoon archives

    NASA Astrophysics Data System (ADS)

    Chabangborn, Akkaneewut; Chawchai, Sakonvan; Fritz, Sherilyn; Löwemark, Ludvig; Wohlfarth, Barbara

    2014-05-01

Paleoclimatic and paleoenvironmental information is still scarce for Southeast Asia despite the fact that this large region is home to numerous natural lakes and wetlands that may contain long sedimentary archives. During the past years we have been surveying lakes and wetlands in different parts of Thailand to select the most promising and longest sedimentary sequences for paleoenvironmental studies. Our survey of more than 30 lakes shows that only very few lakes and wetlands still contain soft sediments. The sediments in the majority of the lakes and wetlands have been dredged and excavated during the past 10 years to provide open and clear water for fishing and recreation. Dredging and excavation using large caterpillars has disturbed and in some cases completely destroyed the sedimentary records. Stiff clays now drape most of the lake bottoms. Based on our extensive survey, we found five sites from which we successfully obtained intact sediment sequences: Lakes Kumphawapi and Pa Kho in northeast Thailand, Nong Leng Sai in northern Thailand, and Sam Roi Yod and Nong Thale Pron in southern Thailand. All of these sites contain a detailed sedimentary record covering the past 2000 years; two of the sites cover parts of, or the entire, Holocene; and two sites have sediments covering the Last Termination and MIS 3, respectively.

  10. Latin American Archives.

    ERIC Educational Resources Information Center

    Belsunce, Cesar A. Garcia

    1983-01-01

    Examination of the situation of archives in four Latin American countries--Argentina, Brazil, Colombia, and Costa Rica--highlights national systems, buildings, staff, processing of documents, accessibility and services to the public and publications and extension services. (EJS)

  11. Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Sahraei, S.

    2016-12-01

    Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
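The two archiving strategies compared in this abstract can be contrasted in a minimal sketch. This is a simplified illustration with hypothetical minimization objectives: the epsilon archive below keeps the first solution seen in each grid cell and omits the cell-level dominance comparisons a full epsilon-dominance archive performs, while the alternative rounds objectives to the desired precision and then applies a plain nondominance filter:

```python
def eps_box(obj, eps):
    """Grid-cell index of a solution under epsilon precision (minimization)."""
    return tuple(int(f // e) for f, e in zip(obj, eps))

def epsilon_archive(solutions, eps):
    """Keep at most one solution per epsilon grid cell (first one seen wins)."""
    archive = {}
    for s in solutions:
        archive.setdefault(eps_box(s, eps), s)
    return list(archive.values())

def rounded_nondominated(solutions, eps):
    """Alternative: round each objective to the precision level first,
    then keep the nondominated set among the rounded vectors."""
    rounded = {tuple(round(f / e) * e for f, e in zip(s, eps)) for s in solutions}
    keep = []
    for s in rounded:
        # s survives unless some other rounded vector weakly dominates it
        if not any(all(o <= f for o, f in zip(other, s)) and other != s
                   for other in rounded):
            keep.append(s)
    return keep

solutions = [(1.2, 9.8), (1.4, 9.7), (5.5, 4.3)]
eps = (1.0, 1.0)
```

On this toy input, both approaches collapse the two nearly identical solutions into one archived representative; the rounding variant avoids the per-insertion grid bookkeeping, which is the efficiency argument explored in the abstract.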

  12. Context-sensitive trace inlining for Java.

    PubMed

    Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter

    2013-12-01

Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for [Formula: see text], by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.

  13. Archived Data User Service self evaluation report : FAST

    DOT National Transportation Integrated Search

    2000-11-01

The Archived Data User Service (ADUS) is a recent addition to the National Intelligent Transportation System (ITS) Architecture. This user service requires ITS systems to have the capability to receive, collect and archive ITS-generated operational...

  14. An Introduction to Archival Automation: A RAMP Study with Guidelines.

    ERIC Educational Resources Information Center

    Cook, Michael

    Developed under a contract with the International Council on Archives, these guidelines are designed to emphasize the role of automation techniques in archives and records services, provide an indication of existing computer systems used in different archives services and of specific computer applications at various stages of archives…

  15. 75 FR 12573 - Advisory Committee on the Electronic Records Archives (ACERA)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-16

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA) AGENCY: National Archives and Records Administration. ACTION: Notice of meeting. SUMMARY: In... and Records Administration (NARA) announces a [[Page 12574

  16. Better Living Through Metadata: Examining Archive Usage

    NASA Astrophysics Data System (ADS)

    Becker, G.; Winkelman, S.; Rots, A.

    2013-10-01

    The primary purpose of an observatory's archive is to provide access to the data through various interfaces. User interactions with the archive are recorded in server logs, which can be used to answer basic questions like: Who has downloaded dataset X? When did she do this? Which tools did she use? The answers to questions like these fill in patterns of data access (e.g., how many times dataset X has been downloaded in the past three years). Analysis of server logs provides metrics of archive usage and provides feedback on interface use which can be used to guide future interface development. The Chandra X-ray Observatory is fortunate in that a database to track data access and downloads has been continuously recording such transactions for years; however, it is overdue for an update. We will detail changes we hope to effect and the differences the changes may make to our usage metadata picture. We plan to gather more information about the geographic location of users without compromising privacy; create improved archive statistics; and track and assess the impact of web “crawlers” and other scripted access methods on the archive. With the improvements to our download tracking we hope to gain a better understanding of the dissemination of Chandra's data; how effectively it is being done; and perhaps discover ideas for new services.
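The kind of server-log analysis described here (who downloaded which dataset, when, and through which tool) reduces to tallying structured log records. The sketch below uses a hypothetical whitespace-delimited log format, not the Chandra archive's actual schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical download-log lines: timestamp, user id, dataset id, interface.
LOG = [
    "2013-02-01T10:00:00 u17 obsid-3192 webchaser",
    "2013-02-01T10:05:12 u17 obsid-3192 ftp",
    "2013-03-11T09:30:00 u42 obsid-0042 script",
]

def downloads_per_dataset(lines):
    """Tally downloads per dataset id (the third field of each record)."""
    return Counter(line.split()[2] for line in lines)

def first_download(lines, dataset):
    """Earliest download timestamp recorded for a dataset, or None."""
    times = [datetime.fromisoformat(line.split()[0])
             for line in lines if line.split()[2] == dataset]
    return min(times, default=None)
```

Aggregations like these answer the "how many times was dataset X downloaded" questions directly; the geographic and crawler analyses mentioned in the abstract would add fields (client address, user agent) to the same record structure.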

  17. Clinical experiences with an ASP model backup archive for PACS images

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Cao, Fei; Documet, Luis; Huang, H. K.; Muldoon, Jean

    2003-05-01

Last year we presented a fault-tolerant backup archive using an Application Service Provider (ASP) model for disaster recovery. The purpose of this paper is to update and provide clinical experiences related to implementing the ASP model archive solution for short-term backup of clinical PACS image data, as well as possible applications other than disaster recovery. The ASP backup archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost and with little human intervention. This solution can be used for a variety of scheduled and unscheduled downtimes that occur on the main PACS archive. A backup archive server with hierarchical storage was implemented offsite from the main PACS archive location. Clinical data from a hospital PACS is sent to this ASP storage server in parallel to the exams being archived in the main server. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection. In the future, other more cost-effective means of connectivity will be researched, such as Internet2. We have integrated the ASP model backup archive with a clinical PACS at Saint John's Health Center, and it has been operational for over 6 months. Pitfalls encountered during integration with a live clinical PACS and the impact to clinical workflow will be discussed. In addition, estimations of the cost of establishing such a solution as well as the cost charged to the users will be included. Clinical downtime scenarios, such as a scheduled mandatory downtime and an unscheduled downtime due to a disaster event to the main archive, were simulated, and the PACS exams were sent successfully from the offsite ASP storage server back to the hospital PACS in less than 1 day. The ASP backup archive was able to recover PACS image data for comparison studies with no complex operational procedures. Furthermore, no image data loss was

  18. A Structure Standard for Archival Context: EAC-CPF Is Here

    ERIC Educational Resources Information Center

    Dryden, Jean

    2010-01-01

    The archival community's new descriptive standard, "Encoded Archival Context" for Corporate Bodies, Persons, and Families (EAC-CPF), supports the sharing of descriptions of records creators and is a significant addition to the suite of standards for archival description. EAC-CPF is a data structure standard similar to its older sibling EAD…

  19. Improving Internet Archive Service through Proxy Cache.

    ERIC Educational Resources Information Center

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext transfer protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  20. The Hydrologic Cycle Distributed Active Archive Center

    NASA Technical Reports Server (NTRS)

    Hardin, Danny M.; Goodman, H. Michael

    1995-01-01

The Marshall Space Flight Center Distributed Active Archive Center in Huntsville, Alabama supports the acquisition, production, archival and dissemination of data relevant to the study of the global hydrologic cycle. This paper describes the Hydrologic Cycle DAAC, surveys its principal data holdings, addresses future growth, and gives information for accessing the data sets.

  1. Accuracy of maxillary positioning after standard and inverted orthognathic sequencing.

    PubMed

    Ritto, Fabio G; Ritto, Thiago G; Ribeiro, Danilo Passeado; Medeiros, Paulo José; de Moraes, Márcio

    2014-05-01

This study aimed to compare the accuracy of maxillary positioning after bimaxillary orthognathic surgery, using 2 sequences. A total of 80 cephalograms (40 preoperative and 40 postoperative) from 40 patients were analyzed. Group 1 included radiographs of patients submitted to the conventional sequence, whereas group 2 patients were submitted to the inverted sequence. The final position of the maxillary central incisor was obtained after vertical and horizontal measurements of the tracings, and it was compared with what had been planned. The null hypothesis, which stated that there would be no difference between the groups, was tested. After applying the Welch t test for comparison of mean differences between the maxillary desired and achieved positions, considering a statistical significance of 5% and a 2-tailed test, the null hypothesis was not rejected (P > .05). Thus, there was no difference in the accuracy of maxillary positioning between groups. Conventional and inverted sequencing proved to be reliable in positioning the maxilla after LeFort I osteotomy in bimaxillary orthognathic surgeries. Copyright © 2014 Elsevier Inc. All rights reserved.
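The Welch t test used in this comparison can be sketched with the standard-library `statistics` module. The statistic and Welch-Satterthwaite degrees of freedom are the standard formulas; the planned-vs-achieved discrepancy values below are hypothetical, not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with unequal variances."""
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)   # sample variances
    se2 = v1 / n1 + v2 / n2             # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical discrepancies (mm) between planned and achieved position.
group1 = [0.4, 0.6, 0.5, 0.7, 0.3, 0.5]
group2 = [0.5, 0.4, 0.6, 0.6, 0.4, 0.5]
t, df = welch_t(group1, group2)
# If |t| falls below the two-tailed 5% critical value for this df,
# the null hypothesis is not rejected, as in the study.
```

The two-tailed p-value would come from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats.t.sf`), which is outside the standard library and omitted here.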

  2. Massively parallel algorithms for trace-driven cache simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
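The computation being parallelized can be stated as a short sequential baseline: play the trace through one fully associative C-line set under LRU and count misses. This is the straightforward O(N) serial version, not the paper's O(log N) parallel algorithm:

```python
from collections import OrderedDict

def lru_misses(trace, cache_lines):
    """Count misses when a reference trace is played through a single
    fully associative set of `cache_lines` lines under LRU replacement."""
    cache = OrderedDict()              # insertion order tracks recency
    misses = 0
    for x in trace:
        if x in cache:
            cache.move_to_end(x)       # x becomes most recently used
        else:
            misses += 1
            if len(cache) == cache_lines:
                cache.popitem(last=False)  # evict least recently used
            cache[x] = None
    return misses

trace = ["a", "b", "c", "a", "d", "b", "a"]
```

On this toy trace, a 3-line set misses on the cold loads of a, b, c, then again on d (evicting b) and on b (evicting c), while a 4-line set suffers only the four cold misses. The paper's contribution is computing the same per-reference hit/miss outcomes in parallel rather than by this inherently sequential scan.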

  3. Manual tracing versus smartphone application (app) tracing: a comparative study.

    PubMed

    Sayar, Gülşilay; Kilinc, Delal Dara

    2017-11-01

This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone application cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were made based on 21 landmarks. The durations of the two methods were also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that for the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, cephalometric tracing proceeded faster with the app method than with the manual method.

  4. Adaptability in the Development of Data Archiving Services at Johns Hopkins University

    NASA Astrophysics Data System (ADS)

    Petters, J.; DiLauro, T.; Fearon, D.; Pralle, B.

    2015-12-01

    Johns Hopkins University (JHU) Data Management Services provides archiving services for institutional researchers through the JHU Data Archive, thereby increasing the access to and use of their research data. From its inception our unit's archiving service has evolved considerably. While some of these changes have been internally driven so that our unit can archive quality data collections more efficiently, we have also developed archiving policies and procedures on the fly in response to researcher needs. Providing our archiving services for JHU research groups from a variety of research disciplines has surfaced different sets of expectations and needs. We have used each interaction to help us refine our services and quickly satisfy the researchers we serve (following the first agile principle). Here we discuss the development of our newest archiving service model, its implementation over the past several months, and the processes by which we have continued to refine and improve our archiving services since its implementation. Through this discussion we will illustrate the benefits of planning, structure and flexibility in the development of archiving services that maximize the potential value of research data. We will describe interactions with research groups, including those from environmental engineering and international health, and how we were able to rapidly modify and develop our archiving services to meet their needs (e.g. in an 'agile' way). For example, our interactions with both of these research groups led first to discussion in regular standing meetings and eventually to the development of new archiving policies and procedures. These policies and procedures centered on limiting access to archived research data while associated manuscripts progress through peer-review and publication.

  5. The SpaceInn SISMA archive

    NASA Astrophysics Data System (ADS)

    Rainer, Monica; Poretti, Ennio; Mistò, Angelo; Rosa Panzera, Maria

    2017-10-01

    The Spectroscopic Indicators in a SeisMic Archive (SISMA) has been built in the framework of the FP7 SpaceInn project to contain the 7013 HARPS spectra observed during the CoRoT asteroseismic groundbased program, along with their variability and asteroseismic indicators. The spectra pertain to 261 stars spread around the whole Hertzsprung-Russell diagram: 72 of them were CoRoT targets while the others were observed in order to better characterize their variability classes. The Legacy Data lightcurves of the CoRoT targets are also stored in the archive.

  6. The archiving and dissemination of biological structure data.

    PubMed

    Berman, Helen M; Burley, Stephen K; Kleywegt, Gerard J; Markley, John L; Nakamura, Haruki; Velankar, Sameer

    2016-10-01

    The global Protein Data Bank (PDB) was the first open-access digital archive in biology. The history and evolution of the PDB are described, together with the ways in which molecular structural biology data and information are collected, curated, validated, archived, and disseminated by the members of the Worldwide Protein Data Bank organization (wwPDB; http://wwpdb.org). Particular emphasis is placed on the role of community in establishing the standards and policies by which the PDB archive is managed day-to-day. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Trace Replay and Network Simulation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acun, Bilge; Jain, Nikhil; Bhatele, Abhinav

    2015-03-23

    TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.

  8. Trace Replay and Network Simulation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Nikhil; Bhatele, Abhinav; Acun, Bilge

    TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.

  9. 76 FR 52991 - Renewal of Advisory Committee on Electronic Records Archives

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-24

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Renewal of Advisory Committee on Electronic Records... Records Administration's (NARA) Advisory Committee on Electronic Records Archives. In accordance with... Committee on Electronic Records Archives in NARA's ceiling of discretionary advisory committees. FOR FURTHER...

  10. Long-term archiving and data access: modelling and standardization

    NASA Technical Reports Server (NTRS)

    Hoc, Claude; Levoir, Thierry; Nonon-Latapie, Michel

    1996-01-01

    This paper reports on the multiple difficulties inherent in the long-term archiving of digital data, and in particular on the different possible causes of definitive data loss. It defines the basic principles which must be respected when creating long-term archives. Such principles concern both the archival systems and the data. The archival systems should have two primary qualities: independence of architecture with respect to technological evolution, and generic-ness, i.e., the capability of ensuring identical service for heterogeneous data. These characteristics are implicit in the Reference Model for Archival Services, currently being designed within an ISO-CCSDS framework. A system prototype has been developed at the French Space Agency (CNES) in conformance with these principles, and its main characteristics will be discussed in this paper. Moreover, the data archived should be capable of abstract representation regardless of the technology used, and should, to the extent that it is possible, be organized, structured and described with the help of existing standards. The immediate advantage of standardization is illustrated by several concrete examples. Both the positive facets and the limitations of this approach are analyzed. The advantages of developing an object-oriented data model within this context are then examined.

  11. Thin Lens Ray Tracing.

    ERIC Educational Resources Information Center

    Gatland, Ian R.

    2002-01-01

    Proposes a ray tracing approach to thin lens analysis based on a vector form of Snell's law for paraxial rays as an alternative to the usual approach in introductory physics courses. The ray tracing approach accommodates skew rays and thus provides a complete analysis. (Author/KHR)
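
    The paraxial treatment that the article generalizes can be sketched with the standard thin-lens ray-transfer (ABCD) relations; a minimal meridional example, assuming the usual sign conventions and not the article's vector form of Snell's law, is:

```python
def thin_lens(y, theta, f):
    """Thin-lens refraction in the paraxial limit: ray height is unchanged,
    slope is reduced by y/f (the ABCD matrix [[1, 0], [-1/f, 1]])."""
    return y, theta - y / f

def propagate(y, theta, d):
    """Free-space transfer over axial distance d: [[1, d], [0, 1]]."""
    return y + d * theta, theta

# A ray entering parallel to the axis at height 2 crosses the axis
# one focal length (f = 10) behind the lens.
y, th = thin_lens(2.0, 0.0, f=10.0)
y, th = propagate(y, th, d=10.0)
```

    Skew rays, which the article's vector-Snell approach accommodates, require tracking all three components of the ray direction rather than a single meridional slope.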

  12. Archive Inventory Management System (AIMS) — A Fast, Metrics Gathering Framework for Validating and Gaining Insight from Large File-Based Data Archives

    NASA Astrophysics Data System (ADS)

    Verma, R. V.

    2018-04-01

    The Archive Inventory Management System (AIMS) is a software package for understanding the distribution, characteristics, integrity, and nuances of files and directories in large file-based data archives on a continuous basis.

  13. News from the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Dobrzycki, A.; Arnaboldi, M.; Bierwirth, T.; Boelter, M.; Da Rocha, C.; Delmotte, N.; Forchì, V.; Fourniol, N.; klein Gebbinck, M.; Lange, U.; Mascetti, L.; Micol, A.; Moins, C.; Munte, C.; Pluciennik, C.; Retzlaff, J.; Romaniello, M.; Rosse, N.; Sequeiros, I. V.; Vuong, M.-H.; Zampieri, S.

    2015-09-01

    ESO Science Archive Facility (SAF) - one of the world's biggest astronomical archives - combines two roles: operational (ingest, tallying, safekeeping and distribution to observers of raw data taken with ESO telescopes and processed data generated both internally and externally) and scientific (publication and delivery of all flavours of data to external users). This paper presents the “State of the SAF.” SAF, as a living entity, is constantly implementing new services and upgrading the existing ones. We present recent and future developments related to the Archive's Request Handler and metadata handling as well as performance and usage statistics and trends. We also discuss the current and future datasets on offer at SAF.

  14. Operating a petabyte class archive at ESO

    NASA Astrophysics Data System (ADS)

    Suchar, Dieter; Lockhart, John S.; Burrows, Andrew

    2008-07-01

    The challenges of setting up and operating a Petabyte Class Archive will be described in terms of computer systems within a complex Data Centre environment. The computer systems, including the ESO Primary and Secondary Archive and the associated computational environments such as relational databases will be explained. This encompasses the entire system project cycle, including the technical specifications, procurement process, equipment installation and all further operational phases. The ESO Data Centre construction and the complexity of managing the environment will be presented. Many factors had to be considered during the construction phase, such as power consumption, targeted cooling and the accumulated load on the building structure to enable the smooth running of a Petabyte class Archive.

  15. Remediation of the protein data bank archive

    PubMed Central

    Henrick, Kim; Feng, Zukang; Bluhm, Wolfgang F.; Dimitropoulos, Dimitris; Doreleijers, Jurgen F.; Dutta, Shuchismita; Flippen-Anderson, Judith L.; Ionides, John; Kamada, Chisa; Krissinel, Eugene; Lawson, Catherine L.; Markley, John L.; Nakamura, Haruki; Newman, Richard; Shimizu, Yukiko; Swaminathan, Jawahar; Velankar, Sameer; Ory, Jeramia; Ulrich, Eldon L.; Vranken, Wim; Westbrook, John; Yamashita, Reiko; Yang, Huanwang; Young, Jasmine; Yousufuddin, Muhammed; Berman, Helen M.

    2008-01-01

    The Worldwide Protein Data Bank (wwPDB; wwpdb.org) is the international collaboration that manages the deposition, processing and distribution of the PDB archive. The online PDB archive at ftp://ftp.wwpdb.org is the repository for the coordinates and related information for more than 47 000 structures, including proteins, nucleic acids and large macromolecular complexes that have been determined using X-ray crystallography, NMR and electron microscopy techniques. The members of the wwPDB, namely RCSB PDB (USA), MSD-EBI (Europe), PDBj (Japan) and BMRB (USA), have remediated this archive to address inconsistencies that have been introduced over the years. The scope and methods used in this project are presented. PMID:18073189

  16. 36 CFR § 1253.2 - National Archives at College Park.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true National Archives at College Park. § 1253.2 Section § 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...

  17. Metadata and Buckets in the Smart Object, Dumb Archive (SODA) Model

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maly, Kurt; Croom, Delwin R., Jr.; Robbins, Steven W.

    2004-01-01

    We present the Smart Object, Dumb Archive (SODA) model for digital libraries (DLs), and discuss the role of metadata in SODA. The premise of the SODA model is to "push down" many of the functionalities generally associated with archives into the data objects themselves. Thus the data objects become "smarter", and the archives "dumber". In the SODA model, archives become primarily set managers, and the objects themselves negotiate and handle presentation, enforce terms and conditions, and perform data content management. Buckets are our implementation of smart objects, and da is our reference implementation for dumb archives. We also present our approach to metadata translation for buckets.
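
    The division of labour described above, where objects enforce their own terms while the archive is a plain set manager, can be caricatured in a few lines; the class and method names below are illustrative, not the bucket API from the paper:

```python
class Bucket:
    """A 'smart object': carries its own data, metadata and access policy."""
    def __init__(self, object_id, metadata, payload, public=True):
        self.object_id = object_id
        self.metadata = metadata      # e.g. Dublin Core-style fields
        self._payload = payload
        self._public = public         # the bucket enforces its own terms

    def display(self, authorized=False):
        """The object, not the archive, decides whether to serve its content."""
        if self._public or authorized:
            return self._payload
        raise PermissionError("bucket terms and conditions not met")

class DumbArchive:
    """The archive is reduced to a set manager over buckets."""
    def __init__(self):
        self._buckets = {}

    def add(self, bucket):
        self._buckets[bucket.object_id] = bucket

    def get(self, object_id):
        return self._buckets[object_id]
```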

  18. Supersize me: how whole-genome sequencing and big data are transforming epidemiology.

    PubMed

    Kao, Rowland R; Haydon, Daniel T; Lycett, Samantha J; Murcia, Pablo R

    2014-05-01

    In epidemiology, the identification of 'who infected whom' allows us to quantify key characteristics such as incubation periods, heterogeneity in transmission rates, duration of infectiousness, and the existence of high-risk groups. Although invaluable, the existence of many plausible infection pathways makes this difficult, and epidemiological contact tracing either uncertain, logistically prohibitive, or both. The recent advent of next-generation sequencing technology allows the identification of traceable differences in the pathogen genome that are transforming our ability to understand high-resolution disease transmission, sometimes even down to the host-to-host scale. We review recent examples of the use of pathogen whole-genome sequencing for the purpose of forensic tracing of transmission pathways, focusing on the particular problems where evolutionary dynamics must be supplemented by epidemiological information on the most likely timing of events as well as possible transmission pathways. We also discuss potential pitfalls in the over-interpretation of these data, and highlight the manner in which a confluence of this technology with sophisticated mathematical and statistical approaches has the potential to produce a paradigm shift in our understanding of infectious disease transmission and control. Copyright © 2014 Elsevier Ltd. All rights reserved.
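
    One elementary building block of genomic transmission tracing is the pairwise SNP distance between aligned consensus genomes, with small distances suggesting (but, as the review stresses, not proving) a transmission link. A toy sketch with hypothetical sequences and a deliberately naive threshold rule:

```python
def snp_distance(seq_a, seq_b):
    """Number of differing sites between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq_a, seq_b))

def candidate_links(genomes, threshold=2):
    """Host pairs whose consensus genomes differ by <= threshold SNPs."""
    names = list(genomes)
    return [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
            if snp_distance(genomes[x], genomes[y]) <= threshold]

# Hypothetical aligned consensus genomes from three sampled hosts
genomes = {"host1": "ACGTACGT", "host2": "ACGTACGA", "host3": "TTGTACGA"}
print(candidate_links(genomes, threshold=1))  # → [('host1', 'host2')]
```

    Real analyses supplement this with epidemiological timing data and within-host diversity, precisely because equal genomes do not identify the direction, or even the existence, of direct transmission.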

  19. The AMBRE project: Parameterisation of FGK-type stars from the ESO:HARPS archived spectra

    NASA Astrophysics Data System (ADS)

    De Pascale, M.; Worley, C. C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Bijaoui, A.

    2014-10-01

    Context. The AMBRE project is a collaboration between the European Southern Observatory (ESO) and the Observatoire de la Côte d'Azur (OCA). It has been established to determine the stellar atmospheric parameters of the archived spectra of four ESO spectrographs. Aims: The analysis of the ESO:HARPS archived spectra for the determination of their atmospheric parameters (effective temperature, surface gravity, global metallicity, and abundance of α-elements over iron) is presented. The sample being analysed (AMBRE:HARPS) covers the period from 2003 to 2010 and comprises 126 688 scientific spectra corresponding to ~17 218 different stars. Methods: For the analysis of the AMBRE:HARPS spectral sample, the automated pipeline developed for the analysis of the AMBRE:FEROS archived spectra has been adapted to the characteristics of the HARPS spectra. Within the pipeline, the stellar parameters are determined by the MATISSE algorithm, which has been developed at OCA for the analysis of large samples of stellar spectra in the framework of galactic archaeology. In the present application, MATISSE uses the AMBRE grid of synthetic spectra, which covers FGKM-type stars for a range of gravities and metallicities. Results: We first determined the radial velocity and its associated error for the ~15% of the AMBRE:HARPS spectra for which this velocity had not been derived by the ESO:HARPS reduction pipeline. The stellar atmospheric parameters and the associated chemical index [α/Fe], with their associated errors, were then estimated for all the spectra of the AMBRE:HARPS archived sample. Based on key quality criteria, we accepted and delivered to ESO the parameterisation of 93 116 spectra (74% of the total sample). These spectra correspond to ~10 706 stars, each observed between one and several hundred times. This automatic parameterisation of the AMBRE:HARPS spectra shows that the large majority of these stars are cool main-sequence dwarfs with metallicities

  20. 75 FR 63208 - Advisory Committee on the Electronic Records Archives (ACERA)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-14

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA) AGENCY: National Archives and Records Administration. ACTION: Notice of meeting. SUMMARY: In... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...

  1. 76 FR 65218 - Advisory Committee on the Electronic Records Archives (ACERA)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA) AGENCY: National Archives and Records Administration. ACTION: Notice of meeting. SUMMARY: In... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...

  2. Targeted or whole genome sequencing of formalin fixed tissue samples: potential applications in cancer genomics.

    PubMed

    Munchel, Sarah; Hoang, Yen; Zhao, Yue; Cottrell, Joseph; Klotzle, Brandy; Godwin, Andrew K; Koestler, Devin; Beyerlein, Peter; Fan, Jian-Bing; Bibikova, Marina; Chien, Jeremy

    2015-09-22

    Current genomic studies are limited by the poor availability of fresh-frozen tissue samples. Although formalin-fixed diagnostic samples are in abundance, they are seldom used in current genomic studies because of concern over formalin-fixation artifacts. Better characterization of these artifacts will allow the use of archived clinical specimens in translational and clinical research studies. To provide a systematic analysis of formalin-fixation artifacts on Illumina sequencing, we generated 26 DNA sequencing data sets from 13 pairs of matched formalin-fixed paraffin-embedded (FFPE) and fresh-frozen (FF) tissue samples. The results indicate a high rate of concordant calls between matched FF/FFPE pairs at reference and variant positions in three commonly used sequencing approaches (whole genome, whole exome, and targeted exon sequencing). Global mismatch rates and C·G > T·A substitutions were comparable between matched FF/FFPE samples, and discordant rates were low (<0.26%) in all samples. Finally, low-pass whole genome sequencing produces similar patterns of copy number alterations between FF/FFPE pairs. The results from our studies suggest the potential use of diagnostic FFPE samples for cancer genomic studies to characterize and catalog variations in cancer genomes.
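
    Concordance between matched FF/FFPE call sets, as reported above, reduces to comparing calls at the positions shared by both samples; a minimal sketch with hypothetical calls, not the study's actual pipeline, is:

```python
def concordance(calls_a, calls_b):
    """Fraction of positions called in both samples where the two calls
    agree. Each argument maps genomic position -> called base."""
    shared = set(calls_a) & set(calls_b)
    if not shared:
        return float("nan")           # no shared positions to compare
    return sum(calls_a[p] == calls_b[p] for p in shared) / len(shared)

# Hypothetical calls from a matched FF/FFPE pair
ff   = {100: "A", 200: "G", 300: "T", 400: "C"}
ffpe = {100: "A", 200: "G", 300: "C", 500: "T"}
rate = concordance(ff, ffpe)          # 2 of 3 shared positions agree
```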

  3. St. Petersburg Coastal and Marine Science Center's Core Archive Portal

    USGS Publications Warehouse

    Reich, Chris; Streubert, Matt; Dwyer, Brendan; Godbout, Meg; Muslic, Adis; Umberger, Dan

    2012-01-01

    This Web site contains information on rock cores archived at the U.S. Geological Survey (USGS) St. Petersburg Coastal and Marine Science Center (SPCMSC). Archived cores consist of 3- to 4-inch-diameter coral cores, 1- to 2-inch-diameter rock cores, and a few unlabeled loose coral and rock samples. This document - and specifically the archive Web site portal - is intended to be a 'living' document that will be updated continually as additional cores are collected and archived. This document may also contain future references and links to a catalog of sediment cores. Sediment cores will include vibracores, pushcores, and other loose sediment samples collected for research purposes. This document will: (1) serve as a database for locating core material currently archived at the USGS SPCMSC facility; (2) provide a protocol for entry of new core material into the archive system; and, (3) set the procedures necessary for checking out core material for scientific purposes. Core material may be loaned to other governmental agencies, academia, or non-governmental organizations at the discretion of the USGS SPCMSC curator.

  4. Influences of different dietary contents of macrominerals on the availability of trace elements in horses.

    PubMed

    Neustädter, L-T; Kamphues, J; Ratert, C

    2018-04-01

    In this study, the influences of a reduced macromineral intake on trace element metabolism in horses at maintenance were investigated. The background of this study is the revised recommendation on the macromineral supply for horses (GfE ). Balance studies on three adult pony geldings with body weights of 405 / 348 / 384 kg were performed to obtain data on apparent digestibility (aD), retention and serum concentrations of different trace elements (Cu, Zn, Se) at different dietary macromineral levels. A mineral supplement or a complementary feed with a reduced macromineral content was added to a hay-based diet (daily 5.5 kg hay per animal, split into three servings a day); in addition, distilled water was offered. The diets were offered one after the other such that all ponies had the same sequence of treatments. The native macromineral contents of the daily offered amount of hay already surpassed the new recommendations, whereas dietary trace elements needed to be supplemented. There were no statistically significant differences (p ≤ .05) concerning the aD of copper, zinc and selenium when comparing the diets with and without macromineral supplementation. Serum levels of these three trace elements were not affected by the different macromineral contents of the diets. The results of this study, based on a 22-day feeding period for each treatment, indicate that macromineral supplementation of a hay-based diet for adult horses at maintenance was not necessary. However, no negative effects of added macrominerals on trace element metabolism occurred in this study. © 2017 Blackwell Verlag GmbH.

  5. Deep learning and shapes similarity for joint segmentation and tracing single neurons in SEM images

    NASA Astrophysics Data System (ADS)

    Rao, Qiang; Xiao, Chi; Han, Hua; Chen, Xi; Shen, Lijun; Xie, Qiwei

    2017-02-01

    Extracting the structure of single neurons is critical for understanding how they function within neural circuits. Recent developments in microscopy techniques, and the widely recognized need for openness and standardization, provide a community resource for automated reconstruction of the dendritic and axonal morphology of single neurons. To look into the fine structure of neurons, we use Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) to obtain image sequences of serial sections of animal brain tissue densely packed with neurons. Unlike other neuron reconstruction methods, we propose a method that enhances the SEM images by detecting the neuronal membranes with a deep convolutional neural network (DCNN) and segments single neurons by active contour with group shape similarity. We couple segmentation and tracing, and they interact through alternating iterations: tracing aids the selection of candidate region patches for active-contour segmentation, while segmentation provides the neuron geometrical features that improve the robustness of tracing. The tracing model relies mainly on the neuron geometrical features and is updated after the neuron is segmented on each subsequent section. Our method enables the reconstruction of neurons of the Drosophila mushroom body, which is cut into serial sections and imaged under SEM. Our method provides an elementary step toward the whole reconstruction of neuronal networks.

  6. 78 FR 22345 - Advisory Committee on the Electronic Records Archives (ACERA)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-15

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic... United States, on technical, mission, and service issues related to the Electronic Records Archives (ERA...

  7. 77 FR 21812 - Advisory Committee on the Electronic Records Archives (ACERA).

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-11

    ... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic... United States, on technical, mission, and service issues related to the Electronic Records Archives (ERA...

  8. Commercial imagery archive product development

    NASA Astrophysics Data System (ADS)

    Sakkas, Alysa

    1999-12-01

    The Lockheed Martin (LM) team had garnered over a decade of operational experience in digital imagery management and analysis for the US Government at numerous worldwide sites. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze and disseminate imagery in 'push' or 'pull' modes, and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, Intelligent Library System, satisfies requirements for (a) a potentially unbounded data archive; (b) automated workflow management for increased user productivity; (c) automatic tracking and management of files stored on shelves; (d) the ability to ingest, process and disseminate data at bandwidths ranging up to multi-gigabit per second; (e) access through a thin client-to-server network environment; (f) multiple interactive users needing retrieval of files in seconds, from both archived images and real-time data; and (g) scalability that maintains information throughput performance as the size of the digital library grows.

  9. Proliferation of antibiotic resistance genes in microbial consortia of sequencing batch reactors (SBRs) upon exposure to trace erythromycin or erythromycin-H2O.

    PubMed

    Fan, Caian; He, Jianzhong

    2011-05-01

    A variety of antibiotics and their metabolites at sub-inhibitory concentrations are suspected to expand resistance genes in the environment. However, knowledge is limited on the causal correlation of trace antibiotics or their metabolites with resistance proliferation. In this study, erythromycin (ERY) resistance genes were screened in microbial consortia of sequencing batch reactors (SBRs) after one year of acclimation to ERY (100 μg/L) or dehydrated erythromycin (ERY-H2O, 50 μg/L). The identified esterase gene ereA explains why ERY could be degraded to six products by microbes acclimated to ERY (100 μg/L). However, ERY could not be degraded by microbes acclimated to ERY-H2O (50 μg/L), which may be due to the less proliferated ereA gene. Biodegradation of ERY required the presence of an exogenous carbon source (e.g., glucose) and nutrients (e.g., nitrogen, phosphorus) for assimilation, but overdosed ammonium-N (>40 mg/L) inhibited degradation of ERY. Zoogloea, a genus of biofilm-forming bacteria, became predominant in the ERY degradation consortia, suggesting that the input of ERY could induce biofilm resistance to antibiotics. Our study highlights that low μg/L levels of ERY or ERY-H2O in the environment encourage expansion of resistance genes in microbes. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. [Management and development of the dangerous preparation archive].

    PubMed

    Binetti, Roberto; Longo, Marcello; Scimonelli, Luigia; Costamagna, Francesca

    2006-01-01

    In the year 2000 an archive of dangerous preparations was created at the National Health Institute (Istituto Superiore di Sanità), following a principle included in Directive 88/379/EEC on dangerous preparations, subsequently modified by Directive 1999/45/EC, concerning the creation of a data bank on dangerous preparations in each European country. The information stored in the archive is useful for the protection of consumers' and workers' health and for prevention, particularly in cases of acute poisoning. The archive is fully computerised: companies can submit the information via the web, and the authorized Poison Centres can retrieve it from the archive via the web. In each Member State different procedures are in place to comply with Directive 1999/45/EC; therefore, international coordination could be useful in order to create a European network of national data banks on dangerous preparations.

  11. The digital archive of the International Halley Watch

    NASA Technical Reports Server (NTRS)

    Klinglesmith, D. A., III; Niedner, M. B.; Grayzeck, E.; Aronsson, M.; Newburn, R. L.; Warnock, A., III

    1992-01-01

    The International Halley Watch was established to coordinate, collect, archive, and distribute the scientific data from Comet P/Halley that would be obtained from both the ground and space. This paper describes one of the end products of that effort, namely the IHW Digital Archive. The IHW Digital Archive consists of 26 CD-ROMs containing over 32 gigabytes of data from the 9 IHW disciplines as well as data from the 5 spacecraft missions flown to comets P/Halley and P/Giacobini-Zinner. The total archive contains over 50,000 observations by 1,500 observers from at least 40 countries. The first 24 CDs, which are currently available, contain data from the 9 IHW disciplines. The two remaining CDs will have the spacecraft data and should be available within the next year. A test CD-ROM of these data has been created and is currently under review.

  12. Ultrasensitive, self-calibrated cavity ring-down spectrometer for quantitative trace gas analysis.

    PubMed

    Chen, Bing; Sun, Yu R; Zhou, Ze-Yi; Chen, Jian; Liu, An-Wen; Hu, Shui-Ming

    2014-11-10

    A cavity ring-down spectrometer is built for trace gas detection using telecom distributed feedback (DFB) diode lasers. The longitudinal modes of the ring-down cavity are used as frequency markers without actively locking either the laser or the high-finesse cavity. A control scheme is applied to scan the DFB laser frequency, matching the cavity modes one by one in sequence and resulting in a correct index at each recorded spectral data point, which allows us to calibrate the spectrum with a relative frequency precision of 0.06 MHz. Besides the frequency precision of the spectrometer, a sensitivity (noise-equivalent absorption) of 4×10⁻¹¹ cm⁻¹ Hz⁻¹/² has also been demonstrated. A minimum detectable absorption coefficient of 5×10⁻¹² cm⁻¹ has been obtained by averaging about 100 spectra recorded in 2 h. The quantitative accuracy is tested by measuring the CO2 concentrations in N2 samples prepared by the gravimetric method, and the relative deviation is less than 0.3%. The trace detection capability is demonstrated by detecting CO2 of ppbv-level concentrations in a high-purity nitrogen gas sample. Simple structure, high sensitivity, and good accuracy make the instrument very suitable for quantitative trace gas analysis.
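
    The absorption coefficient in cavity ring-down spectroscopy follows from the standard relation α = (1/c)(1/τ − 1/τ₀), where τ and τ₀ are the ring-down time constants with and without the absorber. A sketch on synthetic, noise-free exponential decays (illustrative numbers, not the paper's data):

```python
import math

C_CM_S = 2.9979e10  # speed of light in cm/s

def ring_down_time(times, signal):
    """Ring-down time constant tau from a log-linear least-squares fit
    of an exponential decay y = A * exp(-t / tau)."""
    ys = [math.log(v) for v in signal]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

def absorption_coeff(tau, tau_empty):
    """Sample absorption alpha = (1/c) * (1/tau - 1/tau_empty), in cm^-1."""
    return (1.0 / C_CM_S) * (1.0 / tau - 1.0 / tau_empty)

# Synthetic decays: empty-cavity tau0 = 50 us, with absorber tau = 40 us
ts = [i * 1e-6 for i in range(20)]
tau = ring_down_time(ts, [math.exp(-t / 40e-6) for t in ts])
alpha = absorption_coeff(tau, 50e-6)   # about 1.7e-7 cm^-1
```

    Real instruments fit thousands of noisy decays per second; the point here is only that α is independent of the laser intensity, which is what makes the technique self-calibrating.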

  13. The ESA Gaia Archive: Data Release 1

    NASA Astrophysics Data System (ADS)

    Salgado, J.; González-Núñez, J.; Gutiérrez-Sánchez, R.; Segovia, J. C.; Durán, J.; Hernández, J. L.; Arviset, C.

    2017-10-01

    The ESA Gaia mission is producing the most accurate source catalogue in astronomy to date. This represents a challenge in archiving to make the information and data accessible to astronomers in an efficient way, due to the size and complexity of the data. Also, new astronomical missions, taking larger and larger volumes of data, are reinforcing this change in the development of archives. Archives, as simple applications to access data, are evolving into complex data centre structures where computing power services are available for users and data mining tools are integrated into the server side. In the case of astronomy missions that involve the use of large catalogues, such as Gaia (or Euclid to come), the common ways to work on the data need to be changed to the following paradigm: "move the code close to the data". This implies that data mining functionalities are becoming a must to allow for the maximum scientific exploitation of the data. To enable these capabilities, a TAP+ interface, crossmatch capabilities, full catalogue histograms, serialisation of intermediate results in cloud resources, such as VOSpace etc., have been implemented for the Gaia Data Release 1 (DR1), to enable the exploitation of these science resources by the community without any bottlenecks in the connection bandwidth. We present the architecture, infrastructure and tools already available in the Gaia Archive DR1 (http://archives.esac.esa.int/gaia/) and we describe the capabilities and infrastructure.

  14. Stewardship of very large digital data archives

    NASA Technical Reports Server (NTRS)

    Savage, Patric

    1992-01-01

    This paper addresses the problems foreseen by the author in stewarding the very large digital data archives that will accumulate during the mission of the Earth Orbiting Satellite (EOS). It focuses on the function of 'shepherding' archived digital data into an endless future. Stewardship entails a great deal more than storing and protecting the archive. It also includes all aspects of providing meaningful service to the community of users (scientists) who will want to access the data. The complete steward will be required to do the following: (1) provide against loss due to physical phenomena; (2) assure that data is not 'lost' due to storage technology obsolescence; (3) maintain data in a current formatting methodology with the additional requirement of being able to reconstitute the data to its original, as-received format; (4) secure against loss or pollution of data due to accidental, misguided, or willful software intrusion; (5) prevent unauthorized electronic access to the data, including unauthorized placement of data into the archive; (6) index the data in a metadatabase so that all anticipatable queries can be served without searching through the data itself; (7) provide responsive access to the metadatabase; (8) provide appropriately responsive access to the data; (9) incorporate additions and changes to the archive (and to the metadatabase) in a timely way; and (10) deliver only copies of data to clients - retain physical custody of the 'official' data. Items 1 through 4 are discussed in this paper.

  15. Reverse ray tracing for transformation optics.

    PubMed

    Hu, Chia-Yu; Lin, Chun-Hung

    2015-06-29

    Ray tracing is an important technique for predicting optical system performance. In the field of transformation optics, the Hamiltonian equations of motion for ray tracing are well known. The numerical solutions to the Hamiltonian equations of motion are affected by the complexities of the inhomogeneous and anisotropic indices of the optical device. To the best of our knowledge, no previous work has addressed ray tracing for transformation optics with extreme inhomogeneity and anisotropy. In this study, we present the use of 3D reverse ray tracing in transformation optics. The reverse ray tracing is derived from Fermat's principle based on a sweeping method instead of finding the full solution to ordinary differential equations. The sweeping method is employed to obtain the eikonal function. The wave vectors are then obtained from the gradient of that eikonal function map in the transformed space to acquire the illuminance. Because only the rays at the points of interest have to be traced, reverse ray tracing provides an efficient approach to investigating the illuminance of a system. This approach is applicable to any form of transformation optics where the material property tensor is a symmetric positive definite matrix. The performance and analysis of three transformation-optics devices with inhomogeneous and anisotropic indices are explored. The ray trajectories and illuminances in these demonstration cases are successfully solved by the proposed reverse ray tracing method.
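    A sweeping method for the eikonal equation can be sketched for the simplest case, a homogeneous isotropic index on a uniform 2D grid. This generic fast-sweeping update is an illustration of the idea, not the authors' anisotropic scheme:

```python
# Sketch of fast sweeping for the eikonal equation |grad T| = n on a
# uniform 2D grid, for a homogeneous index n (the authors handle the far
# harder inhomogeneous, anisotropic case). Gauss-Seidel updates are swept
# in the four diagonal orderings until the travel-time map settles.
import math

def fast_sweep(N=41, src=(20, 20), n_index=1.0, n_iter=4):
    h = 1.0 / (N - 1)                 # grid spacing on the unit square
    INF = 1e9
    T = [[INF] * N for _ in range(N)]
    T[src[0]][src[1]] = 0.0
    f = n_index * h                   # local "cost" of one grid step
    for _ in range(n_iter):
        for si, sj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            irange = range(N) if si > 0 else range(N - 1, -1, -1)
            jrange = range(N) if sj > 0 else range(N - 1, -1, -1)
            for i in irange:
                for j in jrange:
                    if (i, j) == src:
                        continue
                    # Upwind neighbours in each axis direction.
                    a = min(T[i - 1][j] if i > 0 else INF,
                            T[i + 1][j] if i < N - 1 else INF)
                    b = min(T[i][j - 1] if j > 0 else INF,
                            T[i][j + 1] if j < N - 1 else INF)
                    if abs(a - b) >= f:       # one-sided update
                        t_new = min(a, b) + f
                    else:                     # two-sided (quadratic) update
                        t_new = 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))
                    if t_new < T[i][j]:
                        T[i][j] = t_new
    return T, h

T, h = fast_sweep()
```

    The resulting T approximates the travel time from the source; taking its gradient in the transformed space is what yields the wave vectors for the reverse trace.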

  16. Integration experiences and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially in metadata searching speed, through more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging free file movement tools in Linux such as copy, move, ls, tar, etc.
We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future
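    The core idea of moving one large striped file to many tapes in parallel can be illustrated with a toy round-robin striper; in-memory streams stand in for tape drives here, and this is an illustration of the concept, not the authors' implementation:

```python
# Toy sketch: stripe a single large file across several parallel "tape"
# streams in fixed-size chunks, then reassemble it. BytesIO objects stand
# in for the tape drives a real parallel archive would write concurrently.
import io

def stripe(data, n_streams, chunk=4):
    streams = [io.BytesIO() for _ in range(n_streams)]
    for i in range(0, len(data), chunk):
        # Chunk k goes to stream k mod n_streams (round robin).
        streams[(i // chunk) % n_streams].write(data[i:i + chunk])
    return streams

def reassemble(streams, total_len, chunk=4):
    for s in streams:
        s.seek(0)
    out = bytearray()
    i = 0
    while len(out) < total_len:
        out += streams[i % len(streams)].read(chunk)
        i += 1
    return bytes(out)

payload = bytes(range(26)) * 3          # 78-byte stand-in for a large file
tapes = stripe(payload, n_streams=3)
restored = reassemble(tapes, len(payload))
```

    In a real system each stream would be an independent tape mover, so aggregate bandwidth scales with the number of drives.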

  17. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially in metadata searching speed, through more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements

  18. Archiving of Wideband Plasma Wave Data

    NASA Technical Reports Server (NTRS)

    Kurth, William S.

    1997-01-01

    Beginning with the third year of funding, we began a more ambitious archiving production effort, minimizing work on new software and concentrating on building representative archives of the missions mentioned above, recognizing that only a small percentage of the data from any one mission can be archived with reasonable effort. We concentrated on data from Dynamics Explorer and ISEE 1, archiving orbits or significant fractions of orbits which attempt to capture the essence of the mission and provide data which will hopefully be sufficient for ongoing and new research, as well as to provide a reference for upcoming and current ISTP missions which will not fly in the same regions of space as the older missions and which will not have continuous wideband data. We archived approximately 181 gigabytes of data, accounting for some 1582 hours of data. Included in these data are all of the AMPTE chemical releases, all of the Spacelab 2/PDP data obtained during the free-flight portion of its mission, as well as significant portions of the S3, DE-1, Imp-6, Hawkeye, Injun 5, and ISEE 1 and 2 data sets. Table 1 summarizes these data. All of the data archived are summarized in gif-formatted images of frequency-time spectrograms which are directly accessible via the internet. Each of the gif files is identified by year, day, and time as described in the Web page. This provides a user with a specific date/time in mind a way of determining very quickly whether there are data for the interval in question and, by clicking on the file name, browsing the data. Alternately, a user can browse the data for interesting features and events simply by viewing each of the gif files. When a user finds data of interest, he/she can notify us by email of the time period involved. Based on the user's needs, we can provide data on a convenient medium or by ftp, or we can mount the appropriate data and provide access to our analysis tools via the network. We can even produce products such as plots or

  19. SUBSTANCE ABUSE AND MENTAL HEALTH DATA ARCHIVE (SAMHDA)

    EPA Science Inventory

    The Substance Abuse and Mental Health Data Archive (SAMHDA) is an initiative of the Office of Applied Studies, Substance Abuse and Mental Health Services Administration (SAMHSA) of the United States Department of Health and Human Services. The goal of the archive is to provide re...

  20. Heavy Metals and Related Trace Elements.

    ERIC Educational Resources Information Center

    Leland, Harry V.; And Others

    1978-01-01

    Presents a literature review of heavy metals and related trace elements in the environment, covering publications of 1976-77. This review includes: (1) trace treatment in natural water and in sediments; and (2) bioaccumulation and toxicity of trace elements. A list of 466 references is presented. (HM)

  1. The Emirates Space Data Center, a PDS4-Compliant Data Archive

    NASA Astrophysics Data System (ADS)

    DeWolfe, A. W.; Al Hammadi, O.; Amiri, S.

    2017-12-01

    As part of the UAE's Emirates Mars Mission (EMM), we are constructing a data archive to preserve and distribute science data from this and future missions. The archive will be publicly accessible and will provide access to Level 2 and 3 science data products from EMM, as well as ancillary data such as SPICE kernels and mission event timelines. As a member of the International Planetary Data Alliance (IPDA), the UAE has committed to making its archive PDS4-compatible, and maintaining the archive beyond the end of the mission. EMM is scheduled to begin collecting science data in spring 2021, and the archive is expected to begin releasing data in September 2021.

  2. The Gran Telescopio Canarias and Calar Alto Virtual Observatory Compliant Archives

    NASA Astrophysics Data System (ADS)

    Alacid, J. M.; Solano, E.; Jiménez-Esteban, F. M.; Velasco, A.

    2014-05-01

    The Gran Telescopio Canarias and Calar Alto archives are the result of the collaboration agreements between the Centro de Astrobiología and two entities: GRANTECAN S.A. and the Centro Astronómico Hispano Alemán (CAHA). The archives have been developed in the framework of the Spanish Virtual Observatory and are maintained by the Data Archive Unit at Centro de Astrobiología. The archives contain both raw and science ready data and have been designed in compliance with the standards defined by the International Virtual Observatory Alliance, which guarantees a high level of data accessibility and handling. In this paper we describe the main characteristics and functionalities of both archives.

  3. The Gran Telescopio Canarias and Calar Alto Virtual Observatory compliant archives

    NASA Astrophysics Data System (ADS)

    Solano, Enrique; Gutiérrez, Raúl; Alacid, José Manuel; Jiménez-Esteban, Francisco; Velasco Trasmonte, Almudena

    2012-09-01

    The Gran Telescopio Canarias (GTC) and Calar Alto archives are the result of the collaboration agreements between the Centro de Astrobiología (CAB, INTA-CSIC) and two entities: GRANTECAN S.A. and the Centro Astronómico Hispano Alemán (CAHA). The archives have been developed in the framework of the Spanish Virtual Observatory and are maintained by the Data Archive Unit at CAB. The archives contain both raw and science ready data and have been designed in compliance with the standards defined by the International Virtual Observatory Alliance (IVOA) which guarantees a high level of data accessibility and handling. In this paper we describe the main characteristics and functionalities of both archives.

  4. Enhancement of real-time EPICS IOC PV management for the data archiving system

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Ha

    2015-10-01

    For the operation of a 100-MeV linear proton accelerator, the major driving values and experimental data need to be archived. According to the experimental conditions, different data are required, so functions that can add new data and delete data in real time need to be implemented. In an experimental physics and industrial control system (EPICS) input output controller (IOC), the values of process variables (PVs) are matched with the driving values and data. The PV values are archived in text file format by using the channel archiver. There is no need to create a database (DB) server, just a large hard disk. Through the web, the archived data can be loaded, and new PV values can be archived without stopping the archive engine. The details of the implementation of a data archiving system with the channel archiver are presented, and some preliminary results are reported.
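    The add-and-delete-without-restart behaviour described above can be sketched with a toy archive engine. The PV names and the stand-in reader callables below are hypothetical; real EPICS channel access is not used:

```python
# Toy sketch of a channel-archiver-style engine: PV values are appended to
# a text stream, and PVs can be added or removed while the engine runs.
# `reader` callables stand in for real EPICS channel access (hypothetical).
import io
import time

class TextArchiver:
    def __init__(self, out):
        self.out = out          # any writable text stream (file, StringIO)
        self.pvs = {}           # PV name -> reader callable

    def add_pv(self, name, reader):
        self.pvs[name] = reader     # takes effect on the next scan

    def remove_pv(self, name):
        self.pvs.pop(name, None)    # no engine restart needed

    def scan_once(self, t=None):
        t = time.time() if t is None else t
        for name, reader in list(self.pvs.items()):
            self.out.write(f"{t:.3f} {name} {reader()}\n")

buf = io.StringIO()
arch = TextArchiver(buf)
arch.add_pv("BEAM:CURRENT", lambda: 20.0)   # hypothetical PV name
arch.scan_once(t=0.0)
arch.add_pv("RF:POWER", lambda: 1.5)        # added while "running"
arch.scan_once(t=1.0)
```

    A plain append-only text format is exactly why no DB server is needed: a large disk and timestamped lines suffice.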

  5. The Nation's Memory: The United States National Archives and Records Administration. An Interview with Don W. Wilson, Archivist of the United States, National Archives and Records Administration.

    ERIC Educational Resources Information Center

    Brodhead, Michael J.; Zink, Steven D.

    1993-01-01

    Discusses the National Archives and Records Administration (NARA) through an interview with the Archivist of the United States, Don Wilson. Topics addressed include archival independence and congressional relations; national information policy; expansion plans; machine-readable archival records; preservation activities; and relations with other…

  6. Archiving InSight Lander Science Data Using PDS4 Standards

    NASA Astrophysics Data System (ADS)

    Stein, T.; Guinness, E. A.; Slavney, S.

    2017-12-01

    The InSight Mars Lander is scheduled for launch in 2018, and science data from the mission will be archived in the NASA Planetary Data System (PDS) using the new PDS4 standards. InSight is a geophysical lander with a science payload that includes a seismometer, a probe to measure subsurface temperatures and heat flow, a suite of meteorology instruments, a magnetometer, an experiment using radio tracking, and a robotic arm that will provide soil physical property information based on interactions with the surface. InSight is not the first science mission to archive its data using PDS4. However, PDS4 archives do not currently contain examples of the kinds of data that several of the InSight instruments will produce. Whereas the existing common PDS4 standards were sufficient for most of InSight's archiving requirements, the data generated by a few instruments required development of several extensions to the PDS4 information model. For example, the seismometer will deliver a version of its data in SEED format, which is standard for the terrestrial seismology community. This format required the design of a new product type in the PDS4 information model. A local data dictionary has also been developed for InSight that contains attributes that are not part of the common PDS4 dictionary. The local dictionary provides metadata relevant to all InSight data sets, and attributes specific to several of the instruments. Additional classes and attributes were designed for the existing PDS4 geometry dictionary that will capture metadata for the lander position and orientation, along with camera models for stereo image processing. Much of the InSight archive planning and design work has been done by a Data Archiving Working Group (DAWG), which has members from the InSight project and the PDS. The group coordinates archive design, schedules and peer review of the archive documentation and test products. 
The InSight DAWG archiving effort for PDS is being led by the PDS Geosciences

  7. Operational environmental satellite archives in the 21st Century

    NASA Astrophysics Data System (ADS)

    Barkstrom, Bruce R.; Bates, John J.; Privette, Jeff; Vizbulis, Rick

    2007-09-01

    NASA, NOAA, and USGS collections of Earth science data are large, federated, and have active user communities and collections. Our experience raises five categories of issues for long-term archival:
    * Organization of the data in the collections is not well-described by text-based categorization principles.
    * Metadata organization for these data is not well-described by Dublin Core and needs attention to data access and data use patterns.
    * Long-term archival requires risk management approaches to dealing with the unique threats to knowledge preservation specific to digital information.
    * Long-term archival requires careful attention to archival cost management.
    * Professional data stewards for these collections may require special training.
    This paper suggests three mechanisms for improving the quality of long-term archival:
    * Using a maturity model to assess the readiness of data for accession, for preservation, and for future data usefulness.
    * Developing a risk management strategy for systematically dealing with threats of data loss.
    * Developing a life-cycle cost model for continuously evolving the collections and the data centers that house them.

  8. Measurement of Selected Organic Trace Gases During TRACE-P

    NASA Technical Reports Server (NTRS)

    Atlas, Elliot

    2004-01-01

    Major goals of the TRACE-P mission were: 1) to investigate the chemical composition of radiatively important gases, aerosols, and their precursors in the Asian outflow over the western Pacific, and 2) to describe and understand the chemical evolution of the Asian outflow as it is transported and mixed into the global troposphere. The research performed as part of this proposal addressed these major goals with a study of the organic chemical composition of gases in the TRACE-P region. This work was a close collaboration with the Blake/Rowland research group at UC-Irvine, and they have provided a separate report for their funded effort.

  9. Archiving Mars Mission Data Sets with the Planetary Data System

    NASA Technical Reports Server (NTRS)

    Guinness, Edward A.

    2006-01-01

    This viewgraph presentation reviews the use of the Planetary Data System (PDS) to archive the datasets that are received from the Mars Missions. It reviews the lessons learned in the actual archiving process, and presents an overview of the actual archiving process. It also reviews the lessons learned from the perspectives of the projects, the data producers and the data users.

  10. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  11. Using whole genome sequencing to study American foulbrood epidemiology in honeybees

    PubMed Central

    Ågren, Joakim; Schäfer, Marc Oliver

    2017-01-01

    American foulbrood (AFB), caused by Paenibacillus larvae, is a devastating disease in honeybees. In most countries, the disease is controlled through compulsory burning of symptomatic colonies causing major economic losses in apiculture. The pathogen is endemic to honeybees world-wide and is readily transmitted via the movement of hive equipment or bees. Molecular epidemiology of AFB currently largely relies on placing isolates in one of four ERIC-genotypes. However, a more powerful alternative is multi-locus sequence typing (MLST) using whole-genome sequencing (WGS), which allows for high-resolution studies of disease outbreaks. To evaluate WGS as a tool for AFB-epidemiology, we applied core genome MLST (cgMLST) on isolates from a recent outbreak of AFB in Sweden. The high resolution of the cgMLST allowed different bacterial clones involved in the disease outbreak to be identified and to trace the source of infection. The source was found to be a beekeeper who had sold bees to two other beekeepers, proving the epidemiological link between them. No such conclusion could have been made using conventional MLST or ERIC-typing. This is the first time that WGS has been used to study the epidemiology of AFB. The results show that the technique is very powerful for high-resolution tracing of AFB-outbreaks. PMID:29140998
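    The cgMLST comparison underlying such source tracing reduces to counting differing alleles across a fixed set of core-genome loci. A toy sketch with invented allele profiles and an assumed clone threshold (real schemes use hundreds to thousands of loci):

```python
# Illustrative sketch of the cgMLST idea: each isolate is reduced to a
# profile of allele numbers over core-genome loci, and isolates are
# compared by how many loci differ. Profiles and threshold are invented.
def allele_distance(p1, p2):
    """Count loci with differing allele numbers (missing loci ignored)."""
    return sum(1 for a, b in zip(p1, p2)
               if a is not None and b is not None and a != b)

# Allele numbers at 8 core loci for three hypothetical isolates.
outbreak_a = [1, 4, 2, 2, 7, 1, 3, 5]
outbreak_b = [1, 4, 2, 2, 7, 1, 3, 6]   # single-locus variant of A
unrelated  = [3, 1, 9, 2, 2, 5, 3, 1]

# Assumed clustering threshold: isolates within 2 alleles = same clone.
same_clone = allele_distance(outbreak_a, outbreak_b) <= 2
```

    The high resolution comes from the number of loci: two isolates that look identical under four-category ERIC typing can still differ at many core-genome loci.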

  12. Strike Up the Score: Deriving Searchable and Playable Digital Formats from Sheet Music; Smart Objects and Open Archives; Building the Archives of the Future: Advanced in Preserving Electronic Records at the National Archives and Records Administration; From the Digitized to the Digital Library.

    ERIC Educational Resources Information Center

    Choudhury, G. Sayeed; DiLauro, Tim; Droettboom, Michael; Fujinaga, Ichiro; MacMillan, Karl; Nelson, Michael L.; Maly, Kurt; Thibodeau, Kenneth; Thaller, Manfred

    2001-01-01

    These articles describe the experiences of the Johns Hopkins University library in digitizing their collection of sheet music; motivation for buckets, Smart Object, Dumb Archive (SODA) and the Open Archives Initiative (OAI), and initial experiences using them in digital library (DL) testbeds; requirements for archival institutions, the National…

  13. The COROT ground-based archive and access system

    NASA Astrophysics Data System (ADS)

    Solano, E.; González-Riestra, R.; Catala, C.; Baglin, A.

    2002-01-01

    A prototype of the COROT ground-based archive and access system is presented here. The system has been developed at the Laboratorio de Astrofisica Espacial y Fisica Fundamental (LAEFF) and is based on the experience gained with the INES (IUE Newly Extracted Spectra) Archive.

  14. Social Science Data Archives and Libraries: A View to the Future.

    ERIC Educational Resources Information Center

    Clark, Barton M.

    1982-01-01

    Discusses factors militating against the integration of social science data archives and libraries in the near future, noting usage of materials, access, requisite skills of librarians, economic stability of archives, and existing structures which manage social science data archives. Considers the role of librarians, data access tools, and cataloging of machine-readable…

  15. HEASARC - The High Energy Astrophysics Science Archive Research Center

    NASA Technical Reports Server (NTRS)

    Smale, Alan P.

    2011-01-01

    The High Energy Astrophysics Science Archive Research Center (HEASARC) is NASA's archive for high-energy astrophysics and cosmic microwave background (CMB) data, supporting the broad science goals of NASA's Physics of the Cosmos theme. It provides vital scientific infrastructure to the community by standardizing science data formats and analysis programs, providing open access to NASA resources, and implementing powerful archive interfaces. Over the next five years the HEASARC will ingest observations from up to 12 operating missions, while serving data from these and over 30 archival missions to the community. The HEASARC archive presently contains over 37 TB of data, and will contain over 60 TB by the end of 2014. The HEASARC continues to secure major cost savings for NASA missions, providing a reusable mission-independent framework for reducing, analyzing, and archiving data. This approach was recognized in the NRC Portals to the Universe report (2007) as one of the HEASARC's great strengths. This poster describes the past and current activities of the HEASARC and our anticipated developments in coming years. These include preparations to support upcoming high energy missions (NuSTAR, Astro-H, GEMS) and ground-based and sub-orbital CMB experiments, as well as continued support of missions currently operating (Chandra, Fermi, RXTE, Suzaku, Swift, XMM-Newton and INTEGRAL). In 2012 the HEASARC (which now includes LAMBDA) will support the final nine-year WMAP data release. The HEASARC is also upgrading its archive querying and retrieval software with the new Xamin system, now in early release, and building on opportunities afforded by the growth of the Virtual Observatory and recent developments in virtual environments and cloud computing.

  16. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.

  17. Reproducibility of Illumina platform deep sequencing errors allows accurate determination of DNA barcodes in cells.

    PubMed

    Beltman, Joost B; Urbanus, Jos; Velds, Arno; van Rooij, Nienke; Rohr, Jan C; Naik, Shalin H; Schumacher, Ton N

    2016-04-02

    Next generation sequencing (NGS) of amplified DNA is a powerful tool to describe genetic heterogeneity within cell populations that can both be used to investigate the clonal structure of cell populations and to perform genetic lineage tracing. For applications in which both abundant and rare sequences are biologically relevant, the relatively high error rate of NGS techniques complicates data analysis, as it is difficult to distinguish rare true sequences from spurious sequences that are generated by PCR or sequencing errors. This issue, for instance, applies to cellular barcoding strategies that aim to follow the amount and type of offspring of single cells, by supplying these with unique heritable DNA tags. Here, we use genetic barcoding data from the Illumina HiSeq platform to show that straightforward read threshold-based filtering of data is typically insufficient to filter out spurious barcodes. Importantly, we demonstrate that specific sequencing errors occur at an approximately constant rate across different samples that are sequenced in parallel. We exploit this observation by developing a novel approach to filter out spurious sequences. Application of our new method demonstrates its value in the identification of true sequences amongst spurious sequences in biological data sets.
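    The key observation above, that a given sequencing error recurs at a roughly constant rate across samples sequenced in parallel, can be sketched in a simplified filter: a candidate barcode one mismatch from an abundant parent, whose counts are a small and roughly constant fraction of the parent's in every sample, is flagged as a recurrent error. This is an illustration of the idea, not the published algorithm:

```python
# Simplified sketch (not the published method): flag a candidate barcode
# as spurious when it is one mismatch from an abundant parent AND its read
# count is a small fraction of the parent's in every parallel sample --
# the signature of a sequencing error occurring at a constant rate.
def hamming1(a, b):
    """True if the two equal-length sequences differ at exactly one base."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def is_spurious(child_counts, parent_counts, max_rate=0.01):
    """True if child/parent count ratio stays below max_rate in all samples."""
    rates = [c / p for c, p in zip(child_counts, parent_counts) if p > 0]
    return bool(rates) and all(r <= max_rate for r in rates)

parent = "ACGTACGT"
child = "ACGTACGA"                       # differs only at the last base
parent_counts = [50000, 80000, 20000]    # reads per sample (invented)
child_counts = [260, 410, 95]            # ~0.5% of the parent everywhere

spurious = hamming1(child, parent) and is_spurious(child_counts, parent_counts)
```

    A true rare barcode, by contrast, would not track its neighbour's abundance across samples, so a simple global read threshold would either keep the error or discard the real clone.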

  18. Archive of digital boomer seismic reflection data collected during USGS field activities 95LCA03 and 96LCA02 in the Peace River of West-Central Florida, 1995 and 1996

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Tihansky, Ann B.; Lewelling, Bill R.; Flocks, James G.; Wiese, Dana S.; Kindinger, Jack G.; Harrison, Arnell S.

    2006-01-01

    In October and November of 1995 and February of 1996, the U.S. Geological Survey, in cooperation with the Southwest Florida Water Management District, conducted geophysical surveys of the Peace River in west-central Florida from east of Bartow to west of Arcadia. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, GIS files, Field Activity Collection System (FACS) logs, observers' logbooks, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles are also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
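    Reading a few fields of the SEG-Y binary file header directly is straightforward. A sketch using the standard big-endian byte offsets (sample interval at bytes 17-18 of the 400-byte binary header, samples per trace at 21-22, data format code at 25-26, following the 3200-byte textual header), exercised on a synthetic file image rather than real survey data:

```python
# Sketch of parsing a few SEG-Y binary-header fields. Per the SEG-Y
# standard, the 400-byte big-endian binary header follows the 3200-byte
# textual header; field offsets below are the standard ones.
import struct

def read_segy_binary_header(buf):
    bh = buf[3200:3600]                       # the 400-byte binary header
    interval_us, = struct.unpack(">H", bh[16:18])   # bytes 17-18
    n_samples, = struct.unpack(">H", bh[20:22])     # bytes 21-22
    fmt_code, = struct.unpack(">h", bh[24:26])      # bytes 25-26
    return {"sample_interval_us": interval_us,
            "samples_per_trace": n_samples,
            "format_code": fmt_code}

# Build a tiny synthetic file image to exercise the parser (invented values).
hdr = bytearray(3600)
hdr[3216:3218] = struct.pack(">H", 500)   # 500 us sampling
hdr[3220:3222] = struct.pack(">H", 1024)  # 1024 samples per trace
hdr[3224:3226] = struct.pack(">h", 5)     # code 5 = IEEE floating point
info = read_segy_binary_header(bytes(hdr))
```

    In practice one would hand the files to Seismic Unix or a SEG-Y library, as the report suggests; this only shows why the archived trace data remain readable with nothing but the published byte layout.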

  19. TCP Packet Trace Analysis. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Shepard, Timothy J.

    1991-01-01

    Examination of a trace of packets collected from the network is often the only method available for diagnosing protocol performance problems in computer networks. This thesis explores the use of packet traces to diagnose performance problems of the transport protocol TCP. Unfortunately, manual examination of these traces can be so tedious that effective analysis is not possible. The primary contribution of this thesis is a graphical method of displaying the packet trace which greatly reduces the tediousness of examining a packet trace. The graphical method is demonstrated by the examination of some packet traces of typical TCP connections. The performance of two different implementations of TCP sending data across a particular network path is compared. Traces many thousands of packets long are used to demonstrate how effectively the graphical method simplifies examination of long complicated traces. In the comparison of the two TCP implementations, the burstiness of the TCP transmitter appeared to be related to the achieved throughput. A method of quantifying this burstiness is presented and its possible relevance to understanding the performance of TCP is discussed.
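    The quantities such a time-sequence analysis exposes can be sketched numerically: overall throughput, and a simple burstiness measure taken from the spread of inter-packet gaps. The trace and the particular burstiness metric below are illustrative, not the thesis's own:

```python
# Sketch: from a trace of (timestamp_s, sequence_number) pairs, derive
# throughput and a simple burstiness measure (coefficient of variation of
# inter-packet gaps: 0 for perfect pacing, larger for bursts). The trace
# below is invented; the metric is an illustration, not the thesis's.
import statistics

trace = [(0.00, 0), (0.01, 1460), (0.02, 2920),   # a back-to-back burst
         (0.50, 4380), (0.51, 5840), (1.00, 7300)]

def throughput_bps(trace):
    (t0, s0), (t1, s1) = trace[0], trace[-1]
    return 8 * (s1 - s0) / (t1 - t0)      # bits per second over the trace

def burstiness(trace):
    gaps = [b[0] - a[0] for a, b in zip(trace, trace[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

bps = throughput_bps(trace)
cv = burstiness(trace)
```

    Plotting sequence number against time, as the thesis does, shows the same thing visually: bursts appear as near-vertical steps separated by idle flats.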

  20. Next-Generation Sequencing of the Chrysanthemum nankingense (Asteraceae) Transcriptome Permits Large-Scale Unigene Assembly and SSR Marker Discovery

    PubMed Central

    Wang, Haibin; Jiang, Jiafu; Chen, Sumei; Qi, Xiangyu; Peng, Hui; Li, Pirui; Song, Aiping; Guan, Zhiyong; Fang, Weimin; Liao, Yuan; Chen, Fadi

    2013-01-01

    Background Simple sequence repeats (SSRs) are ubiquitous in eukaryotic genomes. Chrysanthemum is one of the largest genera in the Asteraceae family. Only a few Chrysanthemum expressed sequence tag (EST) sequences have been acquired to date, so the number of available EST-SSR markers is very low. Methodology/Principal Findings Illumina paired-end sequencing technology produced over 53 million sequencing reads from C. nankingense mRNA. The subsequent de novo assembly yielded 70,895 unigenes, of which 45,789 (64.59%) showed similarity to sequences in the NCBI database. Of the 45,789 sequences, 107 have hits to the Chrysanthemum Nr protein database; 679 and 277 sequences have hits to the databases of Helianthus and Lactuca species, respectively. MISA software identified a large number of putative EST-SSRs, allowing 1,788 primer pairs to be designed from the de novo transcriptome sequence and a further 363 from archival EST sequence. Among 100 randomly chosen primer pairs, 81 markers produced amplicons and 20 were polymorphic for genotype analysis in Chrysanthemum. The results showed that most (but not all) of the assays were transferable across species and that they exposed a significant amount of allelic diversity. Conclusions/Significance SSR markers acquired by transcriptome sequencing are potentially useful for marker-assisted breeding and genetic analysis in the genus Chrysanthemum and its related genera. PMID:23626799

  1. Identification and tracing of Enterococcus spp. by RAPD-PCR in traditional fermented sausages and meat environment.

    PubMed

    Martín, B; Corominas, L; Garriga, M; Aymerich, T

    2009-01-01

    Four local small-scale factories were studied to determine the sources of enterococci in traditional fermented sausages. Different points during the production of a traditional fermented sausage type (fuet) were evaluated. Randomly amplified polymorphic DNA (RAPD)-PCR was used to type 596 Enterococcus isolates from the final products, the initial meat batter, the casing, the workers' hands, and the equipment. Species-specific multiplex PCR and partial sequencing of the atpA and 16S rRNA genes allowed the identification of the isolates: Enterococcus faecalis (31.4%), Enterococcus faecium (30.7%), Enterococcus sanguinicola (14.9%), Enterococcus devriesei (9.7%), Enterococcus malodoratus (7.2%), Enterococcus gilvus (1.0%), Enterococcus gallinarum (1.3%), Enterococcus casseliflavus (3.4%), Enterococcus hermanniensis (0.2%), and Enterococcus durans (0.2%). A total of 92 different RAPD-PCR profiles were distributed among the different factories and samples evaluated. Most of the genotypes found in fuet samples were traced back to their source. The major sources of enterococci in the traditional fermented sausages studied were the equipment, followed by the raw ingredients, although a low proportion was traced back to a human origin. This work contributes to determining the source of enterococcal contamination in fermented sausages and to knowledge of the meat environment.

  2. Comprehensive planning of data archive in Japanese planetary missions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yukio; Shinohara, Iku; Hoshino, Hirokazu; Tateno, Naoki; Hareyama, Makoto; Okada, Naoki; Ebisawa, Ken

    Japan Aerospace Exploration Agency (JAXA) provides HAYABUSA and KAGUYA data as planetary data archives. These data archives, however, were prepared independently; as a result, data formats are inconsistent and the knowledge gained from the archiving activity has not been inherited. Recently, discussion of a comprehensive data archive plan has started in preparation for upcoming planetary missions, and this comprehensive plan must proceed in several steps. The framework of the comprehensive plan is divided into four items: Preparation, Evaluation, Preservation, and Service. 1. PREPARATION FRAMEWORK Data are classified into several types: raw data; level-0, 1, and 2 processed data; ancillary data; etc. Responsibility for preparing the mission data rests with the instrument teams, but preparation of data beyond the mission data and support of data management are essential to establish unified conventions and formats across the instruments of a mission, and across missions. 2. EVALUATION FRAMEWORK Evaluation has two aspects: format and quality. Format evaluation is often discussed within the preparation framework. Data quality evaluation, often called quality assurance (QA) or quality control (QC), must be performed by a third party apart from the preparation teams. An instrument team has the initiative for the preparation itself, and a third-party group is organized to evaluate the instrument team's activity. 3. PRESERVATION FRAMEWORK The main topics of this framework are document management, archiving structure, and simple access methods. A mission produces many documents in the course of its development, and instrument development is no exception. During the long-term development of a mission, many documents are obsoleted and updated repeatedly. A smart system will help instrument teams reduce the burden of document management and archiving. 
JAXA attempts to follow PDS conventions

  3. Theory and Practice of Lineage Tracing.

    PubMed

    Hsu, Ya-Chieh

    2015-11-01

    Lineage tracing is a method that delineates all progeny produced by a single cell or a group of cells. The possibility of performing lineage tracing initiated the field of Developmental Biology and continues to revolutionize Stem Cell Biology. Here, I introduce the principles behind a successful lineage-tracing experiment. In addition, I summarize and compare different methods for conducting lineage tracing and provide examples of how these strategies can be implemented to answer fundamental questions in development and regeneration. The advantages and limitations of each method are also discussed. © 2015 AlphaMed Press.

  4. Archive of digital Boomer seismic reflection data collected during USGS Cruises 94CCT01 and 95CCT01, eastern Texas and western Louisiana, 1994 and 1995

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Morton, Robert A.; Wiese, Dana S.

    2004-01-01

    In June of 1994 and August and September of 1995, the U.S. Geological Survey, in cooperation with the University of Texas Bureau of Economic Geology, conducted geophysical surveys of the Sabine and Calcasieu Lake areas and the Gulf of Mexico offshore eastern Texas and western Louisiana. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, observers' logbooks, GIS information, and formal FGDC metadata. In addition, a filtered and gained GIF image of each seismic profile is provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Examples of SU processing scripts and in-house (USGS) software for viewing SEG-Y files (Zihlman, 1992) are also provided. Processed profile images, trackline maps, navigation files, and formal metadata may be viewed with a web browser. Scanned handwritten logbooks and Field Activity Collection System (FACS) logs may be viewed with Adobe Reader.

  5. The European HST Science Data Archive. [and Data Management Facility (DMF)

    NASA Technical Reports Server (NTRS)

    Pasian, F.; Pirenne, B.; Albrecht, R.; Russo, G.

    1993-01-01

    The paper describes the European HST Science Data Archive. Particular attention is given to the flow from the HST spacecraft to the Science Data Archive at the Space Telescope European Coordinating Facility (ST-ECF); the archiving system at the ST-ECF, including the hardware and software system structure; the operations at the ST-ECF and differences with the Data Management Facility; and the current developments. A diagram of the logical structure and data flow of the system managing the European HST Science Data Archive is included.

  6. A simplified soil extraction sequence to monitor the main and trace element speciation in soil after compost and mineral fertilizer additions upon the composition of wheat grains

    NASA Astrophysics Data System (ADS)

    Sager, Manfred; Erhart, Eva

    2016-04-01

    High-quality biological waste treatment aims at producing compost in order to maintain a clean environment and to sustain soil organic carbon levels. Fertilization with compost as a source of organic carbon, nutrients, and accessory elements, as well as fertilization with mineral N and PK fertilizer, has been tested in a field experiment on a calcaric Fluvisol in the Danube wetlands, at 4 levels each. Yields of wheat were recorded, and grains and soils were sampled from each treatment and analyzed for main and trace element composition. The corresponding soils were characterized by mobile phases, obtained by leaching with 0.16 M acetic acid to cover exchangeables plus carbonates, and subsequently with 0.1 M oxalate buffer at pH 3 to dissolve the pedogenic oxides. Total amounts were obtained from digests with perchloric-nitric-hydrofluoric acid. For quasi-total amounts, aqua regia was replaced by pressure decomposition with KClO3 in dilute nitric acid. The proposed extraction sequence permits soil to be analyzed and interpreted for main elements, trace elements, nutrients, and anions simultaneously. Factor analyses of soil extracts obtained from dilute acetic acid revealed Ba-Be-Cd-Cu-Li-S (traces), Ca-Mg-Mn (main carbonates), Al-Fe-B, Y, and P-K (nutrients) as chemically feasible principal components. Subsequent soil extracts from oxalate contained Al-B-Co-K-Na-Pb-Si-V-S (maybe acid silicate weathering), Cr-Li-Ni-Sr-Ti (maybe basic silicate weathering), Be-Cu-Fe-P, Co-Mg-Mn-Zn (Mn-oxides), and Ba-Sc as principal components. Factor analyses of total element data distinguished the principal components Ce-La-Li-Sc-Y-P (rare earths), Al-Ca-Fe-K-Mg-Na-P (main elements), Cd-Co-Cr-Cu-Ni-Zn (trace elements), As-Pb (contaminants), Ba-Mn-Sr, and Ti, which also looks chemically feasible. Factor analyses of those soil fractions which presumably form the main fractions of exchangeables, carbonates, pedogenic oxides, and silicates showed no cross connections, except for P. Oxalate

  7. Classification of conductance traces with recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.
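    The paper's trained network is not reproduced in this abstract. As a minimal sketch of the data flow it describes (a recurrent network consuming a conductance trace one sample at a time and emitting a class probability), here is a single-layer tanh RNN with a sigmoid read-out in NumPy; the weights are random and untrained, so the output only illustrates the architecture, not the paper's classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained single-layer RNN: hidden size 8, scalar input per time step
H = 8
W_xh = rng.normal(scale=0.5, size=(H, 1))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(H, H))   # hidden -> hidden
W_hy = rng.normal(scale=0.5, size=(1, H))   # hidden -> output

def classify_trace(trace):
    """Run a conductance trace through the RNN; return a class probability.

    In practice the weights would be trained on labeled traces (simulated
    or hand-picked experimental ones, as in the paper); here they are
    random, so only the data flow is illustrated.
    """
    h = np.zeros((H, 1))
    for g in trace:                       # one conductance sample per step
        h = np.tanh(W_xh * g + W_hh @ h)  # recurrent update
    logit = float(W_hy @ h)
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid read-out
```

    Because the recurrence consumes one sample at a time, the same network handles full and partial traces, which is what makes the partial-trace experiments in the paper possible.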

  8. BOOK REVIEW: Treasure-Hunting in Astronomical Plate Archives.

    NASA Astrophysics Data System (ADS)

    Kroll, Peter; La Dous, Constanze; Brauer, Hans-Juergen; Sterken, C.

    This book consists of the proceedings of a conference on the exploration of the invaluable scientific treasure present in astronomical plate archives worldwide. The book incorporates fifty scientific papers covering almost 250 pages. There are several very useful papers, such as, for example, an introduction to the world's large plate archives that serves as a guide for the beginning user of plate archives. It includes a very useful list of twelve major archives with many details on their advantages (completeness, number of plates, classification system, and homogeneity of time coverage) and their limitations (plate quality, access, electronic catalogues, photographic services, limiting magnitudes, search software, and cost to the user). Other topics cover available contemporary digitization machines, the applications of commercial flatbed scanners, technical aspects of plate consulting, astrophysical applications and astrometric uses, data reduction, data archiving and retrieval, and strategies for finding astrophysically useful information on plates. The astrophysical coverage is very broad: from solar-system bodies to variable stars, sky surveys, and sky patrols covering the galactic and extragalactic domain, and even gravitational lensing. The book concludes with an illuminating paper on ALADIN, the reference tool for identification of astronomical sources. This work can be considered a kind of field guide, and is recommended reading for anyone who wishes to undertake small- or large-scale consulting of photographic plate material. A shortcoming of the proceedings is the fact that very few papers have abstracts. BOOK REVIEW: Treasure-Hunting in Astronomical Plate Archives. Proceedings of the international workshop held at Sonneberg Observatory, March 4-6, 1999. Peter Kroll, Constanze la Dous and Hans-Juergen Brauer (Eds.)

  9. Using high-throughput barcode sequencing to efficiently map connectomes.

    PubMed

    Peikon, Ian D; Kebschull, Justus M; Vagin, Vasily V; Ravens, Diana I; Sun, Yu-Chi; Brouzes, Eric; Corrêa, Ivan R; Bressan, Dario; Zador, Anthony M

    2017-07-07

    The function of a neural circuit is determined by the details of its synaptic connections. At present, the only available method for determining a neural wiring diagram with single synapse precision-a 'connectome'-is based on imaging methods that are slow, labor-intensive and expensive. Here, we present SYNseq, a method for converting the connectome into a form that can exploit the speed and low cost of modern high-throughput DNA sequencing. In SYNseq, each neuron is labeled with a unique random nucleotide sequence-an RNA 'barcode'-which is targeted to the synapse using engineered proteins. Barcodes in pre- and postsynaptic neurons are then associated through protein-protein crosslinking across the synapse, extracted from the tissue, and joined into a form suitable for sequencing. Although our failure to develop an efficient barcode joining scheme precludes the widespread application of this approach, we expect that with further development SYNseq will enable tracing of complex circuits at high speed and low cost. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
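    The key premise of the barcoding approach is that a random nucleotide sequence of modest length is effectively unique per neuron. As a hedged illustration (the barcode length and generation here are hypothetical, not the actual SYNseq constructs), a 30-nt random sequence has 4**30 (~10**18) possibilities, so collisions among millions of barcodes are vanishingly rare:

```python
import random

def make_barcodes(n, length=30, seed=42):
    """Generate n distinct random nucleotide barcodes.

    Illustrative only: real SYNseq barcodes are expressed as RNA from
    engineered constructs, not sampled in software. We deduplicate
    explicitly even though collisions are already extremely unlikely.
    """
    rng = random.Random(seed)
    barcodes = set()
    while len(barcodes) < n:
        barcodes.add("".join(rng.choice("ACGT") for _ in range(length)))
    return sorted(barcodes)
```

    Sequencing joined pre/postsynaptic barcode pairs then reduces connectome mapping to counting co-occurrences of barcode pairs in the read data.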

  10. Kepler Archive Manual

    NASA Technical Reports Server (NTRS)

    Thompson, Susan E.; Fraquelli, Dorothy; Van Cleve, Jeffrey E.; Caldwell, Douglas A.

    2016-01-01

    A description of Kepler, its design, performance, and operational constraints may be found in the Kepler Instrument Handbook (KIH, Van Cleve & Caldwell 2016). Kepler calibration and data processing are described in the Kepler Data Processing Handbook (KDPH, Jenkins et al. 2016; Fanelli et al. 2011). Science users should also consult the special ApJ Letters devoted to early Kepler results and mission design (April 2010, ApJL, Vol. 713, L79-L207). Additional technical details regarding the data processing and data quality can be found in the Kepler Data Characteristics Handbook (KDCH, Christiansen et al. 2013) and the Data Release Notes (DRN). This archive manual specifically documents the file formats as they exist for the last data release of Kepler, Data Release 25 (KSCI-19065-002). The earlier versions of the archive manual and data release notes act as documentation for the earlier versions of the data files.

  11. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Archiving and Quality Control

    NASA Astrophysics Data System (ADS)

    He, B.; Cui, C.; Fan, D.; Li, C.; Xiao, J.; Yu, C.; Wang, C.; Cao, Z.; Chen, J.; Yi, W.; Li, S.; Mi, L.; Yang, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences) (Cui et al. 2014). To archive the astronomical data in China, we present the implementation of the astronomical data archiving system (ADAS). Data archiving and quality control are the infrastructure for AstroCloud. Throughout the entire data life cycle, the data archiving system standardizes data, transfers data, logs observational data, archives ambient data, and stores these data and metadata in a database. Quality control covers the whole process and all aspects of data archiving.

  12. PCB Analysis Plan for Tank Archive Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NGUYEN, D.M.

    2001-03-22

    This analysis plan specifies laboratory analysis, quality assurance/quality control (QA/QC), and data reporting requirements for analyzing polychlorinated biphenyls (PCB) concentrations in archive samples. Tank waste archive samples that are planned for PCB analysis are identified in Nguyen 2001. The tanks and samples are summarized in Table 1-1. The analytical data will be used to establish a PCB baseline inventory in Hanford tanks.

  13. The NASA Exoplanet Science Institute Archives: KOA and NStED

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Ciardi, D.; Abajian, M.; Barlow, T.; Bryden, G.; von Braun, K.; Good, J.; Kane, S.; Kong, M.; Laity, A.; Lynn, M.; Elroy, D. M.; Plavchan, P.; Ramirez, S.; Schmitz, M.; Stauffer, J.; Wyatt, P.; Zhang, A.; Goodrich, R.; Mader, J.; Tran, H.; Tsubota, M.; Beekley, A.; Berukoff, S.; Chan, B.; Lau, C.; Regelson, M.; Saucedo, M.; Swain, M.

    2010-12-01

    The NASA Exoplanet Science Institute (NExScI) maintains a series of archival services in support of NASA’s planet finding and characterization goals. Two of the larger archival services at NExScI are the Keck Observatory Archive (KOA) and the NASA Star and Exoplanet Database (NStED). KOA, a collaboration between the W. M. Keck Observatory and NExScI, serves raw data from the High Resolution Echelle Spectrograph (HIRES) and extracted spectral browse products. As of June 2009, KOA hosts over 28 million files (4.7 TB) from over 2,000 nights. In Spring 2010, it will begin to serve data from the Near-Infrared Echelle Spectrograph (NIRSPEC). NStED is a general purpose archive with the aim of providing support for NASA’s planet finding and characterization goals, and stellar astrophysics. There are two principal components of NStED: a database of (currently) all known exoplanets, and images; and an archive dedicated to high precision photometric surveys for transiting exoplanets. NStED is the US portal to the CNES mission CoRoT, the first space mission dedicated to the discovery and characterization of exoplanets. These archives share a common software and hardware architecture with the NASA/IPAC Infrared Science Archive (IRSA). The software architecture consists of standalone utilities that perform generic query and retrieval functions. They are called through program interfaces and plugged together to form applications through a simple executive library.

  14. Planetary Data Archiving Activities of ISRO

    NASA Astrophysics Data System (ADS)

    Gopala Krishna, Barla; D, Rao J.; Thakkar, Navita; Prashar, Ajay; Manthira Moorthi, S.

    ISRO launched its first planetary mission, to the Moon, viz. Chandrayaan-1, on October 22, 2008. This mission carried eleven instruments; a wealth of science data was collected during its mission life (November 2008 to August 2009), which is archived at the Indian Space Science Data Centre (ISSDC). The data centre ISSDC is responsible for the ingest, storage, processing, archiving, and dissemination of the payload and related ancillary data, in addition to real-time spacecraft operations support. ISSDC is designed to provide high computation power and large storage, and hosts a variety of applications necessary to support all the planetary and space science missions of ISRO. The state-of-the-art architecture of ISSDC provides the facility to ingest the raw payload data of all the science payloads of the science satellites automatically, to process the raw data and generate payload-specific processed outputs, to generate higher-level products, and to disseminate the data sets to principal investigators, guest observers, payload operations centres (POC), and the general public. The data archive makes use of the well-proven archive standards of the Planetary Data System (PDS). The long-term archive (LTA) for five payloads of Chandrayaan-1 data, viz. TMC, HySI, SARA, M3, and MiniSAR, was released from ISSDC on 19th April 2013 (http://www.issdc.gov.in) to the users. Additionally, DEMs generated from possible passes of Chandrayaan-1 TMC stereo data and sample map sheets of the Lunar Atlas are also archived and released from ISSDC along with the LTA. 
The Mars Orbiter Mission (MOM) is the most recent planetary mission, launched on November 5, 2013, and currently en route to Mars. It carries five instruments (http://www.isro.org): the Mars Color Camera (MCC) to map various morphological features on Mars at varying resolutions and scales using the unique elliptical orbit, the Methane Sensor for Mars (MSM) to measure the total column of methane in the Martian atmosphere, and the Thermal Infrared Imaging Spectrometer (TIS) to map surface

  15. Dusting the Archives of Childhood: Child Welfare Records as Historical Sources

    ERIC Educational Resources Information Center

    Vehkalahti, Kaisa

    2016-01-01

    Using administrative sources in the history of education and childhood involves a range of methodological and ethical considerations. This article discusses these problems, as well as the role of archives and archival policies in preserving history and shaping our understanding of past childhoods. Using Finnish child welfare archives from the…

  16. Checklist of Standards Applicable to the Preservation of Archives and Manuscripts.

    ERIC Educational Resources Information Center

    Walch, Victoria Irons, Comp.

    1990-01-01

    Presents a checklist of more than 150 standards that have been identified by the Society of American Archivists (SAA) Task Force on Archival Standards as applicable to the preservation of archives and manuscripts. The organizations that developed the standards are described, and increased archival participation in the standards development process…

  17. Increasing Access to Archival Records in Library Online Public Access Catalogs.

    ERIC Educational Resources Information Center

    Gilmore, Matthew B.

    1988-01-01

    Looks at the use of online public access catalogs, the utility of subject and call-number searching, and possible archival applications. The Wallace Archives at the Claremont Colleges is used as an example of the availability of bibliographic descriptions of multiformat archival materials through the library catalog. Sample records and searches…

  18. Early Miocene sequence development across the New Jersey margin

    USGS Publications Warehouse

    Monteverde, D.H.; Mountain, Gregory S.; Miller, K.G.

    2008-01-01

    Sequence stratigraphy provides an understanding of the interplay between eustasy, sediment supply and accommodation in the sedimentary construction of passive margins. We used this approach to follow the early to middle Miocene growth of the New Jersey margin and analyse the connection between relative changes of sea level and variable sediment supply. Eleven candidate sequence boundaries were traced in high-resolution multi-channel seismic profiles across the inner margin and matched to geophysical log signatures and lithologic changes in ODP Leg 150X onshore coreholes. Chronologies at these drill sites were then used to assign ages to the intervening seismic sequences. We conclude that the regional and global correlation of early Miocene sequences suggests a dominant role of global sea-level change but margin progradation was controlled by localized sediment contribution and that local conditions played a large role in sequence formation and preservation. Lowstand deposits were regionally restricted and their locations point to both single and multiple sediment sources. The distribution of highstand deposits, by contrast, documents redistribution by along shelf currents. We find no evidence that sea level fell below the elevation of the clinoform rollover, and the existence of extensive lowstand deposits seaward of this inflection point indicates efficient cross-shelf sediment transport mechanisms despite the apparent lack of well-developed fluvial drainage. © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing.

  19. Core Genome Multilocus Sequence Typing Scheme for High-Resolution Typing of Enterococcus faecium.

    PubMed

    de Been, Mark; Pinholt, Mette; Top, Janetta; Bletz, Stefan; Mellmann, Alexander; van Schaik, Willem; Brouwer, Ellen; Rogers, Malbert; Kraat, Yvette; Bonten, Marc; Corander, Jukka; Westh, Henrik; Harmsen, Dag; Willems, Rob J L

    2015-12-01

    Enterococcus faecium, a common inhabitant of the human gut, has emerged in the last 2 decades as an important multidrug-resistant nosocomial pathogen. Since the start of the 21st century, multilocus sequence typing (MLST) has been used to study the molecular epidemiology of E. faecium. However, due to the use of a small number of genes, the resolution of MLST is limited. Whole-genome sequencing (WGS) now allows for high-resolution tracing of outbreaks, but current WGS-based approaches lack standardization, rendering them less suitable for interlaboratory prospective surveillance. To overcome this limitation, we developed a core genome MLST (cgMLST) scheme for E. faecium. cgMLST transfers genome-wide single nucleotide polymorphism (SNP) diversity into a standardized and portable allele numbering system that is far less computationally intensive than SNP-based analysis of WGS data. The E. faecium cgMLST scheme was built using 40 genome sequences that represented the diversity of the species. The scheme consists of 1,423 cgMLST target genes. To test the performance of the scheme, we performed WGS analysis of 103 outbreak isolates from five different hospitals in the Netherlands, Denmark, and Germany. The cgMLST scheme performed well in distinguishing between epidemiologically related and unrelated isolates, even between those that had the same sequence type (ST), demonstrating the higher discriminatory power of this cgMLST scheme over that of conventional MLST. We also show that in terms of resolution, the performance of the E. faecium cgMLST scheme is equivalent to that of an SNP-based approach. In conclusion, the cgMLST scheme developed in this study facilitates rapid, standardized, and high-resolution tracing of E. faecium outbreaks.
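    The core idea of an allele numbering system can be sketched compactly: per target gene, each distinct sequence receives a stable integer, and an isolate's cgMLST profile is the vector of those integers. The sketch below is illustrative only (the real scheme curates 1,423 validated target genes; gene names and sequences here are made up):

```python
from collections import defaultdict

class AlleleNumbering:
    """Assign stable integer allele numbers per locus, cgMLST-style.

    Illustrative sketch: real schemes validate alleles (length, frameshifts,
    premature stops) before assigning a number.
    """
    def __init__(self):
        self._alleles = defaultdict(dict)  # locus -> {sequence: number}

    def number(self, locus, sequence):
        table = self._alleles[locus]
        if sequence not in table:
            table[sequence] = len(table) + 1  # next free allele number
        return table[sequence]

    def profile(self, isolate_seqs):
        """Map {locus: sequence} to {locus: allele number}."""
        return {loc: self.number(loc, seq)
                for loc, seq in isolate_seqs.items()}
```

    Comparing two isolates then reduces to counting differing allele numbers across loci, which is far cheaper than whole-genome SNP analysis and gives portable results between laboratories.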

  20. Teaching Electronic Records Management in the Archival Curriculum

    ERIC Educational Resources Information Center

    Zhang, Jane

    2016-01-01

    Electronic records management has been incorporated into the archival curriculum in North America since the 1990s. This study reported in this paper provides a systematic analysis of the content of electronic records management (ERM) courses currently taught in archival education programs. Through the analysis of course combinations and their…

  1. The American Archival Profession and Information Technology Standards.

    ERIC Educational Resources Information Center

    Cox, Richard J.

    1992-01-01

    Discussion of the use of standards by archivists highlights the U.S. MARC AMC (Archives-Manuscript Control) format for reporting archival records and manuscripts; their interest in specific standards being developed for the OSI (Open Systems Interconnection) reference model; and the management of records in electronic formats. (16 references) (LAE)

  2. Ray Tracing with Virtual Objects.

    ERIC Educational Resources Information Center

    Leinoff, Stuart

    1991-01-01

    Introduces the method of ray tracing to analyze the refraction or reflection of real or virtual images from multiple optical devices. Discusses ray-tracing techniques for locating images using convex and concave lenses or mirrors. (MDH)
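    The ray-tracing constructions the article describes are summarized algebraically by the thin-lens equation, 1/f = 1/d_obj + 1/d_img, with magnification m = -d_img/d_obj. A minimal sketch under the usual sign conventions (positive f for a converging lens; positive d_img for a real image, negative for a virtual one):

```python
def thin_lens_image(f, d_obj):
    """Image distance and magnification from the thin-lens equation.

    1/f = 1/d_obj + 1/d_img, so d_img = 1 / (1/f - 1/d_obj);
    magnification m = -d_img / d_obj.
    """
    if d_obj == f:
        raise ValueError("object at focal point: image at infinity")
    d_img = 1.0 / (1.0 / f - 1.0 / d_obj)
    return d_img, -d_img / d_obj
```

    For example, a converging lens with f = 10 and an object at d_obj = 30 gives a real, inverted, half-size image at d_img = 15; moving the object inside the focal length (d_obj = 5) gives a negative d_img, i.e. a virtual image, matching the ray-tracing construction.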

  3. Viking Seismometer PDS Archive Dataset

    NASA Astrophysics Data System (ADS)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event- modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  4. What We Have Learned About the Existing Trace Element Partitioning data During the Population Phase of traceDs

    NASA Astrophysics Data System (ADS)

    Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.

    2015-12-01

    The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web-based interface (using the portal lepr.ofm-research.org). We set a minimum standard for inclusion, with the threshold criteria being the inclusion of: experimental conditions (temperature, pressure, device, container, time, etc.); major element composition of the phases; and trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale for not including such data is that the degree of equilibration is unknown and, more importantly, no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the composition of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important are: ~20% of the papers do not satisfy one or more of the threshold criteria; the standard format for presenting data is the average, a convention that became standard at a time when there were space constraints on publication, in spite of the fact that all the information can now be published as electronic supplements; and the uncertainties published with the compositional data are often not adequately explained (e.g. 1 or 2 sigma, standard deviation of the average, etc.). 
We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication
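
    The threshold criteria described above amount to a record-completeness check before ingestion. A minimal sketch in Python (field names and record layout are hypothetical illustrations, not the actual traceDs/LEPR schema):

```python
# Hypothetical field names; the real traceDs schema stores far richer metadata.
REQUIRED = (
    "conditions",      # temperature, pressure, device, container, run time, ...
    "major_elements",  # major element compositions of the phases
    "trace_elements",  # trace element analyses of the phases
)

def meets_threshold(record):
    """Return True only if the record reports every threshold criterion."""
    return all(record.get(field) for field in REQUIRED)

complete = {
    "conditions": {"T_C": 1200, "P_GPa": 1.0, "device": "piston cylinder"},
    "major_elements": {"olivine": {"SiO2": 40.8, "MgO": 49.1}},
    "trace_elements": {"olivine": {"Ni_ppm": 2900}},
}
# A source that omits trace element analyses of the phases is excluded:
missing_trace = {k: v for k, v in complete.items() if k != "trace_elements"}
```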

  5. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but also because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system, and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability; on an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future-forward dilemma has a solution: technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers, and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  6. Masking as an effective quality control method for next-generation sequencing data analysis.

    PubMed

    Yun, Sajung; Yun, Sijung

    2014-12-13

    Next-generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low-quality base calls with 'N's (undetermined bases), whereas trimming removes low-quality bases, resulting in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to data analysis without preprocessing. The false-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. Although trimming is currently more commonly used in the field, we recommend masking over trimming as the more effective preprocessing method for next-generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate. The Perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
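
    The two preprocessing strategies compared here can be sketched directly. This is an illustrative Python implementation of the general idea (the authors distribute a Perl script), using Phred quality scores and a hypothetical threshold of 20:

```python
def mask_low_quality(seq, quals, threshold=20):
    """Masking: substitute base calls below the Phred threshold with 'N'.
    The read keeps its original length and coordinates."""
    return "".join(b if q >= threshold else "N" for b, q in zip(seq, quals))

def trim_low_quality(seq, quals, threshold=20):
    """Trimming: drop low-quality bases from the 3' end, shortening the read."""
    end = len(seq)
    while end > 0 and quals[end - 1] < threshold:
        end -= 1
    return seq[:end]

read  = "ACGTACGT"
quals = [30, 32, 12, 35, 31, 9, 8, 7]   # Phred scores per base
```

    Note the trade-off the abstract describes: masking flags unreliable positions while preserving read length and alignment coordinates, whereas trimming discards trailing bases (and any internal low-quality base survives untouched).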

  7. The long hold: Storing data at the National Archives

    NASA Technical Reports Server (NTRS)

    Thibodeau, Kenneth

    1992-01-01

    The National Archives is, in many respects, in a unique position. For example, I find people from other organizations describing an archival medium as one which will last for three to five years. At the National Archives, we deal with the centuries, not years. From our perspective, there is no archival medium for data storage, and we do not expect there will ever be one. Predicting the long-term future of information technology beyond a mere five or ten years approaches the occult arts. But one prediction is probably safe. It is that the technology will continue to change, at least until analysts start talking about the post-information age. If we did have a medium which lasted a hundred years or longer, we probably would not have a device capable of reading it. The issue of obsolescence, as opposed to media stability, is more complex and more costly. It is especially complex at the National Archives because of two other aspects of our peculiar position. The first aspect is that we deal with incoherent data. The second is that we are charged with satisfying unknown and unknowable requirements. A brief overview of these aspects is presented.

  8. An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.

    PubMed

    Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun

    2017-09-01

    The selection of swarm leaders (i.e., the personal best and global best), is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
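
    Two ingredients of AgMOPSO, decomposition of the MOP into scalar subproblems and leaders drawn from the external archive for the velocity update, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Tchebycheff scalarization, weight vectors, and coefficient values are assumptions for the sake of the example.

```python
import random

def tchebycheff(f, w, z):
    """Scalarize an objective vector f for one subproblem with weight
    vector w and ideal point z (Tchebycheff decomposition)."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z))

def velocity_update(v, x, leader1, leader2, w=0.4, c1=1.5, c2=1.5, rng=random):
    """Standard PSO velocity update; in an archive-guided scheme both
    leaders are selected from the external archive rather than from the
    particle's own search history."""
    return [w * vi + c1 * rng.random() * (l1 - xi) + c2 * rng.random() * (l2 - xi)
            for vi, xi, l1, l2 in zip(v, x, leader1, leader2)]

# One bi-objective point evaluated under two subproblems:
f, z = (0.8, 0.2), (0.0, 0.0)
g1 = tchebycheff(f, (0.9, 0.1), z)   # subproblem weighting objective 1
g2 = tchebycheff(f, (0.1, 0.9), z)   # subproblem weighting objective 2
```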

  9. Computer program for optical systems ray tracing

    NASA Technical Reports Server (NTRS)

    Ferguson, T. J.; Konn, H.

    1967-01-01

    Program traces rays of light through optical systems consisting of up to 65 different optical surfaces and computes the aberrations. For design purposes, paraxial tracings with astigmatism and third-order tracings are provided.
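
    Paraxial ray tracing of the kind such programs perform can be illustrated with ray-transfer (ABCD) operations on a ray's height and angle. This sketch is a modern illustration of the paraxial technique, not the 1967 program itself:

```python
def propagate(ray, d):
    """Free-space transfer over distance d: height changes, angle does not."""
    y, u = ray
    return (y + d * u, u)

def thin_lens(ray, f):
    """Thin lens of focal length f bends the ray angle in proportion to height."""
    y, u = ray
    return (y, u - y / f)

# A ray parallel to the axis at height 1 crosses the axis at the focal plane:
ray = (1.0, 0.0)
ray = thin_lens(ray, f=100.0)
ray = propagate(ray, d=100.0)
```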

  10. Online Continuous Trace Process Analytics Using Multiplexing Gas Chromatography.

    PubMed

    Wunsch, Marco R; Lehnig, Rudolf; Trapp, Oliver

    2017-04-04

    The analysis of impurities at a trace level in chemical products, nutrition additives, and drugs is highly important to guarantee safe products suitable for consumption. However, trace analysis in the presence of a dominating component can be a challenging task because of noncompatible linear detection ranges or strong signal overlap that suppresses the signal of interest. Here, we developed a technique for quantitative analysis using multiplexing gas chromatography (mpGC) for continuous and completely automated process trace analytics, exemplified by the analysis of a CO2 stream in a production plant for detection of benzene, toluene, ethylbenzene, and the three structural isomers of xylene (BTEX) in the concentration range of 0-10 ppb. Additional minor components are methane and methanol with concentrations up to 100 ppm. The sample is injected up to 512 times according to a pseudorandom binary sequence (PRBS) with a mean frequency of 0.1 Hz into a gas chromatograph equipped with a flame ionization detector (FID). A superimposed chromatogram is recorded, which is deconvoluted into an averaged chromatogram by Hadamard transformation. Novel algorithms to maintain the data acquisition rate of the detector under Hadamard transformation and to suppress correlation noise induced by components with much higher concentrations than the target substances are shown. Compared to conventional GC-FID, the signal-to-noise ratio has been increased by a factor of 10 with mpGC-FID. Correspondingly, the detection limits for BTEX in CO2 have been lowered from 10 to 1 ppb each. This has been achieved despite the presence of detectable components (methane and methanol) with concentrations about 1000 times higher than the target substances. The robustness and reliability of mpGC have been proven in a two-month field test in a chemical production plant.
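
    The principle behind PRBS multiplexing can be sketched as follows: the detector records a superposition of time-shifted copies of the single-injection chromatogram, which is a circular convolution of the injection sequence with that chromatogram, so the chromatogram can be recovered by inverting the resulting linear system. This illustration solves the small system directly by Gaussian elimination rather than using the paper's fast Hadamard transformation, and the 7-bit sequence and signal values are invented for the example:

```python
def circ_conv(s, x):
    """Superimposed signal: each '1' in injection sequence s adds a shifted copy of x."""
    n = len(x)
    return [sum(s[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

prbs   = [1, 1, 1, 0, 1, 0, 0]                  # 7-bit maximal-length sequence
true_x = [0.0, 5.0, 1.0, 0.0, 0.0, 2.0, 0.0]    # single-injection chromatogram
y = circ_conv(prbs, true_x)                     # what the detector records
A = [[prbs[(i - j) % 7] for j in range(7)] for i in range(7)]
recovered = solve(A, y)
```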

  11. The global Landsat archive: Status, consolidation, and direction

    USGS Publications Warehouse

    Wulder, Michael A.; White, Joanne C.; Loveland, Thomas; Woodcock, Curtis; Belward, Alan; Cohen, Warren B.; Fosnight, Eugene A.; Shaw, Jerad; Masek, Jeffery G.; Roy, David P.

    2016-01-01

    New and previously unimaginable Landsat applications have been fostered by a policy change in 2008 that made analysis-ready Landsat data free and open access. Since 1972, Landsat has been collecting images of the Earth, with the early years of the program constrained by onboard satellite and ground systems, as well as limitations across the range of required computing, networking, and storage capabilities. Rather than the robust on-satellite storage and high-bandwidth downlink to a centralized storage and distribution facility used by Landsat-8, the early program relied on a network of receiving stations, some operated by the U.S. government and others by a community of International Cooperators (ICs). ICs paid a fee for the right to receive and distribute Landsat data, and over time more Landsat data was held outside the archive of the United States Geological Survey (USGS) than inside it, much of it unique. Recognizing the critical value of these data, the USGS began a Landsat Global Archive Consolidation (LGAC) initiative in 2010 to bring these data into a single, universally accessible, centralized global archive, housed at the Earth Resources Observation and Science (EROS) Center in Sioux Falls, South Dakota. The primary LGAC goals are to inventory the data held by ICs, acquire the data, and ingest and apply standard ground station processing to generate an L1T analysis-ready product. As of January 1, 2015 there were 5,532,454 images in the USGS archive. LGAC has contributed approximately 3.2 million of those images, more than doubling the original USGS archive holdings. Moreover, an additional 2.3 million images have been identified to date through the LGAC initiative and are in the process of being added to the archive. The impact of LGAC is significant and, in terms of images in the collection, analogous to that of having had two additional Landsat-5 missions. As a result of LGAC, there are regions of the globe that now have markedly improved

  12. The GIK-Archive of sediment core radiographs with documentation

    NASA Astrophysics Data System (ADS)

    Grobe, Hannes; Winn, Kyaw; Werner, Friedrich; Driemel, Amelie; Schumacher, Stefanie; Sieger, Rainer

    2017-12-01

    The GIK-Archive of radiographs is a collection of X-ray negative and photographic images of sediment cores based on exposures taken since the early 1960s. During four decades of marine geological work at the University of Kiel, Germany, several thousand hours of sampling, careful preparation and X-raying were spent on producing a unique archive of sediment radiographs from several parts of the World Ocean. The archive consists of more than 18 500 exposures on chemical film that were digitized, geo-referenced, supplemented with metadata and archived in the data library PANGAEA®. With this publication, the images have become available open-access for use by the scientific community at https://doi.org/10.1594/PANGAEA.854841.

  13. Reiterating "Asylum Archive": Documenting Direct Provision in Ireland

    ERIC Educational Resources Information Center

    Nedeljkovic, Vukasin

    2018-01-01

    Originally a coping mechanism for an artist housed in a Direct Provision Centre while seeking asylum in Ireland, "Asylum Archive" has become much more than that. In 2018, it is now a collaborative archive, an interactive and intermedial online document, and a scholarly research project. This iteration includes five new images of Railway…

  14. Trace elements: implications for nursing.

    PubMed

    Hayter, J

    1980-01-01

    Although most were unknown a few years ago, present evidence indicates that at least 25 trace elements have some pertinence to health. Unlike vitamins, they cannot be synthesized. Some trace elements are now considered important only because of their harmful effects but traces of them may be essential. Zinc is especially important during puberty, pregnancy and menopause and is related to protein metabolism. Both fluoride and cadmium accumulate in the body year after year. Cadmium is positively correlated with several chronic diseases, especially hypertension. It is obtained from smoking and drinking soft water. Silicon, generally associated with silicosis, may be necessary for healthy bone and connective tissue. Chromium, believed to be the glucose tolerance factor, is obtained from brewer's yeast, spices, and whole wheat products. Copper deficiency may be implicated in a wide range of cardiovascular and blood related disorders. Either marginal deficiencies or slight excesses of most trace elements are harmful. Nurses should instruct patients to avoid highly refined foods, fad diets, or synthetic and fabricated foods. A well balanced and varied diet is the best safeguard against trace element excesses or deficiencies.

  15. Trace Elements and Healthcare: A Bioinformatics Perspective.

    PubMed

    Zhang, Yan

    2017-01-01

    Biological trace elements are essential for human health. Imbalance in trace element metabolism and homeostasis may play an important role in a variety of diseases and disorders. While the majority of previous research has focused on experimental verification of genes involved in trace element metabolism and those encoding trace element-dependent proteins, bioinformatics study of trace elements is relatively rare and still at an early stage. This chapter offers an overview of recent progress in bioinformatics analyses of trace element utilization, metabolism, and function, especially comparative genomics of several important metals. The relationship between individual elements and several diseases, based on recent large-scale systematic studies such as genome-wide association studies and case-control studies, is discussed. Lastly, developments in ionomics and its recent application to human health are also introduced.

  16. Origin of invasive Florida frogs traced to Cuba

    PubMed Central

    Heinicke, Matthew P.; Diaz, Luis M.; Hedges, S. Blair

    2011-01-01

    Two of the earliest examples of successful invasive amphibians are the greenhouse frog (Eleutherodactylus planirostris) and the Cuban treefrog (Osteopilus septentrionalis) in Florida. Although both are generally assumed to be recent introductions, they are widespread on Caribbean islands and also have been proposed as natural colonizers. We obtained nucleotide sequence data for both species and their closest relatives in their native and introduced ranges. Phylogenetic analyses trace the origin of E. planirostris to a small area in western Cuba, while O. septentrionalis is derived from at least two Cuban sources, one probably a remote peninsula in western Cuba. The tropical-to-temperate invasion began with colonization of the Florida Keys followed by human-mediated dispersal within peninsular Florida. The subtropical Keys may have served as an adaptive stepping stone for the successful invasion of the North American continent. PMID:21270024

  17. Fermilab History and Archives Project | Golden Books - The Early History of

    Science.gov Websites

    Fermilab History and Archives Project, Golden Book Collection: "The Early History of URA and Fermilab: Viewpoint of a URA President (1966-1981)," Norman F

  18. Data archiving and serving system implementation in CLEP's GRAS Core System

    NASA Astrophysics Data System (ADS)

    Zuo, Wei; Zeng, Xingguo; Zhang, Zhoubin; Geng, Liang; Li, Chunlai

    2017-04-01

    The Ground Research & Applications System (GRAS) is one of the five systems of China's Lunar Exploration Project (CLEP). It is responsible for data acquisition, processing, management, and application, and it also serves as the operations control center for in-orbit satellite and payload management. Chang'E-1, Chang'E-2 and Chang'E-3 have collected abundant lunar exploration data. The aim of this work is to present the implementation of data archiving and serving in CLEP's GRAS Core System software. This first approach provides a client-side API and server-side software allowing the creation of a simplified version of the CLEPDB data archiving software, and implements all elements required to complete the data archiving flow from data acquisition to persistent storage. The client side includes all necessary components that run on devices that acquire or produce data, distributing and streaming data to configured remote archiving servers. The server side comprises an archiving service that stores all received data into PDS files. The archiving solution stores data coming from the Data Acquisition Subsystem, the Operation Management Subsystem, the Data Preprocessing Subsystem, and the Scientific Application & Research Subsystem; the serving solution serves data to the various business systems, scientific researchers, and public users. Data-driven and component-clustering methods were adopted in this system: the former solves real-time data archiving and data persistence services, while the latter keeps the archiving and serving capabilities continuously able to support new data from the Chang'E missions, saving software development cost as well.
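
    The archiving flow described, a client-side API on data-producing devices streaming records to a server that wraps them in PDS-labeled products, can be sketched in miniature. All class names, label fields, and identifiers below are hypothetical illustrations, not the GRAS Core System's actual interfaces:

```python
class ArchiveServer:
    """Toy archiving service: wraps each received record in a PDS-like
    plain-text label and stores it keyed by originating subsystem."""
    def __init__(self):
        self.store = {}  # subsystem name -> list of labeled products

    def ingest(self, subsystem, product_id, payload):
        label = (f"PRODUCT_ID = {product_id}\n"
                 f"SUBSYSTEM = {subsystem}\n"
                 f"RECORD_BYTES = {len(payload)}\n"
                 "END\n")
        self.store.setdefault(subsystem, []).append(label + payload)

class ArchiveClient:
    """Client-side API run on devices that acquire or produce data."""
    def __init__(self, server):
        self.server = server

    def send(self, subsystem, product_id, payload):
        self.server.ingest(subsystem, product_id, payload)

server = ArchiveServer()
ArchiveClient(server).send("DataPreprocessing", "CE_PRODUCT_0001", "raw payload bytes")
```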

  19. The Hopkins Ultraviolet Telescope: The Final Archive

    NASA Technical Reports Server (NTRS)

    Dixon, William V.; Blair, William P.; Kruk, Jeffrey W.; Romelfanger, Mary L.

    2013-01-01

    The Hopkins Ultraviolet Telescope (HUT) was a 0.9 m telescope and moderate-resolution (Δλ = 3 Å) far-ultraviolet (820-1850 Å) spectrograph that flew twice on the space shuttle, in 1990 December (Astro-1, STS-35) and 1995 March (Astro-2, STS-67). The resulting spectra were originally archived in a nonstandard format that lacked important descriptive metadata. To increase their utility, we have modified the original data-reduction software to produce a new and more user-friendly data product, a time-tagged photon list similar in format to the Intermediate Data Files (IDFs) produced by the Far Ultraviolet Spectroscopic Explorer calibration pipeline. We have transferred all relevant pointing and instrument-status information from locally-archived science and engineering databases into new FITS header keywords for each data set. Using this new pipeline, we have reprocessed the entire HUT archive from both missions, producing a new set of calibrated spectral products in a modern FITS format that is fully compliant with Virtual Observatory requirements. For each exposure, we have generated quick-look plots of the fully-calibrated spectrum and associated pointing history information. Finally, we have retrieved from our archives HUT TV guider images, which provide information on aperture positioning relative to guide stars, and converted them into FITS-format image files. All of these new data products are available in the new HUT section of the Mikulski Archive for Space Telescopes (MAST), along with historical and reference documents from both missions. In this article, we document the improved data-processing steps applied to the data and show examples of the new data products.

  20. The Hopkins Ultraviolet Telescope: The Final Archive

    NASA Astrophysics Data System (ADS)

    Dixon, William V.; Blair, William P.; Kruk, Jeffrey W.; Romelfanger, Mary L.

    2013-04-01

    The Hopkins Ultraviolet Telescope (HUT) was a 0.9 m telescope and moderate-resolution (Δλ = 3 Å) far-ultraviolet (820-1850 Å) spectrograph that flew twice on the space shuttle, in 1990 December (Astro-1, STS-35) and 1995 March (Astro-2, STS-67). The resulting spectra were originally archived in a nonstandard format that lacked important descriptive metadata. To increase their utility, we have modified the original data-reduction software to produce a new and more user-friendly data product, a time-tagged photon list similar in format to the Intermediate Data Files (IDFs) produced by the Far Ultraviolet Spectroscopic Explorer calibration pipeline. We have transferred all relevant pointing and instrument-status information from locally-archived science and engineering databases into new FITS header keywords for each data set. Using this new pipeline, we have reprocessed the entire HUT archive from both missions, producing a new set of calibrated spectral products in a modern FITS format that is fully compliant with Virtual Observatory requirements. For each exposure, we have generated quick-look plots of the fully-calibrated spectrum and associated pointing history information. Finally, we have retrieved from our archives HUT TV guider images, which provide information on aperture positioning relative to guide stars, and converted them into FITS-format image files. All of these new data products are available in the new HUT section of the Mikulski Archive for Space Telescopes (MAST), along with historical and reference documents from both missions. In this article, we document the improved data-processing steps applied to the data and show examples of the new data products.

  1. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Sayers, Eric W.; Barrett, Tanya; Benson, Dennis A.; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M.; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D.; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A.; Wagner, Lukas; Wang, Yanli; Wilbur, W. John; Yaschenko, Eugene; Ye, Jian

    2012-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:22140104

  2. Database resources of the National Center for Biotechnology Information.

    PubMed

    Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; Dicuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian

    2012-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.

  3. Database resources of the National Center for Biotechnology Information

    PubMed Central

    2013-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page. PMID:23193264

  4. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Acland, Abigail; Agarwala, Richa; Barrett, Tanya; Beck, Jeff; Benson, Dennis A.; Bollin, Colleen; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Church, Deanna M.; Clark, Karen; DiCuccio, Michael; Dondoshansky, Ilya; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Gorelenkov, Viatcheslav; Hoeppner, Marilu; Johnson, Mark; Kelly, Christopher; Khotomlianski, Viatcheslav; Kimchi, Avi; Kimelman, Michael; Kitts, Paul; Krasnov, Sergey; Kuznetsov, Anatoliy; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Karsch-Mizrachi, Ilene; Murphy, Terence; Ostell, James; O'Sullivan, Christopher; Panchenko, Anna; Phan, Lon; Preuss, Don; Pruitt, Kim D.; Rubinstein, Wendy; Sayers, Eric W.; Schneider, Valerie; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Siyan, Karanjit; Slotta, Douglas; Soboleva, Alexandra; Soussov, Vladimir; Starchenko, Grigory; Tatusova, Tatiana A.; Trawick, Bart W.; Vakatov, Denis; Wang, Yanli; Ward, Minghong; Wilbur, W. John; Yaschenko, Eugene; Zbicz, Kerry

    2014-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, PubReader, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Primer-BLAST, COBALT, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, ClinVar, MedGen, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All these resources can be accessed through the NCBI home page. PMID:24259429

  5. Trace Elements in Ovaries: Measurement and Physiology.

    PubMed

    Ceko, Melanie J; O'Leary, Sean; Harris, Hugh H; Hummitzsch, Katja; Rodgers, Raymond J

    2016-04-01

    Traditionally, research in the field of trace element biology and human and animal health has largely depended on epidemiological methods to demonstrate involvement in biological processes. These studies were typically followed by trace element supplementation trials or attempts at identification of the biochemical pathways involved. With the discovery of biological molecules that contain the trace elements, such as matrix metalloproteinases containing zinc (Zn), cytochrome P450 enzymes containing iron (Fe), and selenoproteins containing selenium (Se), much of the current research focuses on these molecules and, hence, only indirectly on the trace elements themselves. This review focuses largely on two synchrotron-based X-ray techniques, X-ray absorption spectroscopy and X-ray fluorescence imaging, that can be used to identify the in situ speciation and distribution of trace elements in tissues, as illustrated by our recent studies of bovine ovaries, in which the distributions of Fe, Se, Zn, and bromine were determined. It also discusses the value of other techniques, such as inductively coupled plasma mass spectrometry, used to garner information about the concentrations and elemental state of the trace elements. These applications, which measure trace element distributions in bovine ovaries at high resolution, provide new insights into possible roles for trace elements in the ovary. © 2016 by the Society for the Study of Reproduction, Inc.

  6. An R package for state-trace analysis.

    PubMed

    Prince, Melissa; Hawkins, Guy; Love, Jonathon; Heathcote, Andrew

    2012-09-01

    State-trace analysis (Bamber, Journal of Mathematical Psychology, 19, 137-181, 1979) is a graphical analysis that can determine whether one or more than one latent variable mediates an apparent dissociation between the effects of two experimental manipulations. State-trace analysis makes only ordinal assumptions and so is not confounded by the range effects that plague alternative methods, especially when performance is measured on a bounded scale (such as accuracy). We describe and illustrate the application of a freely available GUI-driven package, StateTrace, for the R language. StateTrace automates many aspects of a state-trace analysis of accuracy and other binary response data, including customizable graphics and the efficient management of computationally intensive Bayesian methods for quantifying evidence about the outcomes of a state-trace experiment, developed by Prince, Brown, and Heathcote (Psychological Methods, 17, 78-99, 2012).
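The ordinal core of state-trace analysis is easy to illustrate outside R. The sketch below is ours, not part of the StateTrace package: it checks whether a set of state-trace points (one dependent measure plotted against the other) is consistent with a single latent variable, i.e. whether the points can be ordered so that both measures are non-decreasing together.

```python
def monotone_state_trace(points):
    """Return True if the state-trace points are consistent with a single
    latent variable, i.e. they can be ordered so that both dependent
    measures are non-decreasing together (a monotone trace)."""
    ordered = sorted(points)            # order by the first measure, then the second
    ys = [y for _, y in ordered]
    return all(y1 <= y2 for y1, y2 in zip(ys, ys[1:]))
```

A monotone point set supports a one-variable account; a non-monotone one indicates that more than one latent variable is at work.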

  7. Stand-off detection of trace explosives by infrared photothermal imaging

    NASA Astrophysics Data System (ADS)

    Papantonakis, Michael R.; Kendziora, Chris; Furstenberg, Robert; Stepnowski, Stanley V.; Rake, Matthew; Stepnowski, Jennifer; McGill, R. Andrew

    2009-05-01

    We have developed a technique for the stand-off detection of trace explosives using infrared photothermal imaging. In this approach, infrared quantum cascade lasers tuned to strong vibrational absorption bands of the explosive particles illuminate a surface of interest, preferentially heating the explosives material. An infrared focal plane array is used to image the surface and detect a small increase in the thermal intensity upon laser illumination. We have demonstrated the technique using TNT and RDX residues at several meters of stand-off distance under laboratory conditions, while operating the lasers below the eye-safe intensity limit. Sensitivity to explosives traces as small as a single grain (~100 ng) of TNT has been demonstrated using an uncooled bolometer array. We show the viability of this approach on a variety of surfaces which transmit, reflect or absorb the infrared laser light and have a range of thermal conductivities. By varying the incident wavelength slightly, we demonstrate selectivity between TNT and RDX. Using a sequence of lasers at different wavelengths, we increase both sensitivity and selectivity while reducing the false alarm rate. At higher energy levels we also show it is possible to generate vapor from solid materials with inherently low vapor pressures.
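The detection principle described above amounts to difference imaging between laser-on and laser-off thermal frames. The following sketch is a simplification of ours (the function name and the single fixed threshold are assumptions, not the authors' processing chain): it flags pixels whose photothermal temperature rise under illumination exceeds a detection threshold.

```python
def photothermal_contrast(frame_on, frame_off, threshold):
    """Subtract the laser-off thermal frame from the laser-on frame and
    flag pixels whose temperature rise exceeds the detection threshold.
    Returns the difference image and the flagged pixel coordinates."""
    diff = [[on - off for on, off in zip(row_on, row_off)]
            for row_on, row_off in zip(frame_on, frame_off)]
    hits = {(r, c) for r, row in enumerate(diff)
            for c, v in enumerate(row) if v > threshold}
    return diff, hits
```

Tuning the laser to a vibrational band of one explosive and not another is what turns this thresholded difference image into a selective detector.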

  8. National Satellite Land Remote Sensing Data Archive

    USGS Publications Warehouse

    Faundeen, John L.; Kelly, Francis P.; Holm, Thomas M.; Nolt, Jenna E.

    2013-01-01

    The National Satellite Land Remote Sensing Data Archive (NSLRSDA) resides at the U.S. Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center. Through the Land Remote Sensing Policy Act of 1992, the U.S. Congress directed the Department of the Interior (DOI) to establish a permanent Government archive containing satellite remote sensing data of the Earth's land surface and to make this data easily accessible and readily available. This unique DOI/USGS archive provides a comprehensive, permanent, and impartial observational record of the planet's land surface obtained throughout more than five decades of satellite remote sensing. Satellite-derived data and information products are primary sources used to detect and understand changes such as deforestation, desertification, agricultural crop vigor, water quality, invasive plant species, and certain natural hazards such as flood extent and wildfire scars.

  9. Bioinformatic Characterization of Genes and Proteins Involved in Blood Clotting in Lampreys.

    PubMed

    Doolittle, Russell F

    2015-10-01

    Lampreys and hagfish are the earliest diverging of extant vertebrates and are obvious targets for investigating the origins of complex biochemical systems found in mammals. Currently, the simplest approach for such inquiries is to search for the presence of relevant genes in whole genome sequence (WGS) assemblies. Unhappily, until recently a high-quality complete genome sequence was not available for either lampreys or hagfish, precluding the possibility of proving gene absence. Recently, improved but still incomplete genome assemblies for two species of lamprey have been posted, and, taken together with an extensive collection of short sequences in the NCBI trace archive, they have made it possible to make reliable counts for specific gene families. In particular, a multi-source tactic has been used to study the lamprey blood clotting system with regard to the presence and absence of genes known to occur in higher vertebrates. As was suggested in earlier studies, lampreys lack genes for coagulation factors VIII and IX, both of which are critical for the "intrinsic" clotting system and responsible for hemophilia in humans. On the other hand, they have three each of genes for factors VII and X, participants in the "extrinsic" clotting system. The strategy of using raw trace sequence "reads" together with partial WGS assemblies for lampreys can be used in studies on the early evolution of other biochemical systems in vertebrates.

  10. Mars Observer data production, transfer, and archival: The data production assembly line

    NASA Technical Reports Server (NTRS)

    Childs, David B.

    1993-01-01

    This paper describes the data production, transfer, and archival process designed for the Mars Observer Flight Project. It addresses the developmental and operational aspects of the archive collection production process. The developmental aspects cover the design and packaging of data products for archival and distribution to the planetary community. Also discussed is the design and development of a data transfer and volume production process capable of handling the large throughput and complexity of the Mars Observer data products. The operational aspects cover the main functions of the process: creating data and engineering products, collecting the data products and ancillary products in a central repository, producing archive volumes, validating volumes, archiving, and distributing the data to the planetary community.

  11. Forensic trace DNA: a review

    PubMed Central

    2010-01-01

    DNA analysis is frequently used to acquire information from biological material to aid enquiries associated with criminal offences, disaster victim identification and missing persons investigations. As the relevance and value of DNA profiling to forensic investigations has increased, so too has the desire to generate this information from smaller amounts of DNA. Trace DNA samples may be defined as any sample which falls below recommended thresholds at any stage of the analysis, from sample detection through to profile interpretation, and cannot be defined by a precise picogram amount. Here we review aspects associated with the collection, DNA extraction, amplification, profiling and interpretation of trace DNA samples. Contamination and transfer issues are also briefly discussed within the context of trace DNA analysis. Whilst several methodological changes have facilitated profiling from trace samples in recent years, it is also clear that many opportunities exist for further improvements. PMID:21122102

  12. A simple algorithm for quantifying DNA methylation levels on multiple independent CpG sites in bisulfite genomic sequencing electropherograms.

    PubMed

    Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A

    2008-06-01

    DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
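The underlying arithmetic of such electropherogram-based quantification can be sketched as follows. In bisulfite-converted DNA, methylated cytosines still read as C while unmethylated ones read as T, so a first-order estimate of percent methylation at a CpG site is the C peak height over the combined C+T peak height. This is a simplification of ours; Mquant's actual handling of electropherogram normalization is more involved.

```python
def methylation_level(c_peak, t_peak):
    """Estimate percent methylation at one CpG site from the cytosine and
    thymine peak heights in a bisulfite-sequencing electropherogram:
    methylated C stays C, unmethylated C reads as T after PCR."""
    total = c_peak + t_peak
    if total == 0:
        raise ValueError("no signal at this site")
    return 100.0 * c_peak / total
```

Applied independently at each CpG site in the trace, this yields a per-site methylation profile from a single conventional sequencing run.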

  13. Obstacles to the Access, Use and Transfer of Information from Archives: A RAMP Study.

    ERIC Educational Resources Information Center

    Duchein, Michel

    This publication reviews means of access to information contained in the public archives (current administrative documents and archival records) and private archives (manuscripts of personal or family origin) of many countries and makes recommendations for improving access to archival information. Sections describe: (1) the origin and development…

  14. Trace metal (Mg/Ca and Sr/Ca) analyses of single coccoliths by Secondary Ion Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Prentice, Katy; Jones, Tom Dunkley; Lees, Jackie; Young, Jeremy; Bown, Paul; Langer, Gerald; Fearn, Sarah; EIMF

    2014-12-01

    Here we present the first multi-species comparison of modern and fossil coccolith trace metal data obtained from single liths. We present both trace metal analyses (Sr, Ca, Mg and Al) and distribution maps of individual Paleogene fossil coccoliths obtained by Secondary Ion Mass Spectrometry (SIMS). We use these data to determine the effects of variable coccolith preservation and diagenetic calcite overgrowths on the recorded concentrations of strontium and magnesium in coccolith calcite. The analysis of coccoliths from deep-ocean sediments spanning the Eocene/Oligocene transition demonstrates that primary coccolith calcite is resistant to the neomorphism that is common in planktonic foraminifera from similar depositional environments. Instead, where present, diagenetic calcite forms distinct overgrowths on primary coccolith calcite rather than replacing it. Diagenetic overgrowths on coccoliths are easily distinguished in SIMS analyses on the basis of relatively higher Mg and lower Sr concentrations than co-occurring primary coccolith calcite. This interpretation is confirmed by comparable SIMS analyses of modern cultured coccoliths of Coccolithus braarudii. Further, with diagenetic calcite overgrowth being the principal source of bias in coccolith-based geochemical records, we infer that lithologies with lower carbonate content, deposited below the palaeo-lysocline, are more likely to produce geochemical records dominated by primary coccolith calcite than carbonate-rich sediments where overgrowth is ubiquitous. The preservation of primary coccolith carbonate in low-carbonate lithologies thus provides a reliable geochemical archive where planktonic foraminifera are absent or have undergone neomorphism.

  15. Discourse Tracing as Qualitative Practice

    ERIC Educational Resources Information Center

    LeGreco, Marianne; Tracy, Sarah J.

    2009-01-01

    This article introduces a qualitative research method called "discourse tracing". Discourse tracing draws from contributions made by ethnographers, discourse critics, case study scholars, and process tracers. The approach offers new insights and an attendant language about how we engage in research designed specifically for the…

  16. cuBLASTP: Fine-Grained Parallelization of Protein Sequence Search on CPU+GPU.

    PubMed

    Zhang, Jing; Wang, Hao; Feng, Wu-Chun

    2017-01-01

    BLAST, short for Basic Local Alignment Search Tool, is a ubiquitous tool used in the life sciences for pairwise sequence search. However, with the advent of next-generation sequencing (NGS), whether at the outset or downstream from NGS, the exponential growth of sequence databases is outstripping our ability to analyze the data. While recent studies have utilized the graphics processing unit (GPU) to speed up the BLAST algorithm for searching protein sequences (i.e., BLASTP), these studies use coarse-grained parallelism, where one sequence alignment is mapped to only one thread. Such an approach does not efficiently utilize the capabilities of a GPU, particularly due to the irregularity of BLASTP in both execution paths and memory-access patterns. To address the above shortcomings, we present a fine-grained approach to parallelizing BLASTP, where each individual phase of sequence search is mapped to many threads on a GPU. This approach, which we refer to as cuBLASTP, reorders data-access patterns and reduces divergent branches in the most time-consuming phases (i.e., hit detection and ungapped extension). In addition, cuBLASTP optimizes the remaining phases (i.e., gapped extension and alignment with trace back) on a multicore CPU and overlaps their execution with the phases running on the GPU.
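The hit-detection phase that cuBLASTP parallelizes can be sketched in a few lines. This toy version is ours and uses exact word matches, whereas real BLASTP scores neighborhood words against a substitution matrix; the point is that each subject-word lookup is independent of the others, which is what makes mapping individual lookups to many GPU threads attractive.

```python
def word_hits(query, subject, w=3):
    """Hit detection, the first BLASTP phase: index every length-w word of
    the query, then scan the subject and report (query_pos, subject_pos)
    pairs where identical words occur."""
    index = {}
    for i in range(len(query) - w + 1):
        index.setdefault(query[i:i + w], []).append(i)
    hits = []
    for j in range(len(subject) - w + 1):
        for i in index.get(subject[j:j + w], []):
            hits.append((i, j))
    return hits
```

In BLAST proper, these hits seed the ungapped-extension phase, the other hotspot the paper moves to fine-grained GPU threads.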

  17. Interim report on Landsat national archive activities

    NASA Technical Reports Server (NTRS)

    Boyd, John E.

    1993-01-01

    The Department of the Interior (DOI) has the responsibility to preserve and to distribute most Landsat Thematic Mapper (TM) and Multispectral Scanner (MSS) data that have been acquired by the five Landsat satellites operational since July 1972. Data that are still covered by exclusive marketing rights, which were granted by the U.S. Government to the commercial Landsat operator, cannot be distributed by the DOI. As the designated national archive for Landsat data, the U.S. Geological Survey's EROS Data Center (EDC) has initiated two new programs to protect and make available any of the 625,000 MSS scenes currently archived and the 200,000 TM scenes to be archived at EDC by 1995. A specially configured system has begun converting Landsat MSS data from obsolete high density tapes (HDT's) to more dense digital cassette tapes. After transcription, continuous satellite swaths are (1) divided into standard scenes defined by a world reference system, (2) geographically located by latitude and longitude, and (3) assessed for overall quality. Digital browse images are created by subsampling the full-resolution swaths. Conversion of the TM HDT's will begin in the fourth quarter of 1992 and will be conducted concurrently with MSS conversion. Although the TM archive is three times larger than the entire MSS archive, conversion of data from both sensor systems and consolidation of the entire Landsat archive at EDC will be completed by the end of 1994. Some MSS HDT's have deteriorated, primarily as a result of hydrolysis of the pigment binder. Based on a small sample of the 11 terabytes of post-1978 MSS data and the 41 terabytes of TM data to be converted, it appears that to date, less than 2 percent of the data have been lost. The data loss occurs within small portions of some scenes; few scenes are lost entirely. 
Approximately 10,000 pre-1979 MSS HDT's have deteriorated to such an extent, as a result of hydrolysis, that the data cannot be recovered without special treatment of

  18. SysML model of exoplanet archive functionality and activities

    NASA Astrophysics Data System (ADS)

    Ramirez, Solange

    2016-08-01

    The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the exoplanet archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will present a SysML model that helps us perform these update activities on a weekly basis.

  19. Development of public science archive system of Subaru Telescope

    NASA Astrophysics Data System (ADS)

    Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatochi; Watanabe, Masaru; Okumura, Shin-Ichiro; Ozawa, Tomohiko; Yamamoto, Naotaka; Hamabe, Masaru

    2002-09-01

    We have developed a public science archive system, the Subaru-Mitaka-Okayama-Kiso Archive system (SMOKA), as a successor to the Mitaka-Okayama-Kiso Archive (MOKA) system. SMOKA provides access to the public data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory of the University of Tokyo. Since 1997, we have worked to compile a dictionary of FITS header keywords. The completion of the dictionary enabled us to construct a unified public archive of the data obtained with various instruments at these telescopes. SMOKA has two kinds of user interfaces: Simple Search and Advanced Search. Novices can search data by simply selecting the name of the target with the Simple Search interface, while experts can set detailed constraints on the query using the Advanced Search interface. In order to improve the efficiency of searching, several new features have been implemented, such as archive status plots, calibration data search, an annotation system, and an improved Quick Look Image browsing system. We can efficiently develop and operate SMOKA by adopting a three-tier model for the system. Java servlets and Java Server Pages (JSP) are useful for separating the front-end presentation from the middle and back-end tiers.
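The role of such a FITS keyword dictionary can be illustrated with a toy mapping. The instrument names and keyword choices below are hypothetical, not SMOKA's actual dictionary; the idea is that each instrument's native header keywords are translated into one unified vocabulary so a single query spans all telescopes.

```python
# Hypothetical per-instrument keyword dictionaries (illustrative only).
KEYWORD_DICT = {
    "SUPRIMECAM": {"object": "OBJECT", "exptime": "EXPTIME", "filter": "FILTER01"},
    "KISO_SCHMIDT": {"object": "OBJNAME", "exptime": "EXPOSURE", "filter": "FLTNAME"},
}

def unify_header(instrument, header):
    """Translate an instrument-specific FITS header into the unified
    keyword set used for cross-instrument archive searches."""
    mapping = KEYWORD_DICT[instrument]
    return {common: header[native]
            for common, native in mapping.items() if native in header}
```

Once every ingested header passes through such a mapping, a single "object name + exposure time" query can be answered uniformly across heterogeneous instruments.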

  20. User interface development and metadata considerations for the Atmospheric Radiation Measurement (ARM) archive

    NASA Technical Reports Server (NTRS)

    Singley, P. T.; Bell, J. D.; Daugherty, P. F.; Hubbs, C. A.; Tuggle, J. G.

    1993-01-01

    This paper will discuss user interface development and the structure and use of metadata for the Atmospheric Radiation Measurement (ARM) Archive. The ARM Archive, located at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is the data repository for the U.S. Department of Energy's (DOE's) ARM Project. After a short description of the ARM Project and the ARM Archive's role, we will consider the philosophy and goals, constraints, and prototype implementation of the user interface for the archive. We will also describe the metadata that are stored at the archive and support the user interface.

  1. Blood-collection device for trace and ultra-trace metal specimens evaluated.

    PubMed

    Moyer, T P; Mussmann, G V; Nixon, D E

    1991-05-01

    We evaluated the evacuated phlebotomy tube designed specifically for trace metal analysis by Sherwood Medical Co. Pools of human serum containing known concentrations of aluminum, arsenic, calcium, cadmium, copper, chromium, iron, lead, magnesium, manganese, mercury, selenium, and zinc were exposed to the tube and rubber stopper for defined periods ranging from 5 min to 24 h. Analysis for each element was performed in a randomized fashion under rigidly controlled conditions by use of standard electrothermal atomization atomic absorption spectroscopy, inductively coupled plasma atomic emission spectroscopy, and cold vapor atomic absorption spectrometry. In addition, for comparative purposes, we collected blood samples from normal volunteers by use of ultra-clean polystyrene phlebotomy syringes as well as standard evacuated phlebotomy tubes. We conclude that, except for lead, there was no significant contribution of any trace element studied from the evaluated tube and stopper to the serum. Because whole blood is the usual specimen for lead testing, the observation of a trace amount of lead in this tube designed for serum collection is trivial.

  2. Material of Geographic Import in the National Anthropological Archives.

    ERIC Educational Resources Information Center

    Glenn, James R.

    Presenting specific examples of the manuscripts, cartographic materials, and pictorial materials found in the National Anthropological Archives, this paper describes Archive holdings (in such areas as archeology, linguistics, physical anthropology, and various branches of ethnology) which are dated from 1850 to the present and are representative…

  3. EpiContactTrace: an R-package for contact tracing during livestock disease outbreaks and for risk-based surveillance.

    PubMed

    Nöremark, Maria; Widgren, Stefan

    2014-03-17

    During outbreaks of livestock diseases, contact tracing can be an important part of disease control. Animal movements can also be relevant for risk-based surveillance and sampling, i.e. when assessing either the consequences or the likelihood of introduction. In many countries, animal movement data are collected with contact tracing as one of the major objectives. However, an analytical step is often needed to retrieve the appropriate information for contact tracing or surveillance. In this study, an open-source tool was developed to structure livestock movement data to facilitate contact tracing in real time during disease outbreaks and to provide input for risk-based surveillance and sampling. The tool, EpiContactTrace, was written in the R language and uses the network parameters in-degree, out-degree, ingoing contact chain and outgoing contact chain (also called infection chain), which are relevant for backward and forward tracing. The time frames for backward and forward tracing can be specified independently, and searches can be done on one farm at a time or for all farms within the dataset. Different outputs are available: datasets with network measures, contacts visualised on a map, and automatically generated reports for each farm in HTML or PDF format intended for the end users, i.e. the veterinary authorities, regional disease control officers and field veterinarians. EpiContactTrace is available as an R package at the R-project website (http://cran.r-project.org/web/packages/EpiContactTrace/). We believe this tool can help in disease control since it can rapidly structure essential contact information from large datasets. The reproducible reports make this tool robust and independent of manual compilation of data. The open-source code makes it accessible and easily adaptable for different needs.
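The ingoing-contact-chain computation at the heart of backward tracing can be sketched as follows. This is a simplified Python rendering of ours; EpiContactTrace itself is an R package and handles far more detail. The key point is the temporal ordering: an indirect contact counts only if the earlier movement occurred on or before the movement it feeds into.

```python
def ingoing_contact_chain(movements, root, t_start, t_end):
    """Backward tracing: holdings from which animals could have reached
    `root`, directly or indirectly, via movements given as (src, dst, day)
    tuples inside the tracing window [t_start, t_end]."""
    frontier = [(root, t_end)]
    chain = set()
    while frontier:
        node, latest = frontier.pop()
        for src, dst, day in movements:
            # An incoming movement is relevant only if it happened inside
            # the window and no later than the movement it could feed into.
            if dst == node and t_start <= day <= latest and src not in chain and src != root:
                chain.add(src)
                frontier.append((src, day))
    return chain
```

With movements A→B on day 3, B→C on day 5 and D→B on day 7, backward tracing from C correctly includes A and B but excludes D, whose movement into B came after B's animals had already left for C.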

  4. EpiContactTrace: an R-package for contact tracing during livestock disease outbreaks and for risk-based surveillance

    PubMed Central

    2014-01-01

    Background: During outbreaks of livestock diseases, contact tracing can be an important part of disease control. Animal movements can also be relevant for risk-based surveillance and sampling, i.e. when assessing either the consequences or the likelihood of introduction. In many countries, animal movement data are collected with contact tracing as one of the major objectives. However, an analytical step is often needed to retrieve the appropriate information for contact tracing or surveillance. Results: In this study, an open-source tool was developed to structure livestock movement data to facilitate contact tracing in real time during disease outbreaks and to provide input for risk-based surveillance and sampling. The tool, EpiContactTrace, was written in the R language and uses the network parameters in-degree, out-degree, ingoing contact chain and outgoing contact chain (also called infection chain), which are relevant for backward and forward tracing. The time frames for backward and forward tracing can be specified independently, and searches can be done on one farm at a time or for all farms within the dataset. Different outputs are available: datasets with network measures, contacts visualised on a map, and automatically generated reports for each farm in HTML or PDF format intended for the end users, i.e. the veterinary authorities, regional disease control officers and field veterinarians. EpiContactTrace is available as an R package at the R-project website (http://cran.r-project.org/web/packages/EpiContactTrace/). Conclusions: We believe this tool can help in disease control since it can rapidly structure essential contact information from large datasets. The reproducible reports make this tool robust and independent of manual compilation of data. The open-source code makes it accessible and easily adaptable for different needs. PMID:24636731

  5. Distributed trace using central performance counter memory

    DOEpatents

    Satterfield, David L; Sexton, James C

    2013-10-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores it in the memory co-located with the central storage unit.
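The patented layout can be illustrated with a toy model (ours; the class, names and hop counting are illustrative only, not the patent's implementation): cores and the central storage unit sit on a ring, and trace data placed on the chain by a core is forwarded hop by hop until the central unit detects and stores it.

```python
class DaisyChainRing:
    """Toy model of the layout: cores at positions 0..n-1 and a central
    trace-storage unit at position n share one daisy chain ring."""
    def __init__(self, n_cores):
        self.n_cores = n_cores
        self.memory = []                 # central unit's co-located memory

    def emit(self, core_id, record):
        """A core places trace data on the chain; each hop forwards it
        around the ring until the central unit stores it. Returns hops."""
        hops = 0
        pos = core_id
        while True:
            pos = (pos + 1) % (self.n_cores + 1)
            hops += 1
            if pos == self.n_cores:      # reached the central storage unit
                self.memory.append((core_id, record))
                return hops
```

The appeal of the ring is wiring simplicity: every core needs only a link to its neighbor, yet all trace data funnels into one central memory.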

  6. Distributed trace using central performance counter memory

    DOEpatents

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores it in the memory co-located with the central storage unit.

  7. New Archiving Distributed InfrastructuRe (NADIR): Status and Evolution

    NASA Astrophysics Data System (ADS)

    De Marco, M.; Knapic, C.; Smareglia, R.

    2015-09-01

    The New Archiving Distributed InfrastructuRe (NADIR) has been developed at INAF-OATs IA2 (Italian National Institute for Astrophysics - Astronomical Observatory of Trieste, Italian center of Astronomical Archives) as an evolution of the previous archiving and distribution system used on several telescopes (LBT, TNG, Asiago, etc.) to improve performance, efficiency and reliability. At present, the NADIR system is running on the LBT telescope and Vespa (Italian telescopes network for outreach) Ramella et al. (2014), and will be used on the TNG, Asiago and IRA (Istituto Radio Astronomia) archives of the Medicina, Noto and SRT radio telescopes Zanichelli et al. (2014) once the data models for radio data are ready. This paper discusses the progress status, the architectural choices and the solutions adopted during the development and commissioning phases of the project. Special attention is given to the LBT case, due to some critical aspects of the data flow and the policy and standards compliance adopted by the LBT organization.

  8. FBIS: A regional DNA barcode archival & analysis system for Indian fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar

    2012-01-01

    DNA barcoding is a new tool for taxon recognition and classification of biological organisms based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed a Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP under the Linux operating platform to (a) store and manage the acquired data, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about the database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system in fish molecular taxonomy, phylogeny and genomics. The database is available for free at http://mail.nbfgr.res.in/fbis/
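The genetic-divergence estimate that barcode systems of this kind report is typically an uncorrected pairwise distance between aligned COI sequences. A minimal sketch (ours; the abstract does not specify which distance FBIS computes):

```python
def p_distance(seq1, seq2):
    """Uncorrected pairwise genetic distance between two aligned barcode
    sequences: the fraction of compared sites that differ (gaps skipped)."""
    diffs = sites = 0
    for a, b in zip(seq1, seq2):
        if a == "-" or b == "-":
            continue                 # skip alignment gaps
        sites += 1
        diffs += a != b
    if sites == 0:
        raise ValueError("no comparable sites")
    return diffs / sites
```

In barcoding practice, intraspecific distances are usually far smaller than interspecific ones, which is what makes a COI distance useful for species identification.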

  9. FBIS: A regional DNA barcode archival & analysis system for Indian fishes

    PubMed Central

    Nagpure, Naresh Sahebrao; Rashid, Iliyas; Pathak, Ajey Kumar; Singh, Mahender; Singh, Shri Prakash; Sarkar, Uttam Kumar

    2012-01-01

    DNA barcoding is a new tool for taxon recognition and classification of biological organisms based on the sequence of a fragment of the mitochondrial gene cytochrome c oxidase I (COI). In view of the growing importance of fish DNA barcoding for species identification, molecular taxonomy and fish diversity conservation, we developed a Fish Barcode Information System (FBIS) for Indian fishes, which will serve as a regional DNA barcode archival and analysis system. The database presently contains 2334 sequence records of the COI gene for 472 aquatic species belonging to 39 orders and 136 families, collected from available published data sources. Additionally, it contains information on the phenotype, distribution and IUCN Red List status of fishes. The web version of FBIS was designed using MySQL, Perl and PHP under the Linux operating platform to (a) store and manage the acquired data, (b) analyze and explore DNA barcode records, and (c) identify species and estimate genetic divergence. FBIS has also been integrated with appropriate tools for retrieving and viewing information about the database statistics and taxonomy. It is expected that FBIS will be useful as a potent information system in fish molecular taxonomy, phylogeny and genomics. Availability: The database is available for free at http://mail.nbfgr.res.in/fbis/ PMID:22715304

  10. The LCOGT Science Archive and Data Pipeline

    NASA Astrophysics Data System (ADS)

    Lister, Tim; Walker, Z.; Ciardi, D.; Gelino, C. R.; Good, J.; Laity, A.; Swain, M.

    2013-01-01

    Las Cumbres Observatory Global Telescope (LCOGT) is building and deploying a world-wide network of optical telescopes dedicated to time-domain astronomy. In the past year, we have deployed and commissioned four new 1m telescopes at McDonald Observatory, Texas and at CTIO, Chile, with more to come at SAAO, South Africa and Siding Spring Observatory, Australia. To handle these new data sources coming from the growing LCOGT network, and to serve them to end users, we have constructed a new data pipeline and Science Archive. We describe the new LCOGT pipeline, currently under development and testing, which makes use of the ORAC-DR automated recipe-based data reduction pipeline and illustrate some of the new data products. We also present the new Science Archive, which is being developed in partnership with the Infrared Processing and Analysis Center (IPAC) and show some of the new features the Science Archive provides.

  11. Core Genome Multilocus Sequence Typing Scheme for High-Resolution Typing of Enterococcus faecium

    PubMed Central

    de Been, Mark; Pinholt, Mette; Top, Janetta; Bletz, Stefan; van Schaik, Willem; Brouwer, Ellen; Rogers, Malbert; Kraat, Yvette; Bonten, Marc; Corander, Jukka; Westh, Henrik; Harmsen, Dag

    2015-01-01

    Enterococcus faecium, a common inhabitant of the human gut, has emerged in the last 2 decades as an important multidrug-resistant nosocomial pathogen. Since the start of the 21st century, multilocus sequence typing (MLST) has been used to study the molecular epidemiology of E. faecium. However, due to the use of a small number of genes, the resolution of MLST is limited. Whole-genome sequencing (WGS) now allows for high-resolution tracing of outbreaks, but current WGS-based approaches lack standardization, rendering them less suitable for interlaboratory prospective surveillance. To overcome this limitation, we developed a core genome MLST (cgMLST) scheme for E. faecium. cgMLST transfers genome-wide single nucleotide polymorphism (SNP) diversity into a standardized and portable allele numbering system that is far less computationally intensive than SNP-based analysis of WGS data. The E. faecium cgMLST scheme was built using 40 genome sequences that represented the diversity of the species. The scheme consists of 1,423 cgMLST target genes. To test the performance of the scheme, we performed WGS analysis of 103 outbreak isolates from five different hospitals in the Netherlands, Denmark, and Germany. The cgMLST scheme performed well in distinguishing between epidemiologically related and unrelated isolates, even between those that had the same sequence type (ST), which denotes the higher discriminatory power of this cgMLST scheme over that of conventional MLST. We also show that in terms of resolution, the performance of the E. faecium cgMLST scheme is equivalent to that of an SNP-based approach. In conclusion, the cgMLST scheme developed in this study facilitates rapid, standardized, and high-resolution tracing of E. faecium outbreaks. PMID:26400782
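The allele-numbering idea behind cgMLST can be sketched as follows. This is a simplification of ours: real schemes assign allele numbers through a curated nomenclature server rather than a raw hash, but hashing each target-gene sequence to a stable number shows why cgMLST profiles are portable between laboratories and cheap to compare.

```python
import hashlib

def cgmlst_profile(genome_alleles):
    """Assign a stable allele number to each cgMLST target gene by hashing
    its sequence; identical sequences map to identical numbers anywhere."""
    return {gene: int(hashlib.sha256(seq.encode()).hexdigest()[:8], 16)
            for gene, seq in genome_alleles.items()}

def allele_distance(profile_a, profile_b):
    """Number of shared target genes with differing allele numbers; related
    outbreak isolates show few differences, unrelated ones show many."""
    shared = profile_a.keys() & profile_b.keys()
    return sum(profile_a[g] != profile_b[g] for g in shared)
```

Comparing two isolates then reduces to counting differing allele numbers over the ~1,400 shared targets, which is far cheaper than a full SNP analysis of the raw reads.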

  12. A Tale of Two Archives: PDS3/PDS4 Archiving and Distribution of Juno Mission Data

    NASA Astrophysics Data System (ADS)

    Stevenson, Zena; Neakrase, Lynn; Huber, Lyle; Chanover, Nancy J.; Beebe, Reta F.; Sweebe, Kathrine; Johnson, Joni J.

    2017-10-01

    The Juno mission to Jupiter, which was launched on 5 August 2011 and arrived at the Jovian system in July 2016, represents the last mission to be officially archived under the PDS3 archive standards. Modernization and availability of the newer PDS4 archive standard has prompted the PDS Atmospheres Node (ATM) to provide on-the-fly migration of Juno data from PDS3 to PDS4. Distributing data under both standards presents challenges in how to present the data to the end user without sacrificing accessibility or impacting the active PDS3 mission pipelines tasked with delivering the data on predetermined schedules. The PDS Atmospheres Node has leveraged its experience with prior active PDS4 missions (e.g., LADEE and MAVEN) and ongoing PDS3-to-PDS4 data migration efforts to provide seamless distribution of Juno data in both PDS3 and PDS4. When ATM receives a data delivery from the Juno Science Operations Center, the PDS3 labels are validated and then fed through PDS4 migration software built at ATM. Specifically, a collection of Python methods and scripts has been developed to make the migration process as automatic as possible, even when working with the more complex labels used by several of the Juno instruments. This is used to create all of the PDS4 data labels at once and build PDS4 archive bundles with minimal human effort. Resultant bundles are then validated against the PDS4 standard and released alongside the certified PDS3 versions of the same data. The newer design of the distribution pages provides access to both versions of the data, utilizing some of the enhanced capabilities of PDS4 to improve search and retrieval of Juno data. Webpages are designed with the intent of offering easy access to all documentation for Juno data as well as the data themselves in both standards for users of all experience levels. We discuss the structure and organization of the Juno archive and associated webpages as examples of joint PDS3/PDS4 archiving and distribution.
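
The abstract describes Python software that converts PDS3 labels into PDS4 labels. A minimal sketch of that kind of conversion, assuming a simplified key = value PDS3 label and a purely illustrative keyword-to-tag mapping (the real PDS4 schema is far richer and uses URN-style identifiers):

```python
import xml.etree.ElementTree as ET

def parse_pds3(label_text):
    """Parse a (simplified) PDS3 ODL label into key/value pairs."""
    fields = {}
    for line in label_text.splitlines():
        line = line.strip()
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip().strip('"')
    return fields

def to_pds4(fields):
    """Emit a minimal PDS4-style XML fragment from PDS3 keywords.

    The keyword-to-tag mapping here is a toy assumption, not the
    actual PDS4 Information Model.
    """
    root = ET.Element("Identification_Area")
    for pds3_key, pds4_tag in [("PRODUCT_ID", "logical_identifier"),
                               ("PRODUCT_VERSION_ID", "version_id"),
                               ("INSTRUMENT_NAME", "title")]:
        if pds3_key in fields:
            ET.SubElement(root, pds4_tag).text = fields[pds3_key]
    return ET.tostring(root, encoding="unicode")

# Hypothetical PDS3 label fragment; identifiers are invented.
label = 'PRODUCT_ID = "JNO-EXAMPLE-001"\nPRODUCT_VERSION_ID = "1.0"\nEND'
print(to_pds4(parse_pds3(label)))
```

Running the parse and emit steps in one pipeline is what allows all PDS4 labels for a delivery to be generated at once, as the abstract describes; validation against the PDS4 schema would follow as a separate step.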

  13. Back to the Future: Long-Term Seismic Archives Revisited

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2007-12-01

    Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades, aided by the ongoing deployment of seismic stations and their continued operation for monitoring seismic activity. These archives typically consist of waveforms of seismic events and associated parametric data such as phase arrival time picks and the location of hypocenters. Catalogs of earthquake locations are fundamental data in seismology, and in the Earth sciences in general. Yet these locations have notoriously low spatial resolution because of errors in both the picks and the models commonly used to locate events one at a time. This limits their potential to address fundamental questions concerning the physics of earthquakes, the structure and composition of the Earth's interior, and the seismic hazards associated with active faults. We report on the comprehensive use of modern waveform cross-correlation-based methodologies for high-resolution earthquake location, as applied to regional and global long-term seismic databases. By simultaneous re-analysis of two decades of the digital seismic archive of Northern California, reducing pick errors via cross-correlation and model errors via double-differencing, we achieve up to three orders of magnitude resolution improvement over existing hypocenter locations. The relocated events image networks of discrete faults at seismogenic depths across various tectonic settings that until now have been hidden in location uncertainties. Similar location improvements are obtained for earthquakes recorded at global networks by re-processing 40 years of parametric data from the ISC and corresponding waveforms archived at IRIS. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of entire archives may thus become a routine procedure to continuously improve resolution in existing catalogs.
We demonstrate the role of seismic archives
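
The pick-error reduction via cross-correlation mentioned above can be illustrated with a toy example: correlating two recordings of similar events yields a differential arrival time far more precise than independent analyst picks. The synthetic wavelet, sampling rate, and shift below are assumptions for illustration, not data from the study:

```python
import numpy as np

def differential_time(trace_a, trace_b, dt):
    """Estimate the lag (in seconds) that best aligns two waveforms.

    The peak of the normalized cross-correlation gives the relative
    arrival-time shift, which double-difference relocation then uses
    in place of noisier absolute picks.
    """
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    cc = np.correlate(a, b, mode="full")
    lag_samples = cc.argmax() - (len(b) - 1)
    return lag_samples * dt

# Synthetic example: same source wavelet, arriving 7 samples later.
dt = 0.01                        # assumed 100 Hz sampling
t = np.arange(0, 2, dt)
wavelet = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
shifted = np.roll(wavelet, 7)    # wrap-around is negligible here
print(differential_time(shifted, wavelet, dt))  # recovers the 7-sample (0.07 s) shift
```

Because each pair comparison is independent, this step parallelizes trivially across event pairs, which is consistent with the abstract's point that the method scales on inexpensive clusters.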

  14. Proceedings from the Texas ITS data uses and archiving workshop

    DOT National Transportation Integrated Search

    1999-03-01

    The "Texas ITS Data Uses and Archiving Workshop" was held November 10, 1998, in Austin, Texas, to discuss issues and opportunities related to archiving data from intelligent transportation systems (ITS). The workshop participants represented seve...

  15. CDDIS: NASA's Archive of Space Geodesy Data and Products Supporting GGOS

    NASA Technical Reports Server (NTRS)

    Noll, Carey; Michael, Patrick

    2016-01-01

    The Crustal Dynamics Data Information System (CDDIS) supports data archiving and distribution activities for the space geodesy and geodynamics community. The main objectives of the system are to store space geodesy and geodynamics related data and products in a central archive, to maintain information about the archival of these data, to disseminate these data and information in a timely manner to a global scientific research community, and to provide user-based tools for the exploration and use of the archive. The CDDIS data system and its archive are a key component in several of the geometric services within the International Association of Geodesy (IAG) and its observing system, the Global Geodetic Observing System (GGOS), including the IGS, the International DORIS Service (IDS), the International Laser Ranging Service (ILRS), the International VLBI Service for Geodesy and Astrometry (IVS), and the International Earth Rotation and Reference Systems Service (IERS). The CDDIS provides on-line access to over 17 Tbytes of data and derived products in support of the IAG services and GGOS. The system's archive continues to grow and improve as new activities are supported and enhancements are implemented. Recently, the CDDIS has established a real-time streaming capability for GNSS data and products. Furthermore, enhancements to metadata describing the contents of the archive have been developed to facilitate data discovery. This poster will provide a review of the improvements in the system infrastructure that CDDIS has made over the past year for the geodetic community and describe future plans for the system.

  16. Rejected Manuscripts in Publishers' Archives: Legal Rights and Access

    ERIC Educational Resources Information Center

    Hamburger, Susan

    2011-01-01

    This article focuses on an analysis of how various archival repositories deal with rejected manuscripts in publishers' archives as part of existing collections and as potential donations, and includes suggestions for ways to provide access while maintaining the author's legal rights. Viewpoints from the journal editor, author, archivist, and…

  17. Assessment of Self-Archiving in Institutional Repositories: Across Disciplines

    ERIC Educational Resources Information Center

    Xia, Jingfeng

    2007-01-01

    This research examined self-archiving practices by four disciplines in seven institutional repositories. By checking each individual item for its metadata and deposition status, the research found no clear evidence of a disciplinary culture. Rather, self-archiving is driven by a liaison system and a mandate policy.

  18. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    21 CFR 892.2050 (Food and Drugs; Food and Drug Administration, Department of Health and Human Services): Picture archiving and communications system. (a) Identification. A picture archiving and communications system is a device that...

  19. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  20. Production of Previews and Advanced Data Products for the ESO Science Archive

    NASA Astrophysics Data System (ADS)

    Rité, C.; Slijkhuis, R.; Rosati, P.; Delmotte, N.; Rino, B.; Chéreau, F.; Malapert, J.-C.

    2008-08-01

    We present a project carried out by the Virtual Observatory Systems Department/Advanced Data Products group to populate the ESO Science Archive Facility with image previews and advanced data products. The main goal is to give users of the ESO Science Archive Facility the possibility of viewing pre-processed images associated with instruments such as WFI, ISAAC, and SOFI before actually retrieving the data for full processing. The image processing is done using the ESO/MVM image reduction software developed at ESO to produce astrometrically calibrated FITS images, ranging from simple previews of single archive images to fully stacked mosaics. These data products can be accessed via the ESO Science Archive Query Form and viewed with the browser VirGO (http://archive.eso.org/cms/virgo).