ERIC Educational Resources Information Center
Chang, May
2000-01-01
Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…
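The kind of hierarchical EAD finding aid described above can be sketched with Python's standard XML library. The element names (`ead`, `archdesc`, `did`, `dsc`, `c01`) follow the EAD tag set, but the collection details below are invented for illustration, not taken from the Illinois archives.

```python
import xml.etree.ElementTree as ET

# Minimal EAD-style finding aid (invented collection details).
ead = ET.Element("ead")
archdesc = ET.SubElement(ead, "archdesc", level="collection")
did = ET.SubElement(archdesc, "did")
ET.SubElement(did, "unittitle").text = "Example Faculty Papers"
ET.SubElement(did, "unitdate").text = "1920-1965"
ET.SubElement(did, "physdesc").text = "3.5 linear feet"

# Hierarchical description: a series-level component nested in the
# description of subordinate components, something a flat MARC record
# or a single relational table cannot express directly.
dsc = ET.SubElement(archdesc, "dsc")
series = ET.SubElement(dsc, "c01", level="series")
ET.SubElement(ET.SubElement(series, "did"), "unittitle").text = "Correspondence"

xml_text = ET.tostring(ead, encoding="unicode")
print(xml_text)
```

Because the nesting of `c01` components mirrors the arrangement of the collection itself, retrieval tools can navigate or filter by level, which is the flexibility the abstract contrasts with MARC.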
A Structure Standard for Archival Context: EAC-CPF Is Here
ERIC Educational Resources Information Center
Dryden, Jean
2010-01-01
The archival community's new descriptive standard, "Encoded Archival Context" for Corporate Bodies, Persons, and Families (EAC-CPF), supports the sharing of descriptions of records creators and is a significant addition to the suite of standards for archival description. EAC-CPF is a data structure standard similar to its older sibling EAD…
Descriptive Metadata: Emerging Standards.
ERIC Educational Resources Information Center
Ahronheim, Judith R.
1998-01-01
Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…
ERIC Educational Resources Information Center
Devarrewaere, Anthony; Roelly, Aude
2005-01-01
The Archives Departementales de la Cote-d'Or chose as a priority for its automation plan the acquisition of a search engine, to publish online archival descriptions and the library catalogue. The Archives deliberately opted for a practical approach, using for the encoding of the finding aids an automatic data export from an archival management…
The Semantic Mapping of Archival Metadata to the CIDOC CRM Ontology
ERIC Educational Resources Information Center
Bountouri, Lina; Gergatsoulis, Manolis
2011-01-01
In this article we analyze the main semantics of archival description, expressed through Encoded Archival Description (EAD). Our main target is to map the semantics of EAD to the CIDOC Conceptual Reference Model (CIDOC CRM) ontology as part of a wider integration architecture of cultural heritage metadata. Through this analysis, it is concluded…
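A semantic mapping of this sort pairs EAD elements with CIDOC CRM classes. The pairs below are illustrative examples of the kind of correspondence involved (chosen from well-known CRM classes), not the actual mapping published in the article.

```python
# Illustrative EAD-element-to-CIDOC-CRM-class lookup. These pairs are
# examples of the kind of correspondence a mapping defines; they are
# not taken from the article's own mapping tables.
EAD_TO_CRM = {
    "ead": "E31 Document",        # the finding aid itself is a document
    "origination": "E39 Actor",   # the creator of the records
    "unitdate": "E52 Time-Span",
    "geogname": "E53 Place",
}

def crm_class(ead_element):
    """Return the CRM class for an EAD element, or 'unmapped'."""
    return EAD_TO_CRM.get(ead_element, "unmapped")

print(crm_class("origination"))  # → E39 Actor
```

In a real integration architecture the lookup would also carry property paths (how an `origination` actor attaches to the documented records), but a class-level table is the usual starting point.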
International Metadata Initiatives: Lessons in Bibliographic Control.
ERIC Educational Resources Information Center
Caplan, Priscilla
This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, major points of difference from…
Making Technology Work for Scholarship: Investing in the Data.
ERIC Educational Resources Information Center
Hockey, Susan
This paper examines issues related to how providers and consumers can make the best use of electronic information, focusing on the humanities. Topics include: new technology or old; electronic text and data formats; Standard Generalized Markup Language (SGML); text encoding initiative; encoded archival description (EAD); other applications of…
The "Metrica Regni" Project: The Polish Experience of EAD
ERIC Educational Resources Information Center
Wajs, Hubert
2005-01-01
The fonds of Crown Chancery Public Register ("Metrica Regni") was chosen for the pilot project to introduce Encoded Archival Description (EAD) because of its historical value, typical archival structure and existing finding aids. The rights and privileges granted by Polish kings were recorded in the Register. The oldest books in the…
The long hold: Storing data at the National Archives
NASA Technical Reports Server (NTRS)
Thibodeau, Kenneth
1991-01-01
A description of the information collection and storage needs of the National Archives and Records Administration (NARA) is presented. The unique situation of NARA is detailed. Two aspects which make the issue of obsolescence especially complex and costly are dealing with incoherent data and satisfying unknown and unknowable requirements. The data is incoherent because it comes from a wide range of independent sources, covers unrelated subjects, and is organized and encoded in ways that are not only not controlled but often unknown until received. NARA's mission to preserve and provide access to records with enduring value makes NARA, in effect, the agent of future generations. NARA's responsibility to the future places it in a perpetual quandary: it is devoted to serving needs which are unknown.
ModelArchiver—A program for facilitating the creation of groundwater model archives
Winston, Richard B.
2018-03-01
ModelArchiver is a program designed to facilitate the creation of groundwater model archives that meet the requirements of the U.S. Geological Survey (USGS) policy (Office of Groundwater Technical Memorandum 2016.02, https://water.usgs.gov/admin/memo/GW/gw2016.02.pdf, https://water.usgs.gov/ogw/policy/gw-model/). ModelArchiver version 1.0 leads the user step-by-step through the process of creating a USGS groundwater model archive. The user specifies the contents of each of the subdirectories within the archive and provides descriptions of the archive contents. Descriptions of some files can be specified automatically using file extensions. Descriptions also can be specified individually. Those descriptions are added to a readme.txt file provided by the user. ModelArchiver moves the content of the archive to the archive folder and compresses some folders into .zip files. As part of the archive, the modeler must create a metadata file describing the archive. The program has a built-in metadata editor and provides links to websites that can aid in creation of the metadata. The built-in metadata editor is also available as a stand-alone program named FgdcMetaEditor version 1.0, which also is described in this report. ModelArchiver updates the metadata file provided by the user with descriptions of the files in the archive. An optional archive list file generated automatically by ModelMuse can streamline the creation of archives by identifying input files, output files, model programs, and ancillary files for inclusion in the archive.
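The extension-based description step can be sketched as a lookup table with per-file overrides. The extension mapping and file names below are invented for illustration; the real ModelArchiver tool ships its own built-in rules.

```python
# Sketch: assign archive-file descriptions by extension, with per-file
# overrides for files the modeler describes individually. The mapping
# and file names are hypothetical, not ModelArchiver's actual rules.
EXT_DESCRIPTIONS = {
    ".nam": "MODFLOW name file",
    ".dis": "Discretization input file",
    ".lst": "Model listing (output) file",
}

def describe(files, overrides=None):
    """Map each file name to a description for the readme.txt entries."""
    overrides = overrides or {}
    out = {}
    for name in files:
        ext = name[name.rfind("."):] if "." in name else ""
        out[name] = overrides.get(name, EXT_DESCRIPTIONS.get(ext, "Undocumented file"))
    return out

readme_entries = describe(
    ["model.nam", "model.dis", "model.lst", "notes.txt"],
    overrides={"notes.txt": "Modeler's notes"},
)
for name, desc in sorted(readme_entries.items()):
    print(f"{name}: {desc}")
```

Falling back to "Undocumented file" makes missing descriptions visible in the readme, which is the point of the policy's documentation requirement.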
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
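The five-part structure listed above maps onto the top-level lists of a SED-ML document. The skeleton below reflects the general shape of the format (models, changes, simulations, tasks, data generators, outputs); the identifiers, model source, and attribute values are invented for illustration and the change target is left as a placeholder.

```python
import xml.etree.ElementTree as ET

# Skeleton of a SED-ML-style document mirroring the five parts above.
# Identifiers and the model source are hypothetical.
sed = ET.Element("sedML", level="1", version="3")

models = ET.SubElement(sed, "listOfModels")                 # (i) which models
model = ET.SubElement(models, "model", id="m1", source="model.xml")
changes = ET.SubElement(model, "listOfChanges")             # (ii) modifications
ET.SubElement(changes, "changeAttribute",
              target="(XPath to a model parameter)", newValue="0.5")

sims = ET.SubElement(sed, "listOfSimulations")              # (iii) procedures
ET.SubElement(sims, "uniformTimeCourse", id="sim1",
              initialTime="0", outputEndTime="10", numberOfPoints="100")
tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", id="t1",
              modelReference="m1", simulationReference="sim1")

ET.SubElement(sed, "listOfDataGenerators")                  # (iv) post-processing
ET.SubElement(sed, "listOfOutputs")                         # (v) plots, reports

print(ET.tostring(sed, encoding="unicode"))
```

Tasks tie a model to a simulation by reference, so the same model can be run under several procedures without duplication, which is what makes the chained and repeated tasks of L1V2 possible.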
Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas
2011-12-15
The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
ERIC Educational Resources Information Center
Kurtz, Michael J.; Eichorn, Guenther; Accomazzi, Alberto; Grant, Carolyn S.; Demleitner, Markus; Murray, Stephen S.; Jones, Michael L. W.; Gay, Geri K.; Rieger, Robert H.; Millman, David; Bruggemann-Klein, Anne; Klein, Rolf; Landgraf, Britta; Wang, James Ze; Li, Jia; Chan, Desmond; Wiederhold, Gio; Pitti, Daniel V.
1999-01-01
Includes six articles that discuss a digital library for astronomy; comparing evaluations of digital collection efforts; cross-organizational access management of Web-based resources; searching scientific bibliographic databases based on content-based relations between documents; semantics-sensitive retrieval for digital picture libraries; and…
Subject Access Points in the MARC Record and Archival Finding Aid: Enough or Too Many?
ERIC Educational Resources Information Center
Cox, Elizabeth; Czechowski, Leslie
2007-01-01
In this research project, the authors set out to discover the current practice in both the archival and cataloging worlds for usage of access points in descriptive records and to learn how archival descriptive practices fit into long-established library cataloging procedures and practices. A sample of archival finding aids and MARC records at 123…
Data Integration Using SOAP in the VSO
NASA Astrophysics Data System (ADS)
Tian, K. Q.; Bogart, R. S.; Davey, A.; Dimitoglou, G.; Gurman, J. B.; Hill, F.; Martens, P. C.; Wampler, S.
2003-05-01
The Virtual Solar Observatory (VSO) project has implemented a time interval search for all four participating data archives. The back-end query services are implemented as web services, and are accessible via SOAP. SOAP (Simple Object Access Protocol) defines an RPC (Remote Procedure Call) mechanism that employs HTTP as its transport and encodes the client-server interactions (request and response messages) in XML (eXtensible Markup Language) documents. In addition to its core function of identifying relevant datasets in the local archive, the SOAP server at each data provider acts as a "wrapper" that maps descriptions in an abstract data model to those in the provider-specific data model, and vice versa. It is in this way that VSO integrates heterogeneous data services and allows access to them using a common interface. Our experience with SOAP has been fruitful. It has proven to be a better alternative to traditional web access methods, namely POST and GET, because of its flexibility and interoperability.
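A SOAP request of the kind described, a time-interval query encoded as XML, can be sketched with the standard library. The SOAP envelope namespace is the standard one; the `Query` body element and its fields are invented, since the actual VSO services define their own message schema.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Sketch of a SOAP request body for a time-interval archive search.
# The Query element and its child fields are hypothetical.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
query = ET.SubElement(body, "Query")
ET.SubElement(query, "startTime").text = "2003-01-01T00:00:00"
ET.SubElement(query, "endTime").text = "2003-01-02T00:00:00"

request_xml = ET.tostring(envelope, encoding="unicode")
print(request_xml)
```

In the architecture described, a document like this would be POSTed over HTTP to each provider's SOAP endpoint, whose wrapper translates the abstract query fields into its local data model before answering with a response envelope.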
The Archival Photograph and Its Meaning: Formalisms for Modeling Images
ERIC Educational Resources Information Center
Benson, Allen C.
2009-01-01
This article explores ontological principles and their potential applications in the formal description of archival photographs. Current archival descriptive practices are reviewed and the larger question is addressed: do archivists who are engaged in describing photographs need a more formalized system of representation, or do existing encoding…
ERIC Educational Resources Information Center
Roth-Lochner, Barbara; Grange, Didier
2005-01-01
This paper presents the results of a partnership begun in 2002 in the field of archival description between the Geneva City Archives (AVG) and the Manuscripts Department of the Public and University Library of Geneva (BPU). This cooperation has allowed the creation of two computer applications, which share technical and conceptual foundations.…
ERIC Educational Resources Information Center
Angel, Christine Marie
2012-01-01
This study is a comparison of the descriptive tagging practices among library, archive, and museum professionals using an inter-indexing consistency approach. The first purpose of this study was to determine the extent of the similarities and differences among professional groups when assigning descriptive tags to a wide variety of objects that…
ESDORA: A Data Archive Infrastructure Using Digital Object Model and Open Source Frameworks
NASA Astrophysics Data System (ADS)
Shrestha, Biva; Pan, Jerry; Green, Jim; Palanisamy, Giriprakash; Wei, Yaxing; Lenhardt, W.; Cook, R. Bob; Wilson, B. E.; Leggott, M.
2011-12-01
There is an array of challenges associated with preserving, managing, and using contemporary scientific data. Large volume, multiple formats and data services, and the lack of a coherent mechanism for metadata/data management are some of the common issues across data centers. It is often difficult to preserve the data history and lineage information, along with other descriptive metadata, hindering the true science value for the archived data products. In this project, we use digital object abstraction architecture as the information/knowledge framework to address these challenges. We have used the following open-source frameworks: Fedora-Commons Repository, Drupal Content Management System, Islandora (Drupal Module) and Apache Solr Search Engine. The system is an active archive infrastructure for Earth Science data resources, which includes ingestion, archiving, distribution, and discovery functionalities. We use an ingestion workflow to ingest the data and metadata, where many different aspects of data descriptions (including structured and non-structured metadata) are reviewed. The data and metadata are published after reviewing multiple times. They are staged during the reviewing phase. Each digital object is encoded in XML for long-term preservation of the content and relations among the digital items. The software architecture provides a flexible, modularized framework for adding pluggable user-oriented functionality. Solr is used to enable full-text search as well as faceted search. A home-grown spatial search module is plugged in to allow users to make a spatial selection in a map view. A RDF semantic store within the Fedora-Commons Repository is used for storing information on data lineage, dissemination services, and text-based metadata. We use the semantic notion "isViewerFor" to register internally or externally referenced URLs, which are rendered within the same web browser when possible.
With appropriate mapping of content into digital objects, many different data descriptions, including structured metadata, data history, and auditing trails, are captured and coupled with the data content. The semantic store provides a foundation for possible further utilizations, including providing a full-fledged Earth Science ontology for data interpretation or lineage tracking. Datasets from the NASA-sponsored Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) as well as from the Synthesis Thematic Data Center (MAST-DC) are used in a testing deployment with the system. The testing deployment allows us to validate the features and values described here for the integrated system. Overall, we believe that the integrated system is valid, reusable data archive software that provides digital stewardship for Earth Sciences data content, now and in the future. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2] Devarakonda, Ranjeet, et al. "Semantic search integration to climate data." Collaboration Technologies and Systems (CTS), 2014 International Conference on. IEEE, 2014.
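The "isViewerFor" relation amounts to RDF-style triples linking a viewer URL to a dataset identifier. The sketch below models that in plain Python; the URLs and dataset identifiers are invented, and the real system keeps these triples in the RDF store inside the Fedora-Commons repository.

```python
# Sketch of the "isViewerFor" relation as (subject, predicate, object)
# triples. Viewer URLs and dataset identifiers are hypothetical.
triples = set()

def register_viewer(viewer_url, dataset_pid):
    """Record that viewer_url renders the dataset with this identifier."""
    triples.add((viewer_url, "isViewerFor", dataset_pid))

def viewers_for(dataset_pid):
    """All viewer URLs registered for a dataset, e.g. to embed in a page."""
    return sorted(s for (s, p, o) in triples
                  if p == "isViewerFor" and o == dataset_pid)

register_viewer("https://example.org/map-viewer", "archive:dataset-42")
register_viewer("https://example.org/timeseries-plot", "archive:dataset-42")
print(viewers_for("archive:dataset-42"))
```

Storing the relation as data rather than hard-coding viewers per page is what lets internally and externally hosted viewers be added without changing the repository software.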
Digitized Archival Primary Sources in STEM: A Selected Webliography
ERIC Educational Resources Information Center
Jankowski, Amy
2017-01-01
Accessibility and findability of digitized archival resources can be a challenge, particularly for students or researchers not familiar with archival formats and digital interfaces, which adhere to different descriptive standards than more widely familiar library resources. Numerous aggregate archival collection databases exist, which provide a…
Cassini/Huygens Program Archive Plan for Science Data
NASA Technical Reports Server (NTRS)
Conners, D.
2000-01-01
The purpose of this document is to describe the Cassini/Huygens science data archive system which includes policy, roles and responsibilities, description of science and supplementary data products or data sets, metadata, documentation, software, and archive schedule and methods for archive transfer to the NASA Planetary Data System (PDS).
Using and Distributing Spaceflight Data: The Johnson Space Center Life Sciences Data Archive
NASA Technical Reports Server (NTRS)
Cardenas, J. A.; Buckey, J. C.; Turner, J. N.; White, T. S.; Havelka, J. A.
1995-01-01
Life sciences data collected before, during and after spaceflight are valuable and often irreplaceable, but much of the data is hard to find. The Johnson Space Center Life Sciences Data Archive has been designed to provide researchers, engineers, managers and educators interactive access to information about and data from human spaceflight experiments. The archive system consists of a Data Acquisition System, Database Management System, CD-ROM Mastering System and Catalog Information System (CIS). The catalog information system is the heart of the archive. The CIS provides detailed experiment descriptions (both written and as QuickTime movies), hardware descriptions, hardware images, documents, and data. An initial evaluation of the archive at a scientific meeting showed that 88% of those who evaluated the catalog want to use the system when completed. The majority of the evaluators found the archive flexible, satisfying and easy to use. We conclude that the data archive effectively provides key life sciences data to interested users.
Increasing Access to Archival Records in Library Online Public Access Catalogs.
ERIC Educational Resources Information Center
Gilmore, Matthew B.
1988-01-01
Looks at the use of online public access catalogs, the utility of subject and call-number searching, and possible archival applications. The Wallace Archives at the Claremont Colleges is used as an example of the availability of bibliographic descriptions of multiformat archival materials through the library catalog. Sample records and searches…
Clay Tablets to Micro Chips: The Evolution of Archival Practice into the Twenty-First Century.
ERIC Educational Resources Information Center
Hannestad, Stephen E.
1991-01-01
Describes archival concepts and theories and their evolution in recent times. Basic archival functions--appraisal, arrangement, description, reference, preservation, and publication--are introduced. Early applications of automation to archives (including SPINDEX, NARS-5, NARS-A-1, MARC AMC, presNET, CTRACK, PHOTO, and DIARY) and automation trends…
How Twenty-Five People Shook the Archival World: The Case of Descriptive Standards
ERIC Educational Resources Information Center
Davis, Susan E.
2006-01-01
This study explores the development of the archival profession during the 1980s, a period that experienced rapid change and the adoption of the first descriptive standards. The research focuses on the leadership roles played by individuals acting independently and on behalf of their employing institutions and professional associations in the…
Archive data base and handling system for the Orbiter flying qualities experiment program
NASA Technical Reports Server (NTRS)
Myers, T. T.; Dimarco, R.; Magdaleno, R. E.; Aponso, B. L.
1986-01-01
The OFQ archives data base and handling system assembled as part of the Orbiter Flying Qualities (OFQ) research of the Orbiter Experiments Program (EOX) are described. The purpose of the OFQ archives is to preserve and document shuttle flight data relevant to vehicle dynamics, flight control, and flying qualities in a form that permits maximum use for qualified users. In their complete form, the OFQ archives contain descriptive text (general information about the flight, signal descriptions and units) as well as numerical time history data. Since the shuttle program is so complex, the official data base contains thousands of signals and very complex entries are required to obtain data. The OFQ archives are intended to provide flight phase oriented data subsets with relevant signals which are easily identified for flying qualities research.
NASA Technical Reports Server (NTRS)
Jackson, Bruce
2006-01-01
DAVEtools is a set of Java archives that embodies tools for manipulating flight-dynamics models that have been encoded in dynamic aerospace vehicle exchange markup language (DAVE-ML). DAVE-ML is an application of the Extensible Markup Language (XML) for encoding complete computational models of the dynamics of aircraft and spacecraft.
An Invitation to the ALA Archives.
ERIC Educational Resources Information Center
Beckel, Deborah; Brichford, Maynard
1984-01-01
Description of materials found in American Library Association Archives located at University of Illinois highlights 1905 letter defending Melvil Dewey, the 1900 Saguenay River Trip, children's librarians, library education, 1926 visit to President Coolidge by foreign librarians, and the American Library in Mexico. Notes on using the archives are…
Publications - GMC 407 | Alaska Division of Geological & Geophysical
Drilling procedures, sample descriptions, boring logs, borehole locations, and archive inventory for 32 near-shore marine sediment Vibracore samples, West Dock Causeway.
NASA Technical Reports Server (NTRS)
Singley, P. T.; Bell, J. D.; Daugherty, P. F.; Hubbs, C. A.; Tuggle, J. G.
1993-01-01
This paper will discuss user interface development and the structure and use of metadata for the Atmospheric Radiation Measurement (ARM) Archive. The ARM Archive, located at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is the data repository for the U.S. Department of Energy's (DOE's) ARM Project. After a short description of the ARM Project and the ARM Archive's role, we will consider the philosophy and goals, constraints, and prototype implementation of the user interface for the archive. We will also describe the metadata that are stored at the archive and support the user interface.
EAC and the Development of National and European Gateways to Archives
ERIC Educational Resources Information Center
Ottosson, Per-Gunnar
2005-01-01
In the development of gateways to archives there are two different approaches, one focusing on the descriptions of the material and the other on the creators. Search and retrieval with precision and quality require controlled access points and name authority control. National registries of private archives have a long tradition in implementing the…
ERIC Educational Resources Information Center
Bourdon, Francoise
2005-01-01
The translation into French of the Encoded Archival Context (EAC) DTD tag library has been in progress for a few months. It is being carried out by a group of experts gathered by AFNOR, the French national standards agency. The main goal of this group is to foster the interoperability of authority data between archives, libraries and museums, and…
ERIC Educational Resources Information Center
Goulet, Anne; Maftei, Nicolas
2005-01-01
At the Archives Departementales des Pyrenees-Atlantiques, the encoding of more than forty legacy finding aids written between 1863 and 2000 is part of a program of digitization of the collections. Because of the size of the project, an external consultant, ArchProteus, has been brought in and specific management procedures have been put in place…
ITS data archiving : five-year program description
DOT National Transportation Integrated Search
2000-03-01
The purpose of this document is to explain the need for and elements of a Federal program : addressing the archiving and multi-agency use of data generated from Intelligent : Transportation Systems (ITS) applications. The development of this program ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoopman, J. D.
This report documents Livermore Computing (LC) activities in support of ASC L2 milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache, due March 31, 2016. The full text of the milestone is included in Attachment 1. The description of the milestone is: Configuration of archival disk cache systems will be modernized to reduce fragmentation, and new, higher capacity disk subsystems will be deployed. This will enhance archival disk cache capability for ASC archive users, enabling files written to the archives to remain resident on disk for many (6–12) months, regardless of file size. The milestone was completed in three phases. On August 26, 2015, subsystems with 6 PB of disk cache were deployed for production use in LLNL's unclassified HPSS environment. Following that, on September 23, 2015, subsystems with 9 PB of disk cache were deployed for production use in LLNL's classified HPSS environment. On January 31, 2016, the milestone was fully satisfied when the legacy Data Direct Networks (DDN) archive disk cache subsystems were fully retired from production use in both LLNL's unclassified and classified HPSS environments, and only the newly deployed systems were in use.
ITS data archiving : five-year program description
DOT National Transportation Integrated Search
2000-05-01
The purpose of this document is to explain the need for and elements of a Federal program addressing the archiving and multi-agency use of data generated from Intelligent Transportation Systems (ITS) applications. The development of this program buil...
Technology and the Transformation of Archival Description
ERIC Educational Resources Information Center
Pitti, Daniel V.
2005-01-01
The emergence of computer and network technologies has presented the archival profession with daunting challenges as well as inspiring opportunities. Archivists have been actively imagining and realizing the application of advanced technologies to their professional functions and activities. Using advanced technologies, archivists have been…
Life Sciences Data Archive Scientific Development
NASA Technical Reports Server (NTRS)
Buckey, Jay C., Jr.
1995-01-01
The Life Sciences Data Archive will provide scientists, managers and the general public with access to biomedical data collected before, during and after spaceflight. These data are often irreplaceable and represent a major resource from the space program. For these data to be useful, however, they must be presented with enough supporting information, description and detail so that an interested scientist can understand how, when and why the data were collected. The goal of this contract was to provide a scientific consultant to the archival effort at the NASA-Johnson Space Center. This consultant (Jay C. Buckey, Jr., M.D.) is a scientist, who was a co-investigator on both the Spacelab Life Sciences-1 and Spacelab Life Sciences-2 flights. In addition he was an alternate payload specialist for the Spacelab Life Sciences-2 flight. In this role he trained on all the experiments on the flight and so was familiar with the protocols, hardware and goals of all the experiments on the flight. Many of these experiments were flown on both SLS-1 and SLS-2. This background was useful for the archive, since the first mission to be archived was Spacelab Life Sciences-1. Dr. Buckey worked directly with the archive effort to ensure that the parameters, scientific descriptions, protocols and data sets were accurate and useful.
Deep hierarchical attention network for video description
NASA Astrophysics Data System (ADS)
Li, Shuohao; Tang, Min; Zhang, Jun
2018-03-01
Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms state-of-the-art techniques.
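The core operation of any such attention decoder is a softmax-weighted pooling over per-frame features. The toy sketch below shows just that step in pure Python; real systems use CNN frame encoders and LSTM decoders with learned score functions, and the feature vectors and scores here are made up.

```python
import math

# Toy attention-weighted pooling over per-frame feature vectors.
# Features and attention scores are invented; in a real decoder the
# scores come from comparing decoder state with each frame encoding.
def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(frame_features, scores):
    """Weighted sum of frame feature vectors under softmax(scores)."""
    weights = softmax(scores)
    dim = len(frame_features[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_features))
            for d in range(dim)]

frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context = attend(frames, scores=[0.1, 0.1, 5.0])
print(context)  # dominated by the third frame's features
```

A hierarchical decoder applies this twice, first within segments of frames and then across segment summaries, which is what gives it the global-context advantage over a single attention layer.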
ERIC Educational Resources Information Center
Delcambre, Angie C., Comp.; And Others
This finding aid is a selected list of supernatural-related narratives recorded in the United States and held in the Archive of Folk Culture of the Library of Congress. Brief descriptions of the recordings are accompanied by identification numbers. Information about listening to or ordering any of the listed recordings is available from the…
Bookbinding and the Conservation of Books. A Dictionary of Descriptive Terminology.
ERIC Educational Resources Information Center
Roberts, Matt T.; Etherington, Don
Intended for bookbinders and conservators of library and archival material and for those working in related fields, such as bibliography and librarianship, this dictionary contains definitions for the nomenclature of bookbinding and the conservation of archival material, illustrations of bookbinding equipment and processes, and biographical…
The screwworm eradication data system archives
NASA Technical Reports Server (NTRS)
Barnes, C. M.; Spaulding, R. R.; Giddings, L. E.
1977-01-01
The archives accumulated during 1 year of operation of the Satellite Temperature-Monitoring System during development of the Screwworm Eradication Data System are reported. Brief descriptions of all the kinds of tapes, as well as their potential uses, are presented. Reference is made to other documents that explain the generation of these data.
77 FR 7151 - Privacy Act of 1974: New System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-10
... Outstanding, Total Export Credit losses for last 3 years, Five Largest Export Sales Markets, Description of... litigation; h. By National Archives and Records Administration for record management inspections in its role...-275-02-01-1a approved by National Archives and Records Administration September 27, 2002. System...
Building a Collaborative Position Description Archive as a Community of Practice
ERIC Educational Resources Information Center
Keith, Brian W.; Smith, Bonnie J.; Taylor, Laurie N.
2017-01-01
Analyzing position descriptions provides insights into new and emerging trends, especially as the role of academic and research libraries continues to evolve, and new position types and new ways of organizing work emerge. Personnel officers and other library leaders frequently collaborate by sharing position descriptions in an effort to understand…
Overview: DVD-video disc set of seafloor transects during USGS research cruises in the Pacific Ocean
Chezar, Henry; Newman, Ivy
2006-01-01
Many USGS research programs involve the gathering of underwater seafloor video footage. This footage was captured on a variety of media, including Beta III and VHS tapes. Much of this media is now deteriorating, prompting the migration of this video footage onto DVD-Video discs. Advantages of using DVD-Video discs are: less storage space, ease of transport, wider distribution, and non-degradational viewing of the media. The videos in this particular collection (328 of them) were made on the ocean floor under President Reagan's Exclusive Economic Zone proclamation of 1983. There are now five copies of these 328 discs in existence: at the USGS libraries in Menlo Park, Calif., Denver, Colo., and Reston, Va.; at the USGS Publications Warehouse (masters from which to make copies for customers); and Hank Chezar's USGS Western Coastal and Marine Geology team archives. The purpose of Open-File Report 2004-1101 is to provide users with a listing of the available DVD-Video discs (with their Open-File Report numbers) along with a brief description of their associated USGS research activities. Each disc was created by first encoding the source video and audio into MPEG-2 streams using the MediaPress Pro hardware encoder. A menu for the disc was then made using Adobe Photoshop 6.0. The disc was then authored using DVD Studio Pro and subsequently written onto a DVD-R recordable disc.
Clark, Edward B; Hickinbotham, Simon J; Stepney, Susan
2017-05-01
We present a novel stringmol-based artificial chemistry system modelled on the universal constructor architecture (UCA) first explored by von Neumann. In a UCA, machines interact with an abstract description of themselves to replicate by copying the abstract description and constructing the machines that the abstract description encodes. DNA-based replication follows this architecture, with DNA being the abstract description, the polymerase being the copier, and the ribosome being the principal machine in expressing what is encoded on the DNA. This architecture is semantically closed, as the machine that defines what the abstract description means is itself encoded on that abstract description. We present a series of experiments with the stringmol UCA that show the evolution of the meaning of genomic material, allowing the concept of semantic closure and transitions between semantically closed states to be elucidated in the light of concrete examples. We present results in which, for the first time in an in silico system, the genomic material, copier and constructor of a UCA evolve simultaneously, giving rise to viable offspring. © 2017 The Author(s).
STScI Archive Manual, Version 7.0
NASA Astrophysics Data System (ADS)
Padovani, Paolo
1999-06-01
The STScI Archive Manual provides the information a user needs to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It describes the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. Using StarView, one can also find calibration reference files and perform detailed association searches. With the WWW interface, archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets which include, among others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver which simplifies searches based on target name.
Impact of Environmental Pollution on the Preservation of Archives and Records: A RAMP Study.
ERIC Educational Resources Information Center
Pascoe, M. W.
Following a description of the essential chemical and physical structures of most archive documents, this paper examines the various pollutants that can damage these documents and gives their characteristics. The pollutants are categorized as environmental (e.g., smokes, mineral dusts); exterior gas and vapor (e.g., oxygen, water, sulphur…
Astronomical catalog desk reference, 1994 edition
NASA Technical Reports Server (NTRS)
1994-01-01
The Astronomical Catalog Desk Reference is designed to aid astronomers in locating machine readable catalogs in the Astronomical Data Center (ADC) archives. The key reference components of this document are as follows: A listing of shortened titles for all catalogs available from the ADC (includes the name of the lead author and year of publication), brief descriptions of over 300 astronomical catalogs, an index of ADC catalog numbers by subject keyword, and an index of ADC catalog numbers by author. The heart of this document is the set of brief descriptions generated by the ADC staff. The 1994 edition of the Astronomical Catalog Desk Reference contains descriptions for over one third of the catalogs in the ADC archives. Readers are encouraged to refer to this section for concise summaries of those catalogs and their contents.
Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.
Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella
2010-07-01
Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them depends only on their number, not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method compares favorably with state-of-the-art MDC techniques.
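A minimal sketch of the multirate-blending idea described above, with hypothetical stand-in "encoders" in place of the paper's JPEG 2000/H.264 rate points: each description carries alternating blocks at a high rate and the remaining blocks at a low rate, so either description alone yields a usable (coarser) reconstruction, while both together recover full quality.

```python
def make_descriptions(blocks, encode_hi, encode_lo):
    """Toy two-description MDC: interleave high-rate and low-rate
    encodings of the same blocks across the two descriptions."""
    d1 = [encode_hi(b) if i % 2 == 0 else encode_lo(b) for i, b in enumerate(blocks)]
    d2 = [encode_lo(b) if i % 2 == 0 else encode_hi(b) for i, b in enumerate(blocks)]
    return d1, d2

def decode(d1=None, d2=None):
    """Prefer the high-rate copy of each block when both descriptions arrive."""
    n = len(d1 or d2)
    out = []
    for i in range(n):
        if d1 is not None and d2 is not None:
            out.append(d1[i] if i % 2 == 0 else d2[i])  # pick the high-rate copy
        else:
            out.append((d1 or d2)[i])
    return out

# Stand-in "rates": full precision vs. coarse quantisation
hi = lambda x: x
lo = lambda x: round(x, 1)
d1, d2 = make_descriptions([3.14159, 2.71828, 1.41421], hi, lo)
```

Receiving both descriptions recovers every block at the high rate; receiving one still decodes every block, half of them coarsely.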
NASA Technical Reports Server (NTRS)
Thompson, Susan E.; Fraquelli, Dorothy; Van Cleve, Jeffrey E.; Caldwell, Douglas A.
2016-01-01
A description of Kepler, its design, performance and operational constraints may be found in the Kepler Instrument Handbook (KIH; Van Cleve & Caldwell 2016). Kepler calibration and data processing are described in the Kepler Data Processing Handbook (KDPH; Jenkins et al. 2016; Fanelli et al. 2011). Science users should also consult the special ApJ Letters issue devoted to early Kepler results and mission design (April 2010, ApJL, Vol. 713, L79-L207). Additional technical details regarding the data processing and data quality can be found in the Kepler Data Characteristics Handbook (KDCH; Christiansen et al. 2013) and the Data Release Notes (DRN). This archive manual specifically documents the file formats as they exist for the last data release of Kepler, Data Release 25 (KSCI-19065-002). Earlier versions of the archive manual and data release notes serve as documentation for earlier versions of the data files.
A new archival infrastructure for highly-structured astronomical data
NASA Astrophysics Data System (ADS)
Dovgan, Erik; Knapic, Cristina; Sponza, Massimo; Smareglia, Riccardo
2018-03-01
With the advent of the 2020 era of radio astronomy telescopes, the amount and format of radio-astronomical data are becoming a massive, performance-critical challenge. This evolution of data models and data formats requires new data archiving techniques that allow massive and fast storage of data that can, at the same time, be processed efficiently. Useful expertise in efficient archiving has been gained through data archiving for the Medicina and Noto Radio Telescopes. The archival infrastructure presented here, named the Radio Archive, stores and handles various formats, such as FITS, MBFITS, and VLBI's XML, including description and ancillary files. The modeling and architecture of the archive fulfill the requirements of both data persistence and easy data discovery and exploitation. The archive already complies with the Virtual Observatory directives, so future service implementations will also be VO compliant. This article presents the Radio Archive services and tools, from data acquisition to end-user data utilization.
ERIC Educational Resources Information Center
Nicewarner, Metta
1988-01-01
Description of the microfilming of a women's studies archive at the Texas Woman's University Library discusses: (1) project background; (2) criteria for equipment purchase; (3) equipment selected; (4) recommended resources; (5) indexing and layout decisions; (6) the filming process; and (7) the pros and cons of in-house microreproduction. (three…
Archiving of HEAO-1 data products and the creation of a general user's guide to the archive
NASA Technical Reports Server (NTRS)
Nousek, John A.
1993-01-01
The activities at Penn State University are described. Initiated at Penn State in Jan. 1989, the goal of this program was to preserve the results of the HEAO-1 mission by transforming the obsolete and disorganized data products into modern and documented forms. The result of this effort was an archive of top level data products, totalling 70 Mbytes; a general User's Guide to the archive, which is attached; and a hardcopy archive containing standardized plots and output of fits made to all the pointing data taken by the HEAO-1 A-2 LED experiment. A more detailed description of these activities is found in the following sections. Accompanying this document is a copy of the User's Guide which may provide additional detail.
The European Radiobiology Archives (ERA)--content, structure and use illustrated by an example.
Gerber, G B; Wick, R R; Kellerer, A M; Hopewell, J W; Di Majo, V; Dudoignon, N; Gössner, W; Stather, J
2006-01-01
The European Radiobiology Archives (ERA), supported by the European Commission and the European Late Effect Project Group (EULEP), together with the US National Radiobiology Archives (NRA) and the Japanese Radiobiology Archives (JRA), have collected all information still available on long-term animal experiments, including some selected human studies. The archives consist of a database in Microsoft Access, a website, databases of references and information on the use of the database. At present, the archives contain a description of the exposure conditions, animal strains, etc. from approximately 350,000 individuals; data on survival and pathology are available for approximately 200,000 individuals. Care has been taken to render pathological diagnoses compatible among different studies and to allow the grouping of pathological diagnoses into more general classes. 'Forms' in Access with an underlying computer code facilitate the use of the database. This paper describes the structure and content of the archives and illustrates, with an example, a possible analysis of such data.
ChIP-seq guidelines and practices of the ENCODE and modENCODE consortia.
Landt, Stephen G; Marinov, Georgi K; Kundaje, Anshul; Kheradpour, Pouya; Pauli, Florencia; Batzoglou, Serafim; Bernstein, Bradley E; Bickel, Peter; Brown, James B; Cayting, Philip; Chen, Yiwen; DeSalvo, Gilberto; Epstein, Charles; Fisher-Aylor, Katherine I; Euskirchen, Ghia; Gerstein, Mark; Gertz, Jason; Hartemink, Alexander J; Hoffman, Michael M; Iyer, Vishwanath R; Jung, Youngsook L; Karmakar, Subhradip; Kellis, Manolis; Kharchenko, Peter V; Li, Qunhua; Liu, Tao; Liu, X Shirley; Ma, Lijia; Milosavljevic, Aleksandar; Myers, Richard M; Park, Peter J; Pazin, Michael J; Perry, Marc D; Raha, Debasish; Reddy, Timothy E; Rozowsky, Joel; Shoresh, Noam; Sidow, Arend; Slattery, Matthew; Stamatoyannopoulos, John A; Tolstorukov, Michael Y; White, Kevin P; Xi, Simon; Farnham, Peggy J; Lieb, Jason D; Wold, Barbara J; Snyder, Michael
2012-09-01
Chromatin immunoprecipitation (ChIP) followed by high-throughput DNA sequencing (ChIP-seq) has become a valuable and widely used approach for mapping the genomic location of transcription-factor binding and histone modifications in living cells. Despite its widespread use, there are considerable differences in how these experiments are conducted, how the results are scored and evaluated for quality, and how the data and metadata are archived for public use. These practices affect the quality and utility of any global ChIP experiment. Through our experience in performing ChIP-seq experiments, the ENCODE and modENCODE consortia have developed a set of working standards and guidelines for ChIP experiments that are updated routinely. The current guidelines address antibody validation, experimental replication, sequencing depth, data and metadata reporting, and data quality assessment. We discuss how ChIP quality, assessed in these ways, affects different uses of ChIP-seq data. All data sets used in the analysis have been deposited for public viewing and downloading at the ENCODE (http://encodeproject.org/ENCODE/) and modENCODE (http://www.modencode.org/) portals.
Linking multiple biodiversity informatics platforms with Darwin Core Archives
2014-01-01
Abstract We describe an implementation of the Darwin Core Archive (DwC-A) standard that allows for the exchange of biodiversity information contained within the Scratchpads virtual research environment with external collaborators. Using this single archive file Scratchpad users can expose taxonomies, specimen records, species descriptions and a range of other data to a variety of third-party aggregators and tools (currently Encyclopedia of Life, eMonocot Portal, CartoDB, and the Common Data Model) for secondary use. This paper describes our technical approach to dynamically building and validating Darwin Core Archives for the 600+ Scratchpad user communities, which can be used to serve the diverse data needs of all of our content partners. PMID:24723785
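As a hedged illustration of the packaging idea behind DwC-A (a zip containing a delimited data file plus a meta.xml descriptor), a simplified Python sketch follows. The descriptor here is deliberately skeletal; a real archive declares field indices, term URIs, and any extension files per the Darwin Core text guide.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

def build_dwca(rows):
    """Sketch of a minimal Darwin Core Archive: a zip holding one
    tab-delimited core data file plus a meta.xml descriptor."""
    data = "id\tscientificName\n" + "\n".join(f"{i}\t{n}" for i, n in rows)
    meta = ET.Element("archive", xmlns="http://rs.tdwg.org/dwc/text/")
    core = ET.SubElement(meta, "core",
                         rowType="http://rs.tdwg.org/dwc/terms/Occurrence")
    files = ET.SubElement(core, "files")
    ET.SubElement(files, "location").text = "occurrence.txt"
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("occurrence.txt", data)
        z.writestr("meta.xml", ET.tostring(meta, encoding="unicode"))
    return buf.getvalue()

archive = build_dwca([(1, "Quercus robur"), (2, "Fagus sylvatica")])
```

The single self-describing file is what lets aggregators such as those named above consume data without bespoke integration code.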
Architecture for VLSI design of Reed-Solomon encoders
NASA Technical Reports Server (NTRS)
Liu, K. Y.
1982-01-01
A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.
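For readers unfamiliar with the underlying arithmetic, the following is a generic software sketch of systematic Reed-Solomon encoding over GF(2^8) (illustrative only; the chip described above implements this arithmetic as cascaded symbol-slice hardware, and its exact field polynomial and code parameters are not given here):

```python
# Build GF(2^8) antilog/log tables (primitive polynomial 0x11d, a common choice)
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # avoid modular reduction in gf_mul

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator(nsym):
    """Generator polynomial: product of (x - alpha^i) for i = 0..nsym-1."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: append the remainder of msg * x^nsym
    divided by the generator polynomial."""
    gen = rs_generator(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):       # synthetic division (gen is monic)
        coef = rem[i]
        if coef:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]

codeword = rs_encode([0x40, 0xd2, 0x75], nsym=4)
```

Because remainder computation is linear over GF(2^8) addition (XOR), the parity of an XOR of two messages equals the XOR of their parities, which is a handy sanity check on any implementation.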
Project Atlas Field Definitions | NOAA Gulf Spill Restoration
Field definitions from the Project Atlas: "Project Title" is the project title as listed in the Final Early Restoration Plan and Environmental Assessment (FERP/EA); "General Information: Project Description" is a narrative description of the project. General
NASA's Planetary Data System: Support for the Delivery of Derived Data Sets at the Atmospheres Node
NASA Astrophysics Data System (ADS)
Chanover, Nancy J.; Beebe, Reta; Neakrase, Lynn; Huber, Lyle; Rees, Shannon; Hornung, Danae
2015-11-01
NASA’s Planetary Data System is charged with archiving electronic data products from NASA planetary missions that are sponsored by NASA’s Science Mission Directorate. This archive, currently organized by science disciplines, uses standards for describing and storing data that are designed to enable future scientists who are unfamiliar with the original experiments to analyze the data, and to do this using a variety of computer platforms, with no additional support. These standards address the data structure, description contents, and media design. The new requirement in the NASA ROSES-2015 Research Announcement to include a Data Management Plan will result in an increase in the number of derived data sets that are being delivered to the PDS. These data sets may come from the Planetary Data Archiving, Restoration and Tools (PDART) program, other Data Analysis Programs (DAPs) or be volunteered by individuals who are publishing the results of their analysis. In response to this increase, the PDS Atmospheres Node is developing a set of guidelines and user tools to make the process of archiving these derived data products more efficient. Here we provide a description of Atmospheres Node resources, including a letter of support for the proposal stage, a communication schedule for the planned archive effort, product label samples and templates in extensible markup language (XML), documentation templates, and validation tools necessary for producing a PDS4-compliant derived data bundle(s) efficiently and accurately.
The Planetary Data System Web Catalog Interface--Another Use of the Planetary Data System Data Model
NASA Technical Reports Server (NTRS)
Hughes, S.; Bernath, A.
1995-01-01
The Planetary Data System Data Model consists of a set of standardized descriptions of entities within the Planetary Science Community. These can be real entities in the space exploration domain, such as spacecraft, instruments, and targets; conceptual entities, such as data sets, archive volumes, and data dictionaries; or archive data products, such as individual images, spectra, series, and qubes.
Communication Encoding and Decoding in Children from Different Socioeconomic and Racial Groups.
ERIC Educational Resources Information Center
Quay, Lorene C.; And Others
Although lower socioeconomic status (SES) black children have been shown to be inferior to middle-SES white children in communication accuracy, whether the problem is in encoding (production), decoding (comprehension), or both is not clear. To evaluate encoding and decoding separately, tape recordings of picture descriptions were obtained from…
Semantic Modelling of Digital Forensic Evidence
NASA Astrophysics Data System (ADS)
Kahvedžić, Damir; Kechadi, Tahar
The reporting of digital investigation results is traditionally carried out in prose, and a large investigation may require successive communication of findings between different parties. Popular forensic suites aid the reporting process by storing provenance and positional data but do not automatically encode why the evidence is considered important. In this paper we introduce an evidence management methodology to encode the semantic information of evidence. A structured vocabulary of terms (an ontology) is used to model the results in a logical and predefined manner. The descriptions are application independent and automatically organised. The encoded descriptions aim to help the investigation in the tasks of report writing and evidence communication, and can be used alongside existing evidence management techniques.
Early Learnings from the National Library of New Zealand's National Digital Heritage Archive Project
ERIC Educational Resources Information Center
Knight, Steve
2010-01-01
Purpose: The purpose of this paper is to provide a brief description of the digital preservation programme at the National Library of New Zealand. Design/methodology/approach: Following a description of the legislative and strategic context for digital preservation in New Zealand, details are provided of the system for the National Digital…
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
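A minimal sketch of the kind of XML structure SED-ML describes (element names follow SED-ML's top-level classes, but the namespace, data generators, and several required attributes are omitted here for brevity, so this is not a schema-valid document):

```python
import xml.etree.ElementTree as ET

# Skeletal SED-ML-style document: which model, which simulation, which task
sed = ET.Element("sedML", level="1", version="2")
models = ET.SubElement(sed, "listOfModels")
ET.SubElement(models, "model", id="m1",
              language="urn:sedml:language:sbml", source="model.xml")
sims = ET.SubElement(sed, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="s1", initialTime="0",
              outputStartTime="0", outputEndTime="10", numberOfPoints="100")
tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", id="t1", modelReference="m1",
              simulationReference="s1")
doc = ET.tostring(sed, encoding="unicode")
```

The point of the format is exactly this separation: the model file (`model.xml`) stays untouched, while the experiment applied to it is described alongside it in machine-readable form.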
Semi-automated Data Set Submission Work Flow for Archival with the ORNL DAAC
NASA Astrophysics Data System (ADS)
Wright, D.; Beaty, T.; Cook, R. B.; Devarakonda, R.; Eby, P.; Heinz, S. L.; Hook, L. A.; McMurry, B. F.; Shanafield, H. A.; Sill, D.; Santhana Vannan, S.; Wei, Y.
2013-12-01
The ORNL DAAC archives and publishes, free of charge, data and information relevant to biogeochemical, ecological, and environmental processes. The ORNL DAAC primarily archives data produced by NASA's Terrestrial Ecology Program; however, any data that are pertinent to the biogeochemical and ecological community are of interest. The data set submission process to the ORNL DAAC has recently been updated and semi-automated to provide a consistent data provider experience and to create a uniform data product. The data archived at the ORNL DAAC must be well formatted, self-descriptive, and documented, as well as referenced in a peer-reviewed publication. If the ORNL DAAC is the appropriate archive for a data set, the data provider will be sent an email with several URL links to guide them through the submission process. The data provider will be asked to fill out a short online form to help the ORNL DAAC staff better understand the data set. These questions cover information about the data set, a description of the data set, temporal and spatial characteristics of the data set, and how the data were prepared and delivered. The questionnaire is generic and has been designed to gather input on the diverse data sets the ORNL DAAC archives. A data upload module and metadata editor further guide the data provider through the submission process. For submission purposes, a complete data set includes data files, document(s) describing the data, supplemental files, metadata record(s), and an online form. There are five major functions the ORNL DAAC performs during the process of archiving data: 1) Ingestion is the ORNL DAAC side of submission; data are checked, metadata records are compiled, and files are converted to archival formats. 2) Metadata records and data set documentation are made searchable, and the data set is given a permanent URL. 3) The data set is published, assigned a DOI, and advertised. 4) The data set receives long-term post-project support.
5) Stewardship of data ensures the data are stored on state of the art computer systems with reliable backups.
funRNA: a fungi-centered genomics platform for genes encoding key components of RNAi.
Choi, Jaeyoung; Kim, Ki-Tae; Jeon, Jongbum; Wu, Jiayao; Song, Hyeunjeong; Asiegbu, Fred O; Lee, Yong-Hwan
2014-01-01
RNA interference (RNAi) is involved in genome defense as well as diverse cellular, developmental, and physiological processes. Key components of RNAi are Argonaute, Dicer, and RNA-dependent RNA polymerase (RdRP), which have been functionally characterized mainly in model organisms. The key components are believed to exist throughout eukaryotes; however, there is no systematic platform for archiving and dissecting these important gene families. In addition, few fungi have been studied to date, limiting our understanding of RNAi in fungi. Here we present funRNA http://funrna.riceblast.snu.ac.kr/, a fungal kingdom-wide comparative genomics platform for putative genes encoding Argonaute, Dicer, and RdRP. To identify and archive genes encoding the abovementioned key components, protein domain profiles were determined from reference sequences obtained from UniProtKB/SwissProt. The domain profiles were searched using fungal, metazoan, and plant genomes, as well as bacterial and archaeal genomes. 1,163, 442, and 678 genes encoding Argonaute, Dicer, and RdRP, respectively, were predicted. Based on the identification results, active site variation of Argonaute, diversification of Dicer, and sequence analysis of RdRP were discussed in a fungus-oriented manner. funRNA provides results from diverse bioinformatics programs and job submission forms for BLAST, BLASTMatrix, and ClustalW. Furthermore, sequence collections created in funRNA are synced with several gene family analysis portals and databases, offering further analysis opportunities. funRNA provides identification results from a broad taxonomic range and diverse analysis functions, and could be used in diverse comparative and evolutionary studies. It could serve as a versatile genomics workbench for key components of RNAi.
SPASE: The Connection Among Solar and Space Physics Data Centers
NASA Technical Reports Server (NTRS)
Thieman, James R.; King, Todd A.; Roberts, D. Aaron
2011-01-01
The Space Physics Archive Search and Extract (SPASE) project is an international collaboration among Heliophysics (solar and space physics) groups concerned with data acquisition and archiving. Within this community there are a variety of old and new data centers, resident archives, "virtual observatories", etc. acquiring, holding, and distributing data. A researcher interested in finding data of value for his or her study faces a complex data environment. The SPASE group has simplified the search for data through the development of the SPASE Data Model as a common method to describe data sets in the various archives. The data model is an XML-based schema and is now in operational use. There are both positives and negatives to this approach. The advantage is the common metadata language enabling wide-ranging searches across the archives, but it is difficult to inspire the data holders to spend the time necessary to describe their data using the Model. Software tools have helped, but the main motivational factor is wide-ranging use of the standard by the community. The use is expanding, but there are still other groups who could benefit from adopting SPASE. The SPASE Data Model is also being expanded in the sense of providing the means for more detailed description of data sets with the aim of enabling more automated ingestion and use of the data through detailed format descriptions. We will discuss the present state of SPASE usage and how we foresee development in the future. The evolution is based on a number of lessons learned - some unique to Heliophysics, but many common to the various data disciplines.
NASA Technical Reports Server (NTRS)
Kuo, Kwo-Sen; Rilee, Michael Lee
2017-01-01
Current data processing practice limits the volume and variety of relevant geoscience data that can practically be applied to important problems. File archives in centralized data centers are the principal means by which Earth Science data are accessed. This approach, however, requires laborious search, retrieval, and eventual customization/adaptation for the data to be used. Such fractionation makes it even more difficult to share outcomes, i.e. research artifacts and data products, hampering reusability and repeatability, since end users generally have their own research agenda and preferences as well as scarce resources. Thus, while finding and downloading data files from central data centers are already costly for end users working in their own field, using data products from other disciplines rapidly becomes prohibitive. This curtails scientific productivity, limits avenues of study, and endangers quality and reproducibility. The Spatio-Temporal Adaptive Resolution Encoding (STARE) is a unifying scheme that facilitates the indexing, access, and fusion of diverse Earth Science data. STARE implements an innovative encoding of geo-spatiotemporal information, originally developed for aligning datasets with diverse spatiotemporal characteristics in an array database. The spatial component of STARE recursively quadfurcates a root polyhedron, producing a hierarchical scheme for addressing geographic locations and regions. The temporal component of STARE uses conventional date-time units as an indexing hierarchy. The additional encoding of spatial and temporal resolution information in STARE enables comparisons and conditional selections across diverse datasets. Moreover, spatiotemporal set-operations, e.g. union and intersection, are mapped to efficient integer operations with STARE. Applied to existing data models (point, grid, spacecraft swath) and corresponding granules, STARE indexes provide a streamlined description usable as geo-spatiotemporal metadata. 
When coupled with large scale, distributed hardware and software, STARE-based data access reduces pre-analysis data preparation costs by offering a convenient means to align different datasets spatiotemporally without specialized effort in parallel computing or distributed data management.
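The recursive quadfurcation described above can be illustrated with a toy integer index. This is a simplified stand-in, not the actual STARE library or its API: the `encode`/`contains` functions and the flat lat/lon bounding boxes are assumptions for illustration (real STARE subdivides the faces of a root polyhedron), but they show how a prefix-coded integer captures both location and resolution, and how containment reduces to integer operations.

```python
# Toy hierarchical spatio-spatial index: each subdivision level appends
# 2 bits naming a quadrant, so an index is an integer prefix code whose
# bit length encodes its resolution.

def encode(lat, lon, levels):
    """Encode a point as (bits, levels): 2 bits per subdivision level."""
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    bits = 0
    for _ in range(levels):
        latm, lonm = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        q = (lat >= latm) << 1 | (lon >= lonm)   # quadrant number 0..3
        bits = bits << 2 | q
        lat0, lat1 = (latm, lat1) if lat >= latm else (lat0, latm)
        lon0, lon1 = (lonm, lon1) if lon >= lonm else (lon0, lonm)
    return bits, levels

def contains(region, point):
    """A coarse index contains a finer one iff it is a bit-prefix of it."""
    rbits, rlev = region
    pbits, plev = point
    if rlev > plev:
        return False
    return pbits >> 2 * (plev - rlev) == rbits
```

Set operations such as intersection then become comparisons on sorted integer ranges, which is what makes the scheme cheap to evaluate inside an array database.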
NASA Astrophysics Data System (ADS)
Kuo, K. S.; Rilee, M. L.
2017-12-01
Current data processing practice limits the volume and variety of relevant geoscience data that can practically be applied to important problems. File archives in centralized data centers are the principal means by which Earth Science data are accessed. This approach, however, requires laborious search, retrieval, and eventual customization/adaptation for the data to be used. Such fractionation makes it even more difficult to share outcomes, i.e. research artifacts and data products, hampering reusability and repeatability, since end users generally have their own research agenda and preferences as well as scarce resources. Thus, while finding and downloading data files from central data centers are already costly for end users working in their own field, using data products from other disciplines rapidly becomes prohibitive. This curtails scientific productivity, limits avenues of study, and endangers quality and reproducibility. The Spatio-Temporal Adaptive Resolution Encoding (STARE) is a unifying scheme that facilitates the indexing, access, and fusion of diverse Earth Science data. STARE implements an innovative encoding of geo-spatiotemporal information, originally developed for aligning datasets with diverse spatiotemporal characteristics in an array database. The spatial component of STARE recursively quadfurcates a root polyhedron, producing a hierarchical scheme for addressing geographic locations and regions. The temporal component of STARE uses conventional date-time units as an indexing hierarchy. The additional encoding of spatial and temporal resolution information in STARE enables comparisons and conditional selections across diverse datasets. Moreover, spatiotemporal set-operations, e.g. union and intersection, are mapped to efficient integer operations with STARE. Applied to existing data models (point, grid, spacecraft swath) and corresponding granules, STARE indexes provide a streamlined description usable as geo-spatiotemporal metadata. 
When coupled with large scale, distributed hardware and software, STARE-based data access reduces pre-analysis data preparation costs by offering a convenient means to align different datasets spatiotemporally without specialized effort in parallel computing or distributed data management.
Fallon, Nevada FORGE Distinct Element Reservoir Modeling
Blankenship, Doug; Pettitt, Will; Riahi, Azadeh; Hazzard, Jim; Blanksma, Derrick
2018-03-12
Archive containing input/output data for distinct element reservoir modeling for Fallon FORGE. Models created using 3DEC, InSite, and in-house Python algorithms (ITASCA). List of archived files follows; please see 'Modeling Metadata.pdf' (included as a resource below) for additional file descriptions. Data sources include regional geochemical model, well positions and geometry, principal stress field, capability for hydraulic fractures, capability for hydro-shearing, reservoir geomechanical model-stimulation into multiple zones, modeled thermal behavior during circulation, and microseismicity.
ERIC Educational Resources Information Center
Levy, David M.; Huttenlocher, Dan; Moll, Angela; Smith, MacKenzie; Hodge, Gail M.; Chandler, Adam; Foley, Dan; Hafez, Alaaeldin M.; Redalen, Aaron; Miller, Naomi
2000-01-01
Includes six articles focusing on the purpose of digital public libraries; encoding electronic documents through compression techniques; a distributed finding aid server; digital archiving practices in the framework of information life cycle management; converting metadata into MARC format and Dublin Core formats; and evaluating Web sites through…
COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.
Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas
2014-12-14
With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
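The ZIP-plus-manifest layout described above can be sketched in a few lines. This is a minimal illustration rather than a complete OMEX implementation: the file names are hypothetical, and the format URIs shown follow the identifiers.org convention the specification uses, hedged here as typical examples.

```python
import zipfile

# Minimal sketch of a COMBINE Archive (OMEX): a ZIP container whose
# manifest.xml lists every entry together with its declared format.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="." format="http://identifiers.org/combine.specifications/omex"/>
  <content location="./model.xml"
           format="http://identifiers.org/combine.specifications/sbml"/>
</omexManifest>
"""

def write_archive(path, model_xml):
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("manifest.xml", MANIFEST)   # required manifest entry
        z.writestr("model.xml", model_xml)     # the model being shared

write_archive("example.omex", "<sbml/>")
```

Because the container is plain ZIP, any consuming tool can open it with standard libraries and then dispatch each entry to the right parser based on the declared format.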
NASA participation in the 1980 PEPE/NEROS project: Data archive
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Remsberg, E. E.; Loar, G. R.; Bendura, R. J.
1982-01-01
Eight experimental air quality measurement systems were investigated during July and August 1980 as part of the EPA PEPE/NEROS field measurement program. Data from those efforts have been entered into an archive that may be accessed by other researchers. The data sets consist of airborne measurements of regional mixed layer heights and aerosol and ozone distributions, as well as point measurements of meteorological parameters and ozone obtained during diurnal transitions in the planetary boundary layer. This report gives a discussion of each measurement system, a preliminary assessment of data quality, a description of the archive format for each data set, and a summary of several proposed scientific studies which will utilize these data.
GORGONA - the characteristic of the software system.
NASA Astrophysics Data System (ADS)
Artim, M.; Zejda, M.
A description of the new software system is given. The GORGONA system was established for the processing, creation, and administration of archives of periodic variable star observations, observers, and observed variable stars.
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
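The similarity-feedback search idea can be sketched as a toy genetic algorithm. Everything here is an illustrative assumption, not the FACES implementation: the four-element feature vectors, the hidden target standing in for the witness's memory, and the GA parameters are all made up to show the mechanism (the witness never verbalizes a description; they only score similarity).

```python
import random

# Toy genetic search over face feature vectors. The "witness" is
# simulated by distance to a hidden target vector; in the real setting
# the similarity score would come from the witness's judgments.

TARGET = [0.2, 0.9, 0.5, 0.1]           # hidden: the remembered face

def similarity(face):                    # stand-in for the witness's rating
    return -sum((a - b) ** 2 for a, b in zip(face, TARGET))

def evolve(pop, generations=60, rng=random.Random(1)):
    for _ in range(generations):
        pop.sort(key=similarity, reverse=True)
        parents = pop[: len(pop) // 2]           # keep the best half
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]            # crossover
            i = rng.randrange(len(child))
            child[i] += rng.gauss(0, 0.05)       # small mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=similarity)

rng0 = random.Random(0)
pop = [[rng0.random() for _ in range(4)] for _ in range(20)]
best = evolve(pop)
```

Because the top half of each generation is carried over unchanged, the best candidate shown to the witness can never get worse from one generation to the next.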
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content such as face recognition, motion activity, speech recognition, and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames, and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end users will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Kinjo, Akira R.; Suzuki, Hirofumi; Yamashita, Reiko; Ikegawa, Yasuyo; Kudou, Takahiro; Igarashi, Reiko; Kengaku, Yumiko; Cho, Hasumi; Standley, Daron M.; Nakagawa, Atsushi; Nakamura, Haruki
2012-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org) is a member of the worldwide Protein Data Bank (wwPDB) and accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins, which are summarized in this article. To enhance the interoperability of the PDB data, we have recently developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, along with its ontology in the Web Ontology Language (OWL) based on the PDB mmCIF Exchange Dictionary. Being in the standard format for the Semantic Web, the PDB/RDF data provide a means to integrate the PDB with other biological information resources. PMID:21976737
A Complete Public Archive for the Einstein Imaging Proportional Counter
NASA Technical Reports Server (NTRS)
Helfand, David J.
1996-01-01
Consistent with our proposal to the Astrophysics Data Program in 1992, we have completed the design, construction, documentation, and distribution of a flexible and complete archive of the data collected by the Einstein Imaging Proportional Counter. Along with software and data delivered to the High Energy Astrophysics Science Archive Research Center at Goddard Space Flight Center, we have compiled and, where appropriate, published catalogs of point sources, soft sources, hard sources, extended sources, and transient flares detected in the database along with extensive analyses of the instrument's backgrounds and other anomalies. We include in this document a brief summary of the archive's functionality, a description of the scientific catalogs and other results, a bibliography of publications supported in whole or in part under this contract, and a list of personnel whose pre- and post-doctoral education consisted in part in participation in this project.
Image acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Reardon, Frank J.; Salutz, James R.
1991-07-01
The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate to the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.
NASA Technical Reports Server (NTRS)
Reardon, John E.; Violett, Duane L., Jr.
1991-01-01
The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.
The SPASE Data Model for Heliophysics Data: Is it Working?
NASA Technical Reports Server (NTRS)
Thieman, James; King, Todd; Roberts, Aaron
2011-01-01
The Space Physics Archive Search and Extract (SPASE) Data Model was developed to provide a metadata standard for describing Heliophysics (Space and Solar Physics) data within that science discipline. The SPASE Data Model has matured over the many years of its creation and is presently represented by Version 2.2.1. Information about SPASE can be obtained from the website spase-group.org. The Data Model defines terms and values, as well as the relationships between them, in order to describe the data resources in the Heliophysics data environment. This data environment is quite complex, consisting of Virtual Observatories, Resident Archives, Data Providers, Partnering Data Centers, Services, Final Archives, and a Deep Archive. SPASE is the metadata language standard intended to permeate the complexity and provide a common method of obtaining and understanding data. Is it working in this capacity? SPASE has been used to describe a wide range of data. Examples range from ground-based magnetometer data to interplanetary satellite measurements to space weather model results. Has it achieved the goal of making the data easier to find and use? To find data of interest it is necessary that all the data of importance be described using the SPASE Data Model. Within the part of the data community associated with NASA (supported through NASA funding) there are obligations to use SPASE and to describe the old and new data using the SPASE XML schema. Although this part of the community is not near 100% compliance with the mandate, good progress is being made and the goal should be reachable in the future. Outside of the NASA data community there is still work to be done to convince the international community that SPASE descriptions are worth the cost of their generation. Some of these groups, such as Cluster, HELIO, GAIA, NOAA/NGDC, CSSDP, VSTO, SuperMAG, and IUGONET, have agreed to use SPASE, but there are still other groups of importance that need to be reached.
It is also assumed that the terminology is sufficiently broad and the descriptions are sufficiently complete that researchers needing data of a specific type or from a specific period can find and acquire what they need. A valid SPASE description can be very brief or very thorough depending on the willingness of the author to spend the time necessary to make the description useful. There is evidence that users are finding what they need through the SPASE descriptions, and this standard is a big step forward in Heliophysics data location. Does SPASE make it easier to use the data once they are found? Thorough descriptions of data using SPASE can describe the data down to the level of individual parameters and exactly how the data are organized and stored. Should the SPASE data descriptions be written in such a way that they can be automatically ingested and understood by software tools? Heliophysics instruments are becoming more versatile all the time, and the complexity of the data makes it tedious and time consuming to write SPASE descriptions with this level of sophistication, even with the improvement of the tools used to generate the descriptions. Is it better to just write human-readable descriptions of the data at the parameter level, or to refer to references that provide this information? This is a debate that is presently taking place, and software is being developed to test what is possible.
Using the Stereotype Content Model to examine group depictions in Fascism: An Archival Approach.
Durante, Federica; Volpato, Chiara; Fiske, Susan T
2010-04-01
The Stereotype Content Model (SCM) suggests potentially universal intergroup depictions. If universal, they should apply across history in archival data. Bridging this gap, we examined descriptions of social groups during Italy's Fascist era. In Study 1, articles published in a Fascist magazine, La Difesa della Razza, were content analyzed, and the results submitted to correspondence analysis. Admiration prejudice depicted ingroups; envious and contemptuous prejudices depicted specific outgroups, generally in line with SCM predictions. No paternalistic prejudice appeared; historical reasons might explain this finding. Results also fit the recently developed BIAS Map of behavioral consequences. In Study 2, ninety-six undergraduates rated the content-analysis traits on warmth and competence, without knowing their origin. They corroborated SCM's interpretations of the archival data.
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
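The wavelet/scalar-quantization pipeline can be illustrated with a toy one-level Haar transform and a uniform quantizer. This is a sketch of the general technique only, not the FBI WSQ codec: the real standard uses a specific multi-level subband decomposition, per-subband step sizes, and entropy coding that are not shown here.

```python
import numpy as np

# Toy wavelet/scalar quantization: transform, uniformly quantize the
# subband coefficients, then invert to get a lossy reconstruction.

def haar_1level(x):
    """One-level 2-D Haar transform of an even-sized grayscale array."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0          # horizontal average
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0          # horizontal detail
    ll, lh = (a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0
    hl, hh = (d[0::2] + d[1::2]) / 2.0, (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def inv_haar_1level(ll, lh, hl, hh):
    a = np.empty((ll.shape[0] * 2, ll.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0], a.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = a + d, a - d
    return x

def quantize(c, step):                            # uniform scalar quantizer
    return np.round(c / step).astype(int)

def dequantize(q, step):
    return q * step

img = np.arange(64, dtype=float).reshape(8, 8)
step = 2.0
bands = [dequantize(quantize(b, step), step) for b in haar_1level(img)]
rec = inv_haar_1level(*bands)
```

Each dequantized coefficient is within step/2 of the original, so the per-pixel reconstruction error is bounded; the small integer codes are what an entropy coder would then compress.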
Huh, Sun
2013-01-01
ScienceCentral, a free or open-access, full-text archive of scientific journal literature at the Korean Federation of Science and Technology Societies, was under test in September 2013. Since it is a Journal Article Tag Suite-based full-text database, extensible markup language files in all languages can be presented, according to Unicode Transformation Format 8-bit (UTF-8) encoding. It is comparable to PubMed Central; however, there are two distinct differences: first, its scope comprises all science fields; second, it accepts journals in all languages. Launching ScienceCentral is the first step for free-access or open-access academic scientific journals of all languages to reach the world, including scientific journals from Croatia.
Motion Imagery and Robotics Application Project (MIRA)
NASA Technical Reports Server (NTRS)
Grubbs, Rodney P.
2010-01-01
This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications are presented. A description of a candidate camera under development is also shown.
NASA Technical Reports Server (NTRS)
Andres, Vince; Walter, David; Hallal, Charles; Jones, Helene; Callac, Chris
2004-01-01
The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now, the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken digitally and with a traditional film-based camera, along with metadata about each image. After a "shoot," a photographer downloads the images into the database. Members of the PAO can use a Web-based application to search, view, and retrieve images; approve images for publication; and view and edit metadata associated with the images. Approved images are archived and cross-referenced with appropriate descriptions and information. Security is provided by allowing administrators to grant personnel access only to the components of the system they need (e.g., only photographers may upload images, and only designated PAO employees may approve them).
2008-03-01
capture archived encoded data. A black basalt slab with strange inscriptions on it, the Rosetta Stone was unearthed in July 1799 by Napoleon's army...the bitstream. When you start thinking about 1s and 0s and when you start thinking about fiber optics, it's just either on or off. So there is a
LANDSAT-D data format control book. Volume 6, appendix G: GSFC HDT-AM inventory tape (GHIT-AM)
NASA Technical Reports Server (NTRS)
1981-01-01
The data format specifications of the Goddard HDT inventory tapes (GHITS), which accompany shipments of archival digital multispectral scanner image data (HDT-AM tapes), are defined. The GHIT is a nine-track, 1600-BPI tape which conforms to the ANSI standard and serves as an inventory and description of the image data included in the shipment. The archival MSS tapes (HDT-AMs) contain radiometrically corrected but geometrically uncorrected image data plus certain ancillary data necessary to perform the geometric corrections.
Twistor encoding of Lienard--Wiechert fields in Minkowski space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, J.R.
1985-03-01
The twistor encoding of the anti-self-dual Lienard--Wiechert field on Minkowski space-time yields a considerably richer structure than that of the Coulomb field encoding due to the presence of a nonzero radiation field. The combination of advanced and retarded transverse fields together with the longitudinal field and the individual aspects of these fields provides this structure. Higher-order longitudinal moments can be incorporated so that general longitudinal fields can be given a twistor description.
Using the Stereotype Content Model to examine group depictions in Fascism: An Archival Approach
Durante, Federica; Volpato, Chiara; Fiske, Susan T.
2013-01-01
The Stereotype Content Model (SCM) suggests potentially universal intergroup depictions. If universal, they should apply across history in archival data. Bridging this gap, we examined social groups descriptions during Italy’s Fascist era. In Study 1, articles published in a Fascist magazine— La Difesa della Razza —were content analyzed, and results submitted to correspondence analysis. Admiration prejudice depicted ingroups; envious and contemptuous prejudices depicted specific outgroups, generally in line with SCM predictions. No paternalistic prejudice appeared; historical reasons might explain this finding. Results also fit the recently developed BIAS Map of behavioral consequences. In Study 2, ninety-six undergraduates rated the content-analysis traits on warmth and competence, without knowing their origin. They corroborated SCM’s interpretations of the archival data. PMID:24403646
Models of optical quantum computing
NASA Astrophysics Data System (ADS)
Krovi, Hari
2017-03-01
I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.
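Dual-rail encoding, one of the encodings surveyed above, can be sketched numerically. This is a minimal single-photon, two-mode illustration under standard conventions, not code from the review: the qubit lives in which of two optical rails the photon occupies, and a 50/50 beamsplitter acts as a Hadamard-like single-qubit gate.

```python
import numpy as np

# Dual-rail encoding: one photon, two modes (rails).
# |0> = photon in rail A, |1> = photon in rail B.

ket0 = np.array([1.0, 0.0])                      # photon in rail A
ket1 = np.array([0.0, 1.0])                      # photon in rail B

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # 50/50 beamsplitter unitary

plus = H @ ket0                                  # equal superposition of rails
probs = np.abs(plus) ** 2                        # detection probability per rail
```

Measuring which rail the photon exits gives each outcome with probability 1/2, and applying the beamsplitter twice returns the photon to its original rail, reflecting the unitarity of the gate.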
NASA Astrophysics Data System (ADS)
Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep; Raghuwanshi, Sanjeev K.
2016-04-01
An encoder is a device that maps digital information from many input lines onto fewer output lines. Many combinational logic functions can be implemented using an encoder and external gates. In this paper, a 4-to-2 line encoder is proposed using the electro-optic effect inside lithium-niobate based Mach-Zehnder interferometers (MZIs). MZI structures have a powerful capability to switch an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device, followed by simulation using MATLAB. The study is verified using the beam propagation method (BPM).
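The truth table that the optical MZI design realizes can be sketched at the logic level. This is a hypothetical helper for illustration, not the paper's MATLAB/BPM model: with exactly one of the four input lines active, the two outputs give the binary index of that line.

```python
# Logic-level 4-to-2 line encoder: exactly one of d0..d3 is high,
# and (y1, y0) is the binary index of the active line.

def encoder_4to2(d0, d1, d2, d3):
    assert d0 + d1 + d2 + d3 == 1, "exactly one input line must be active"
    y1 = d2 | d3          # high when input 2 or 3 is active
    y0 = d1 | d3          # high when input 1 or 3 is active
    return y1, y0

# Print the full truth table: input line k -> binary code of k.
for k in range(4):
    lines = [int(i == k) for i in range(4)]
    print(lines, "->", encoder_4to2(*lines))
```

In the optical version, each OR term corresponds to routing light through MZI switches so that the signal reaches the output port encoding the active input.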
ERIC Educational Resources Information Center
Shaughnessy, Michael F.; Cockrell, Kelly
Two experiments examining the "distinctiveness of encoding" hypothesis are reported. The hypothesis suggests that specific forms of processing of events may result in the formation of more exact perceptual descriptions and thus more distinctive records in memory. The two experiments reported address shortcomings in previous research on…
NASA Astrophysics Data System (ADS)
Knapic, C.; Zanichelli, A.; Dovgan, E.; Nanni, M.; Stagni, M.; Righini, S.; Sponza, M.; Bedosti, F.; Orlati, A.; Smareglia, R.
2016-07-01
Radio astronomical data models are becoming very complex, owing to the huge range of instrumental configurations available with modern radio telescopes. What were in the past the last frontiers of data formats, in terms of efficiency and flexibility, are now evolving with new strategies and methodologies that enable the persistence of very complex, hierarchical, and multi-purpose information. Such an evolution of data models and data formats requires new data archiving techniques in order to guarantee data preservation, following the directives of the Open Archival Information System and the International Virtual Observatory Alliance for data sharing and publication. Currently, various formats (FITS, MBFITS, VLBI XML description files, and ancillary files) of data acquired with the Medicina and Noto radio telescopes can be stored and handled by a common Radio Archive, which is planned to be released to the (inter)national community by the end of 2016. This state-of-the-art archiving system for radio astronomical data aims at delegating as much as possible to the software the decisions of how and where the descriptors (metadata) are saved, while users perform user-friendly queries that the web interface translates into complex interrogations of the database to retrieve data. In this way, the Archive is ready to be Virtual Observatory compliant and as user-friendly as possible.
A description of sexual offending committed by Canadian teachers.
Moulden, Heather M; Firestone, Philip; Kingston, Drew A; Wexler, Audrey F
2010-07-01
The aim of this investigation was to describe teachers who sexually offend against youth and the circumstances related to these offenses. Archival Violent Crime Linkage Analysis System reports were obtained from the Royal Canadian Mounted Police, and demographic and criminal characteristics for the offender, as well as information about the victim and offense, were selected for analyses. A descriptive approach was used to analyze the qualitative reports for a group of 113 Canadian sexual offenders between 1995 and 2002. The results provide a description of adult male teachers who offended within their position of trust as well as offense and victim characteristics.
Huh, Sun
2013-01-01
ScienceCentral, a free, open-access, full-text archive of scientific journal literature at the Korean Federation of Science and Technology Societies, was under test in September 2013. Since it is a Journal Article Tag Suite-based full-text database, Extensible Markup Language (XML) files in any language can be presented, thanks to Unicode Transformation Format 8-bit (UTF-8) encoding. It is comparable to PubMed Central; however, there are two distinct differences. First, its scope comprises all science fields; second, it accepts journals in all languages. Launching ScienceCentral is the first step for free-access or open-access academic scientific journals of all languages, including scientific journals from Croatia, to reach the world. PMID:24266292
Handbook of sensor technical characteristics
NASA Astrophysics Data System (ADS)
Tanner, S.
1982-07-01
Space and terrestrial applications remote sensor systems are described. Each sensor is presented separately. Information is included on its objectives, description, technical characteristics, data products obtained, data archives location, period of operation, and measurement and potential derived parameters. Each sensor is cross indexed.
History Sources on the Internet.
ERIC Educational Resources Information Center
Fink, Kenneth D.
This paper provides descriptions of key online history resources useful to teachers, librarians, and other education professionals. Highlights include: primary sources on the Internet; archives; Online Public Access Catalogs (OPACs); the American Historical Association (AHA) Web site; state and federal government resources; business history…
Handbook of sensor technical characteristics
NASA Technical Reports Server (NTRS)
Tanner, S.
1982-01-01
Space and terrestrial applications remote sensor systems are described. Each sensor is presented separately. Information is included on its objectives, description, technical characteristics, data products obtained, data archives location, period of operation, and measurement and potential derived parameters. Each sensor is cross indexed.
The PDS4 Information Model and its Role in Agile Science Data Curation
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Crichton, D.
2017-12-01
PDS4 is an information-model-driven service architecture supporting the capture, management, distribution and integration of massive planetary science data held in distributed data archives world-wide. The PDS4 Information Model (IM), the core element of the architecture, was developed using lessons learned from 20 years of archiving planetary science data and best practices for information model development. The foundational principles were adopted from the Open Archival Information System (OAIS) Reference Model (ISO 14721), the Metadata Registry Specification (ISO/IEC 11179), and W3C XML (Extensible Markup Language) specifications. These provided, respectively, an object-oriented model for archive information systems, a comprehensive schema for data dictionaries and hierarchical governance, and rules for encoding documents electronically. The PDS4 Information Model is unique in that it drives the PDS4 infrastructure by providing the representation of concepts and their relationships, constraints, rules, and operations; a sharable, stable, and organized set of information requirements; and machine-parsable definitions that are suitable for configuring and generating code. This presentation will provide an overview of the PDS4 Information Model and how it is being leveraged to develop and evolve the PDS4 infrastructure and enable agile curation of over 30 years of science data collected by the international planetary science community.
Principles of metadata organization at the ENCODE data coordination center
Hong, Eurie L.; Sloan, Cricket A.; Chan, Esther T.; Davidson, Jean M.; Malladi, Venkat S.; Strattan, J. Seth; Hitz, Benjamin C.; Gabdank, Idan; Narayanan, Aditi K.; Ho, Marcus; Lee, Brian T.; Rowe, Laurence D.; Dreszer, Timothy R.; Roe, Greg R.; Podduturi, Nikhil R.; Tanaka, Forrest; Hilton, Jason A.; Cherry, J. Michael
2016-01-01
The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org PMID:26980513
Restoration of Apollo Data by the Lunar Data Project/PDS Lunar Data Node: An Update
NASA Technical Reports Server (NTRS)
Williams, David R.; Hills, H. Kent; Taylor, Patrick T.; Grayzeck, Edwin J.; Guinness, Edward A.
2016-01-01
The Apollo 11, 12, and 14 through 17 missions orbited and landed on the Moon, carrying scientific instruments that returned data from all phases of the missions, including the long-lived Apollo Lunar Surface Experiments Packages (ALSEPs) deployed by the astronauts on the lunar surface. Much of these data were never archived, and some of the archived data were on media and in formats that are outmoded, or were deposited with little or no useful documentation to aid outside users. This is particularly true of the ALSEP data returned autonomously for many years after the Apollo missions ended. The purpose of the Lunar Data Project and the Planetary Data System (PDS) Lunar Data Node is to take data collections already archived at the NASA Space Science Data Coordinated Archive (NSSDCA) and prepare them for archiving through PDS, and to locate lunar data that were never archived, bring them into NSSDCA, and then archive them through PDS. Preparing these data for archiving involves reading the data from the original media, be it magnetic tape, microfilm, microfiche, or hard-copy document, and converting the outmoded, often binary, formats when necessary into a standard digital form accepted by PDS. It also involves collecting the necessary ancillary data and documentation (metadata) to ensure that the data are usable and well described, summarizing the metadata in documentation to be included in the data set, adding other information such as references, mission and instrument descriptions, contact information, and related documentation, and packaging the results in a PDS-compliant data set. The data set is then validated and reviewed by a group of external scientists as part of the PDS final archive process. We present a status report on some of the data sets that we are processing.
Tool for Constructing Data Albums for Significant Weather Events
NASA Astrophysics Data System (ADS)
Kulkarni, A.; Ramachandran, R.; Conover, H.; McEniry, M.; Goodman, H.; Zavodsky, B. T.; Braun, S. A.; Wilson, B. D.
2012-12-01
Case study analysis and climatology studies are common approaches used in atmospheric science research. Research based on case studies involves a detailed description of specific weather events, using data from different sources to characterize the physical processes in play for a given event. Climatology-based research tends to focus on the representativeness of a given event by studying the characteristics and distribution of a large number of events. Gathering relevant data and information for case studies and climatology analysis is tedious and time consuming; current Earth science data systems are not suited to assembling multi-instrument, multi-mission datasets around specific events. For example, in hurricane science, finding airborne or satellite data relevant to a given storm requires searching through web pages and data archives. Background information related to damages, deaths, and injuries requires extensive online searches for news reports and official storm summaries. We will present a knowledge synthesis engine to create curated "Data Albums" to support case study analysis and climatology studies. The technological challenges in building such a reusable and scalable knowledge synthesis engine are several. First, how can domain knowledge be encoded in a machine-usable form? This knowledge must capture which information and data resources are relevant and the semantic relationships between the various fragments of information and data. Second, how can semantic information be extracted from heterogeneous sources, including unstructured texts, using the encoded knowledge? Finally, how can a structured database be designed from the encoded knowledge to store all information and to support querying? The structured database must allow both knowledge overviews of an event as well as the drill-down capability needed for detailed analysis. An application-ontology-driven framework is being used to design the knowledge synthesis engine.
The knowledge synthesis engine is being applied to build a portal for hurricane case studies at the Global Hydrology and Resource Center (GHRC), a NASA Data Center. This portal will auto-generate Data Albums for specific hurricane events, compiling information from distributed resources such as NASA field campaign collections, relevant data sets, storm reports, pictures, videos and other useful sources.
ERIC Educational Resources Information Center
Athanasopoulos, Panos; Bylund, Emanuel
2013-01-01
In this article, we explore whether cross-linguistic differences in grammatical aspect encoding may give rise to differences in memory and cognition. We compared native speakers of two languages that encode aspect differently (English and Swedish) in four tasks that examined verbal descriptions of stimuli, online triads matching, and memory-based…
ERIC Educational Resources Information Center
Foley, Mary Ann; Fried, Adina Rachel; Cowan, Emily; Bays, Rebecca Brooke
2014-01-01
In 2 experiments, the effect of collaborative encoding on memory was examined by testing 2 interactive components of co-construction processes. One component focused on the nature of the interactive exchange between collaborators: As the partners worked together to create descriptions about ways to interact with familiar objects, constraints were…
An Introduction to the Resource Description Framework.
ERIC Educational Resources Information Center
Miller, Eric
1998-01-01
Explains the Resource Description Framework (RDF), an infrastructure developed under the World Wide Web Consortium that enables the encoding, exchange, and reuse of structured metadata. It is an application of Extensible Markup Language (XML), which is a subset of Standard Generalized Markup Language (SGML), and helps with expressing semantics.…
Ames Life Science Data Archive: Translational Rodent Research at Ames
NASA Technical Reports Server (NTRS)
Wood, Alan E.; French, Alison J.; Ngaotheppitak, Ratana; Leung, Dorothy M.; Vargas, Roxana S.; Maese, Chris; Stewart, Helen
2014-01-01
The Life Science Data Archive (LSDA) office at Ames is responsible for collecting, curating, distributing and maintaining information pertaining to animal and plant experiments conducted in low Earth orbit aboard various space vehicles from 1965 to the present. The LSDA will soon be archiving data and tissue samples collected on the next generation of commercial vehicles, e.g., the SpaceX and Cygnus commercial cargo craft. To date, over 375 rodent flight experiments with translational application have been archived by the Ames LSDA office. This knowledge base of fundamental research can be used to understand mechanisms that affect higher organisms in microgravity and help define additional research whose results could lead the way to closing gaps identified by the Human Research Program (HRP). This poster will highlight Ames' contribution to the existing knowledge base and how the LSDA can be a resource to help answer the questions surrounding human health in long-duration space exploration. In addition, it will illustrate how this body of knowledge was utilized to further our understanding of how space flight affects the human system and the ability to develop countermeasures that negate the deleterious effects of space flight. The Ames Life Sciences Data Archive (ALSDA) includes current descriptions of over 700 experiments conducted aboard the Shuttle, International Space Station (ISS), NASA/Mir, Bion/Cosmos, Gemini, Biosatellites, Apollo, Skylab, and Russian Foton missions, as well as ground bed rest studies. Research areas cover Behavior and Performance, Bone and Calcium Physiology, Cardiovascular Physiology, Cell and Molecular Biology, Chronobiology, Developmental Biology, Endocrinology, Environmental Monitoring, Gastrointestinal Physiology, Hematology, Immunology, Life Support System, Metabolism and Nutrition, Microbiology, Muscle Physiology, Neurophysiology, Pharmacology, Plant Biology, Pulmonary Physiology, Radiation Biology, Renal, Fluid and Electrolyte Physiology, and Toxicology.
These experiment descriptions and data can be accessed online via the public LSDA website (http://lsda.jsc.nasa.gov) and information can be requested via the Data Request form at http://lsda.jsc.nasa.gov/common/dataRequest/dataRequest.aspx or by contacting the ALSDA Office at: Alison.J.French@nasa.gov
National Weather- RFC Development Management
The RFC Development Management component of the Office of
MXA: a customizable HDF5-based data format for multi-dimensional data sets
NASA Astrophysics Data System (ADS)
Jackson, M.; Simmons, J. P.; De Graef, M.
2010-09-01
A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
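The XML-plus-DTD description of a data model can be illustrated with a small sketch; the element and attribute names below are hypothetical, not the actual MXA schema:

```python
# Sketch: parsing a hypothetical XML description of a multi-dimensional data
# model, in the spirit of MXA's XML + DTD approach. Element names are invented.
import xml.etree.ElementTree as ET

model_xml = """\
<data_model name="serial_sections">
  <dimension name="slice" count="3"/>
  <record name="image" type="uint16"/>
</data_model>
"""

root = ET.fromstring(model_xml)
# Collect the declared dimensions into a name -> extent mapping.
dims = {d.get("name"): int(d.get("count")) for d in root.iter("dimension")}
assert dims == {"slice": 3}
```

A validating parser would additionally check the file against its DTD before the model is trusted.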
Representation of viruses in the remediated PDB archive
Lawson, Catherine L.; Dutta, Shuchismita; Westbrook, John D.; Henrick, Kim; Berman, Helen M.
2008-01-01
A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies and crystal asymmetric units of all virus entries is now available in the remediated PDB archive. PMID:18645236
Telidon Videotex presentation level protocol: Augmented picture description instructions
NASA Astrophysics Data System (ADS)
Obrien, C. D.; Brown, H. G.; Smirle, J. C.; Lum, Y. F.; Kukulka, J. Z.; Kwan, A.
1982-02-01
The Telidon Videotex system is a method by which graphic and textual information and transactional services can be accessed from information sources by the general public. In order to transmit information to a Telidon terminal at a minimum bandwidth, and in a manner independent of the type of communications channel, a coding scheme was devised which permits the encoding of a picture into the geometric drawing elements which compose it. These picture description instructions are an alpha geometric coding model and are based on the primitives of POINT, LINE, ARC, RECTANGLE, POLYGON, and INCREMENT. Text is encoded as (ASCII) characters along with a supplementary table of accents and special characters. A mosaic shape table is included for compatibility. A detailed specification of the coding scheme and a description of the principles which make it independent of communications channel and display hardware are provided.
Detailed description of the Mayo/IBM PACS
NASA Astrophysics Data System (ADS)
Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Salutz, James R.; Morin, Richard L.
1991-07-01
The Mayo Clinic and IBM/Rochester have jointly developed a picture archiving system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. The system was developed to replace the imaging system's vendor-supplied magnetic tape archiving capability. The system consists of seven MR imagers and nine CT scanners, each interfaced to the PACS via IBM Personal System/2(tm) (PS/2) computers, which act as gateways from the imaging modality to the PACS network. The PAC system operates on the token-ring component of Mayo's city-wide local area network. Also on the PACS network are four optical storage subsystems used for image archival, three optical subsystems used for image retrieval, an IBM Application System/400(tm) (AS/400) computer used for database management and multiple PS/2-based image display systems and their image servers.
SPASE 2010 - Providing Access to the Heliophysics Data Environment
NASA Astrophysics Data System (ADS)
Thieman, J. R.; King, T. A.; Roberts, D.; Spase Consortium
2010-12-01
The Heliophysics division of NASA has adopted the Space Physics Archive Search and Extract (SPASE) Data Model for use within the Heliophysics Data Environment, which is composed of virtual observatories, value-added services, resident and active archives, and other data providers. The SPASE Data Model has also been adopted by Japan's Inter-university Upper atmosphere Global Observation NETwork (IUGONET), NOAA's National Geophysics Data Center (NGDC), and the Canadian Space Science Data Portal (CSSDP). Europe's HELIO project harvests information from SPASE descriptions of resources, as does the Planetary Plasma Interactions (PPI) Node of NASA's Planetary Data System (PDS). All of the data sets in the Heliophysics Data Environment are intended to be described by the SPASE Data Model, and many have already been described in this way. The current version of the SPASE Data Model (2.2.0) may be found on the SPASE web site at http://www.spase-group.org. SPASE data set descriptions are not as difficult to create as it might seem. Help is available in both the documentation and the many tools created to support SPASE description creators, and there are now a number of very experienced users who are willing to help as well. The SPASE consortium has advanced to the next step in the odyssey to achieve a well-coordinated federation of resource providers by designing and implementing a set of core services to facilitate the exchange of metadata and delivery of data packages; an example is the registry service shown at http://vmo.igpp.ucla.edu/registry. SPASE also incorporates new technologies that are useful to the overall effort, such as cloud storage. A review of the advances, uses of the SPASE data model, and role of services in a federated environment is presented.
NASA Astrophysics Data System (ADS)
Conway, Esther; Waterfall, Alison; Pepler, Sam; Newey, Charles
2015-04-01
In this paper we describe a business process modelling approach to the integration of existing archival activities. We provide a high-level overview of existing practice and discuss how procedures can be extended and supported through the description of preservation state, the aim of which is to facilitate the dynamic, controlled management of scientific data through its lifecycle. The main types of archival processes considered are: • Management processes that govern the operation of an archive. These management processes include archival governance (preservation state management, selection of archival candidates and strategic management). • Operational processes that constitute the core activities of the archive which maintain the value of research assets. These operational processes are the acquisition, ingestion, deletion, generation of metadata and preservation activities. • Supporting processes, which include planning, risk analysis and monitoring of the community/preservation environment. We then proceed by describing the feasibility testing of extended risk management and planning procedures which integrate current practices. This was done through the CEDA Archival Format Audit, which inspected British Atmospheric Data Centre and National Earth Observation Data Centre archival holdings. These holdings are extensive, comprising around 2 PB of data and 137 million individual files, which were analysed and characterised in terms of format-based risk. We are then able to present an overview of the risk burden faced by a large-scale archive attempting to maintain the usability of heterogeneous environmental data sets. We conclude by presenting a dynamic data management information model that is capable of describing the preservation state of archival holdings throughout the data lifecycle.
We provide discussion of the following core model entities and their relationships: • Aspirational entities, which include Data Entity definitions and their associated Preservation Objectives. • Risk entities, which act as drivers for change within the data lifecycle. These include Acquisitional Risks, Technical Risks, Strategic Risks and External Risks. • Plan entities, which detail the actions to bring about change within an archive. These include Acquisition Plans, Preservation Plans and Monitoring Plans. • Result entities, which describe the successful outcomes of the executed plans. These include Acquisitions, Mitigations and Accepted Risks.
Principles of metadata organization at the ENCODE data coordination center.
Hong, Eurie L; Sloan, Cricket A; Chan, Esther T; Davidson, Jean M; Malladi, Venkat S; Strattan, J Seth; Hitz, Benjamin C; Gabdank, Idan; Narayanan, Aditi K; Ho, Marcus; Lee, Brian T; Rowe, Laurence D; Dreszer, Timothy R; Roe, Greg R; Podduturi, Nikhil R; Tanaka, Forrest; Hilton, Jason A; Cherry, J Michael
2016-01-01
The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org. © The Author(s) 2016. Published by Oxford University Press.
Next-generation digital information storage in DNA.
Church, George M; Gao, Yuan; Kosuri, Sriram
2012-09-28
Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.
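The encoding strategy can be sketched at the one-bit-per-base level used in this work (0 encoded as A or C, 1 as G or T); the alternation rule below is an illustrative simplification of how the freedom within each base pair might be used to avoid long homopolymer runs:

```python
# Sketch: one bit per DNA base, 0 -> {A, C}, 1 -> {G, T}. Alternating within
# each pair is a toy stand-in for the real constraint handling.
def bits_to_dna(bits):
    zero, one = "AC", "GT"
    return "".join((zero if b == "0" else one)[i % 2] for i, b in enumerate(bits))

def dna_to_bits(seq):
    return "".join("0" if base in "AC" else "1" for base in seq)

msg = "01101000"
dna = bits_to_dna(msg)
assert dna_to_bits(dna) == msg  # lossless round trip
```

At this density, a 5.27-megabit book needs on the order of five million bases, which is why synthesis on microchips and readout by high-throughput sequencing make the scheme practical.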
Catalog Descriptions Using VOTable Files
NASA Astrophysics Data System (ADS)
Thompson, R.; Levay, K.; Kimball, T.; White, R.
2008-08-01
Additional information is frequently required to describe database table contents and make them understandable to users. For this reason, the Multimission Archive at Space Telescope (MAST) creates "description files" for each table/catalog. After trying various XML and CSV formats, we finally chose VOTable. These files are easy to update via an HTML form, easily read using an XML parser such as (in our case) the PHP5 SimpleXML extension, and have found multiple uses in our data access/retrieval process.
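A description file along these lines can be read with any stock XML parser. A minimal sketch in Python, analogous to MAST's use of PHP5 SimpleXML; the FIELD entries are invented for illustration and real VOTable files carry a namespace and more metadata:

```python
# Sketch: extracting column descriptions from a minimal VOTable-like fragment.
import xml.etree.ElementTree as ET

votable = """\
<VOTABLE>
  <RESOURCE>
    <TABLE name="catalog">
      <FIELD name="ra" datatype="double" unit="deg"/>
      <FIELD name="dec" datatype="double" unit="deg"/>
    </TABLE>
  </RESOURCE>
</VOTABLE>
"""

root = ET.fromstring(votable)
fields = [f.get("name") for f in root.iter("FIELD")]
assert fields == ["ra", "dec"]
```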
Data Recovery from SCATHA Satellite
NASA Technical Reports Server (NTRS)
Fennell, J. F.; Boyd, G. M.; Redding, M. T.; McNab, M. C.
1997-01-01
This document gives a brief description of the SCATHA (P78-2) satellite and consolidates into one location information relevant to the generation of the SCATHA Summary Data parameters for the European Space Agency (ESA), under ESTEC Contract No. 11006/94/NL/CC, and the National Aeronautics and Space Administration (NASA), under Grant No. NAGW-4141. Included are descriptions of the instruments from which the Summary Data parameters are generated, their derivation, and their archival. Any questions pertaining to the Summary Data parameters should be directed to Dr. Joseph Fennell.
NASA Technical Reports Server (NTRS)
Chu, W. P.; Osborn, M. T.; Mcmaster, L. R.
1988-01-01
This document is intended to serve as a guide to the use of the data products from the Stratospheric Aerosol Measurement (SAM) 2 experiment for scientific investigations of polar stratospheric aerosols. Included is a detailed description of the Beta and Aerosol Number Density Archive Tape (BANAT), which is the SAM 2 data product containing the aerosol extinction data available for these investigations. Also included are brief descriptions of the instrument operation, data collection, processing and validation, and some of the scientific analyses conducted to date.
Academic Advising and Gender Communication
ERIC Educational Resources Information Center
Nemeth, Sean
2017-01-01
Purpose: The purpose of this correlational study was to identify whether there are differences in student satisfaction scores in academic advisement gender pairings in an undergraduate university setting. Methodology: This study was a descriptive correlational research study utilizing archival survey data. The collected data consisted of numeric…
Using Network Analysis to Characterize Biogeographic Data in a Community Archive
NASA Astrophysics Data System (ADS)
Wellman, T. P.; Bristol, S.
2017-12-01
Informative measures are needed to evaluate and compare data from multiple providers in a community-driven data archive. This study explores insights from network theory and other descriptive and inferential statistics to examine data content and application across an assemblage of publicly available biogeographic data sets. The data are archived in ScienceBase, a collaborative catalog of scientific data supported by the U.S. Geological Survey to enhance scientific inquiry and acuity. In gaining understanding through this investigation and other scientific venues, our goal is to improve scientific insight and data use across a spectrum of scientific applications. Network analysis is a tool to reveal patterns of non-trivial topological features in data that exhibit neither complete regularity nor complete randomness. In this work, network analyses are used to explore shared events and dependencies between measures of data content and application derived from metadata and catalog information, and measures relevant to biogeographic study. Descriptive statistical tools are used to explore relations between network analysis properties, while inferential statistics are used to evaluate the degree of confidence in these assessments. Network analyses have been used successfully in related fields to examine social awareness of scientific issues, taxonomic structures of biological organisms, and ecosystem resilience to environmental change. Use of network analysis also shows promising potential to identify relationships in biogeographic data that inform programmatic goals and scientific interests.
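The kind of co-occurrence network such an analysis builds from catalog metadata can be sketched with a toy example; the data set names and keywords below are invented, and a real study would likely use a dedicated graph library:

```python
# Sketch: a toy network over data sets, with an edge wherever two data sets
# share descriptive keywords, weighted by the number of shared keywords.
from collections import defaultdict
from itertools import combinations

datasets = {
    "bird_survey": {"avian", "migration", "wetland"},
    "fish_count": {"riverine", "migration"},
    "plant_plots": {"wetland", "vegetation"},
}

edges = defaultdict(int)
for a, b in combinations(sorted(datasets), 2):
    shared = datasets[a] & datasets[b]
    if shared:
        edges[(a, b)] = len(shared)

assert edges[("bird_survey", "fish_count")] == 1   # shared: migration
assert edges[("bird_survey", "plant_plots")] == 1  # shared: wetland
assert ("fish_count", "plant_plots") not in edges  # no shared keywords
```

Node degree, clustering, and component structure computed on such a graph are the "non-trivial topological features" the abstract refers to.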
Viking Seismometer PDS Archive Dataset
NASA Astrophysics Data System (ADS)
Lorenz, R. D.
2016-12-01
The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High and Event modes at 20 and 1 Hz, respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise associated with the sampler arm, instrument dumps and other mechanical operations.
Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI
NASA Technical Reports Server (NTRS)
Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.
2001-01-01
Some non-Fourier encoding methods, such as wavelet and direct encoding, use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio (SNR) compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
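The matrix view of encoding, and why SNR is insensitive to the choice of orthonormal basis, can be sketched with a normalized 2x2 Hadamard example: an orthonormal encoding matrix E is inverted by its transpose, and it preserves total signal (and white-noise) power. This is a pure-Python illustration, not the paper's derivation:

```python
# Sketch: orthonormal encoding E, reconstruction by E^T, power preserved.
import math

E = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # normalized Hadamard matrix

signal = [3.0, 1.0]
encoded = [sum(E[i][j] * signal[j] for j in range(2)) for i in range(2)]
decoded = [sum(E[j][i] * encoded[j] for j in range(2)) for i in range(2)]  # E^T

# Perfect reconstruction, since E^T E = I for an orthonormal matrix.
assert all(abs(d - s) < 1e-9 for d, s in zip(decoded, signal))
# Sum of squares (power) is unchanged by the orthonormal transform.
assert abs(sum(x * x for x in encoded) - sum(x * x for x in signal)) < 1e-9
```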
Cook, Daniel L; Farley, Joel F; Tapscott, Stephen J
2001-01-01
Background: We propose that a computerized, internet-based graphical description language for systems biology will be essential for describing, archiving and analyzing complex problems of biological function in health and disease. Results: We outline here a conceptual basis for designing such a language and describe BioD, a prototype language that we have used to explore the utility and feasibility of this approach to functional biology. Using example models, we demonstrate that a rather limited lexicon of icons and arrows suffices to describe complex cell-biological systems as discrete models that can be posted and linked on the internet. Conclusions: Given available computer and internet technology, BioD may be implemented as an extensible, multidisciplinary language that can be used to archive functional systems knowledge and be extended to support both qualitative and quantitative functional analysis. PMID:11305940
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-10-01
Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
Cloning and expression of clt genes encoding milk-clotting proteases from Myxococcus xanthus 422.
Poza, M; Prieto-Alcedo, M; Sieiro, C; Villa, T G
2004-10-01
The screening of a gene library of the milk-clotting strain Myxococcus xanthus 422 constructed in Escherichia coli allowed the description of eight positive clones containing 26 open reading frames. Only three of them (cltA, cltB, and cltC) encoded proteins that exhibited intracellular milk-clotting ability in E. coli, Saccharomyces cerevisiae, and Pichia pastoris expression systems.
The MAO NASU Plate Archive Database. Current Status and Perspectives
NASA Astrophysics Data System (ADS)
Pakuliak, L. K.; Sergeeva, T. P.
2006-04-01
The preliminary online version of the database of the MAO NASU plate archive is built on the relational database management system MySQL. It permits easy addition of new collections of astronegatives to the database, provides high flexibility in constructing SQL queries for optimizing data searches, offers PHP Basic Authorization-protected access to the administrative interface, and supports a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and of the means of supporting database integrity will be given. Methods and means of data verification and tasks for further development will be discussed.
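The kind of flexible SQL search described above can be sketched with a parameterized query. The schema and data below are purely illustrative, not the actual MAO NASU tables; stdlib sqlite3 stands in for MySQL:

```python
import sqlite3

# Hypothetical, simplified plate-archive schema; table and column names
# are illustrative, not those of the actual MAO NASU database.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE plates (
    plate_id INTEGER PRIMARY KEY,
    collection TEXT, obs_date TEXT, ra_deg REAL, dec_deg REAL)""")
con.executemany(
    "INSERT INTO plates VALUES (?, ?, ?, ?, ?)",
    [(1, "DWA", "1976-03-14", 10.5, 41.2),
     (2, "DWA", "1981-09-02", 83.8, -5.4),
     (3, "ZTL", "1979-06-21", 10.6, 41.3)],
)

# Parameterized search: all plates within a small box around a target position.
ra, dec, r = 10.55, 41.25, 0.5
rows = con.execute(
    "SELECT plate_id, collection, obs_date FROM plates "
    "WHERE ra_deg BETWEEN ? AND ? AND dec_deg BETWEEN ? AND ? "
    "ORDER BY obs_date",
    (ra - r, ra + r, dec - r, dec + r),
).fetchall()
print(rows)  # plates 1 and 3 fall inside the search box
```

Binding the search box as parameters rather than string-concatenating values is what makes such query construction both flexible and safe for a web-facing interface.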
Two Models for Implementing Senior Mentor Programs in Academic Medical Settings
ERIC Educational Resources Information Center
Corwin, Sara J.; Bates, Tovah; Cohan, Mary; Bragg, Dawn S.; Roberts, Ellen
2007-01-01
This paper compares two models of undergraduate geriatric medical education utilizing senior mentoring programs. Descriptive, comparative multiple-case study was employed analyzing program documents, archival records, and focus group data. Themes were compared for similarities and differences between the two program models. Findings indicate that…
DOT National Transportation Integrated Search
2003-08-01
This report is broken into four sections. Section 1 provides a very brief description of transit in early American cities, describes the second generation of transit and the role of government planning and concludes with the third generation of trans...
Re-Imagining Archival Display: Creating User-Friendly Finding Aids
ERIC Educational Resources Information Center
Daines, J. Gordon, III; Nimer, Cory L.
2011-01-01
This article examines how finding aids are structured and delivered, considering alternative approaches. It suggests that single-level displays, those that present a single component of a multilevel description to users at a time, have the potential to transform the delivery and display of collection information while improving the user…
Gulf Coast Ecosystem Restoration Task Force Meeting and Public Listening
Title: Gulf Coast Ecosystem Restoration Task Force Meeting and Public Listening Session. Description: The
Documentation and Development. Experience in Algeria
ERIC Educational Resources Information Center
Tchuigoua, J. Founou
1972-01-01
Describes the activities of the Documentation, Library and Archives Department of the Algiers Chamber of Commerce and Industry which, run by a small staff on a modest budget, provides documentation services for the staff of the Chamber of Commerce and also assists other centers in Algeria. (Author)
Mistreatment in Assisted Living Facilities: Complaints, Substantiations, and Risk Factors
ERIC Educational Resources Information Center
Phillips, Linda R.; Guo, Guifang
2011-01-01
Purpose of the Study: Use archived public data from Arizona to explore relationships among selected institutional and resident risk and situation-specific factors and complaints and substantiated allegations of various types of mistreatment in assisted living facilities (ALFs). Design and Methods: An exploratory/descriptive 2-group design was…
Digitized Special Collections and Multiple User Groups
ERIC Educational Resources Information Center
Gueguen, Gretchen
2010-01-01
Many organizations have evolved since their early attempts to mount digital exhibits on the Web and are experimenting with ways to increase the scale of their digitized collections by utilizing archival finding aid description rather than resource-intensive collections and exhibits. This article examines usability research to predict how such…
Hgis and Archive Researches: a Tool for the Study of the Ancient Mill Channel of Cesena (italy)
NASA Astrophysics Data System (ADS)
Bitelli, G.; Bartolini, F.; Gatta, G.
2016-06-01
The present study aims to demonstrate the usefulness of GIS in supporting archive searches and historical studies (e.g. related to industrial archaeology), in the case of an ancient channel for mill powering near Cesena (Emilia-Romagna, Italy), whose history is interwoven with that of the Compagnia dei Molini di Cesena mill company, the oldest limited company in Italy. Several historical maps (about 40 sheets in total) relating to the studied area and 80 archive documents (drawings, photos, specifications, administrative acts, newspaper articles), covering a period of more than 600 years, were collected. Once digitized, the historical maps were analysed, georeferenced and mosaicked where necessary. Subsequently, the channel with its four mills and the Savio river were vectorized in all the maps. All the additional archive documents were digitized, catalogued and stored. Using the QGIS open source platform, a Historical GIS was created, encompassing the current cartographic base and all historical maps with their vectorized elements; each archive document was linked to the proper historical map, so that the document can be immediately retrieved and visualized. In such an HGIS, the maps form the base for spatial and temporal navigation, facilitated by a specific interface; the external documents linked to them complete the description of the represented elements. This simple and interactive tool offers a new approach to archive searches, as it allows the evolution of the ancient channel and the history of this important mill company to be reconstructed in space and time.
Neri, Dario; Lerner, Richard A
2018-06-20
The discovery of organic ligands that bind specifically to proteins is a central problem in chemistry, biology, and the biomedical sciences. The encoding of individual organic molecules with distinctive DNA tags, serving as amplifiable identification bar codes, allows the construction and screening of combinatorial libraries of unprecedented size, thus facilitating the discovery of ligands to many different protein targets. Fundamentally, the approach links the power of genetics with that of chemical synthesis. After the initial description of DNA-encoded chemical libraries in 1992, several experimental embodiments of the technology have been reduced to practice. This review provides a historical account of important milestones in the development of DNA-encoded chemical libraries, a survey of relevant ongoing research activities, and a glimpse into the future.
Feels like the real thing: imagery is both more realistic and emotional than verbal thought.
Mathews, Andrew; Ridgeway, Valerie; Holmes, Emily A
2013-01-01
The production of mental images involves processes that overlap with perception and the extent of this overlap may contribute to reality monitoring errors (i.e., images misremembered as actual events). We hypothesised that mental images would be more confused with having actually seen a pictured object than would alternative representations, such as verbal descriptions. We also investigated whether affective reactions to images were greater than to verbal descriptions, and whether emotionality was associated with more or less reality monitoring confusion. In two experiments signal detection analysis revealed that mental images were more likely to be confused with viewed pictures than were verbal descriptions. There was a general response bias to endorse all emotionally negative items, but accuracy of discrimination between imagery and viewed pictures was not significantly influenced by emotional valence. In a third experiment we found that accuracy of reality monitoring depended on encoding: images were more accurately discriminated from viewed pictures when rated for affect than for size. We conclude that mental images are both more emotionally arousing and more likely to be confused with real events than are verbal descriptions, although source accuracy for images varies according to how they are encoded.
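The signal detection analysis referred to above typically summarizes discrimination as the sensitivity index d′, the difference of the z-transformed hit and false-alarm rates. A minimal stdlib computation, with purely hypothetical rates (not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates: "I saw it" responses to actually viewed pictures (hits)
# versus to merely imagined items (false alarms, i.e. reality-monitoring
# confusions). Higher d' means images were less often mistaken for percepts.
print(round(d_prime(0.85, 0.30), 3))
```

A lower d′ for imagined than for verbally described items would correspond to the paper's finding that images are more readily confused with viewed pictures.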
A fully decompressed synthetic bacteriophage øX174 genome assembled and archived in yeast.
Jaschke, Paul R; Lieberman, Erica K; Rodriguez, Jon; Sierra, Adrian; Endy, Drew
2012-12-20
The 5386 nucleotide bacteriophage øX174 genome has a complicated architecture that encodes 11 gene products via overlapping protein coding sequences spanning multiple reading frames. We designed a 6302 nucleotide synthetic surrogate, øX174.1, that fully separates all primary phage protein coding sequences along with cognate translation control elements. To specify øX174.1f, a decompressed genome the same length as wild type, we truncated the gene F coding sequence. We synthesized DNA encoding fragments of øX174.1f and used a combination of in vitro- and yeast-based assembly to produce yeast vectors encoding natural or designer bacteriophage genomes. We isolated clonal preparations of yeast plasmid DNA and transfected E. coli C strains. We recovered viable øX174 particles containing the øX174.1f genome from E. coli C strains that independently express full-length gene F. We expect that yeast can serve as a genomic 'drydock' within which to maintain and manipulate clonal lineages of other obligate lytic phage. Copyright © 2012 Elsevier Inc. All rights reserved.
Beginning with the Particular: Reimagining Professional Development as a Feminist Practice
ERIC Educational Resources Information Center
Schultz, Katherine
2011-01-01
This article analyzes the work of a long-term network of teachers, the Philadelphia Teachers Learning Cooperative, with a focus on their descriptive practices. Drawing on three years of ethnographic documentation of weekly meetings and a historical archive of meetings over 30 years, I characterize the teachers' knowledge about teaching and…
A Description of Sexual Offending Committed by Canadian Teachers
ERIC Educational Resources Information Center
Moulden, Heather M.; Firestone, Philip; Kingston, Drew A.; Wexler, Audrey F.
2010-01-01
The aim of this investigation was to describe teachers who sexually offend against youth and the circumstances related to these offenses. Archival Violent Crime Linkage Analysis System reports were obtained from the Royal Canadian Mounted Police, and demographic and criminal characteristics for the offender, as well as information about the victim…
Louisiana Public Scoping Meeting | NOAA Gulf Spill Restoration
Posted on February 28, 2011 | Assessment and Early Restoration. Title: Louisiana Public Scoping Meeting. Location: Belle Chasse, LA. Start Time: 18:30. Description: As part of the public scoping process, the co
"The BFG" and the Spaghetti Book Club: A Case Study of Children as Critics
ERIC Educational Resources Information Center
Hoffman, A. Robin
2010-01-01
Situated at the intersections of ethnography, childhood studies, literary studies, and education research, this reception study seeks to access real children's responses to a particular text, and to offer empirical description of actual reading experiences. Survey data is generated by taking advantage of an online resource: an archive of…
Underachievement in Primary Grade Students: A Review of Kindergarten Enrollment and DIBELS Scores
ERIC Educational Resources Information Center
Rice, Shawnik Marie
2013-01-01
Student underachievement in kindergarten through Grade 3 continues to be a challenge in the Philadelphia School District. The purpose of this quantitative descriptive correlation study was to examine, using record archives from one Philadelphia school, whether there is a relationship between (a) reading achievement scores for the Dynamic…
Sentence-Based Metadata: An Approach and Tool for Viewing Database Designs.
ERIC Educational Resources Information Center
Boyle, John M.; Gunge, Jakob; Bryden, John; Librowski, Kaz; Hanna, Hsin-Yi
2002-01-01
Describes MARS (Museum Archive Retrieval System), a research tool which enables organizations to exchange digital images and documents by means of a common thesaurus structure, and merge the descriptive data and metadata of their collections. Highlights include theoretical basis; searching the MARS database; and examples in European museums.…
Man-computer Interactive Data Access System (McIDAS). [design, development, fabrication, and testing]
NASA Technical Reports Server (NTRS)
1973-01-01
A technical description is given of the effort to design, develop, fabricate, and test the two dimensional data processing system, McIDAS. The system has three basic sections: an access and data archive section, a control section, and a display section. Areas reported include hardware, system software, and applications software.
NUWUVI: A Southern Paiute History.
ERIC Educational Resources Information Center
Inter-Tribal Council of Nevada, Reno.
The first in a series of four histories of native Nevadans, this volume presents the story of the Southern Paiutes, or Nuwuvi. Based on interviews with tribal members and research conducted at numerous archives and record centers, the history begins with a description of the ancient culture and territory of the many Nuwuvi bands that lived,…
Building Community Around Hydrologic Data Models Within CUAHSI
NASA Astrophysics Data System (ADS)
Maidment, D.
2007-12-01
The Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has a Hydrologic Information Systems project which aims to provide better data access and capacity for data synthesis for the nation's water information, both that collected by academic investigators and that collected by water agencies. These data include observations of streamflow, water quality, groundwater levels, weather and climate and aquatic biology. Each water agency or research investigator has a unique method of formatting their data (syntactic heterogeneity) and describing their variables (semantic heterogeneity). The result is a large agglomeration of data in many formats and descriptions whose full content is hard to interpret and analyze. CUAHSI is helping to resolve syntactic heterogeneity through the development of WaterML, a standard XML markup language for communicating water observations data through web services, and a standard relational database structure for archiving data called the Observations Data Model. Variables in these data archiving and communicating systems are indexed against a controlled vocabulary of descriptive terms to provide the capacity to synthesize common data types from disparate data sources.
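The role of WaterML as a standard markup for water observations can be sketched with a toy document. The element names below are simplified for illustration and are not taken verbatim from the WaterML schema; the point is that a variable from a controlled vocabulary, its units, and time-stamped values travel together in one self-describing structure:

```python
import xml.etree.ElementTree as ET

# Schematic WaterML-style time-series fragment (element names illustrative).
ts = ET.Element("timeSeries")
var = ET.SubElement(ts, "variable")
ET.SubElement(var, "variableName").text = "Discharge"   # controlled-vocabulary term
ET.SubElement(var, "unitAbbreviation").text = "m^3/s"
values = ET.SubElement(ts, "values")
for t, v in [("2007-10-01T00:00:00", 1.32), ("2007-10-01T01:00:00", 1.28)]:
    ET.SubElement(values, "value", dateTime=t).text = str(v)

xml = ET.tostring(ts, encoding="unicode")
print(xml)
```

Because every producer emits the same structure and draws variable names from the shared controlled vocabulary, a consumer can merge streamflow from an agency service with an academic sensor feed without per-source parsing code, which is exactly the syntactic heterogeneity problem the abstract describes.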
Athanasopoulos, Panos; Bylund, Emanuel
2013-03-01
In this article, we explore whether cross-linguistic differences in grammatical aspect encoding may give rise to differences in memory and cognition. We compared native speakers of two languages that encode aspect differently (English and Swedish) in four tasks that examined verbal descriptions of stimuli, online triads matching, and memory-based triads matching with and without verbal interference. Results showed between-group differences in verbal descriptions and in memory-based triads matching. However, no differences were found in online triads matching and in memory-based triads matching with verbal interference. These findings need to be interpreted in the context of the overall pattern of performance, which indicated that both groups based their similarity judgments on common perceptual characteristics of motion events. These results show for the first time a cross-linguistic difference in memory as a function of differences in grammatical aspect encoding, but they also contribute to the emerging view that language fine tunes rather than shapes perceptual processes that are likely to be universal and unchanging. Copyright © 2012 Cognitive Science Society, Inc.
Foley, Mary Ann; Fried, Adina Rachel; Cowan, Emily; Bays, Rebecca Brooke
2014-01-01
In 2 experiments, the effect of collaborative encoding on memory was examined by testing 2 interactive components of co-construction processes. One component focused on the nature of the interactive exchange between collaborators: As the partners worked together to create descriptions about ways to interact with familiar objects, constraints were imposed on the interactions by requiring them to take turns (Experiment 1) or to interact without constraints (Experiment 2). The nature of the relationship between partners was manipulated as well by including 2 pair types, friends or unfamiliar peers (Experiments 1 and 2). Interactive component effects were found to influence spontaneous activations through content analyses of participants' descriptions, the patterns of false recognition errors, and the relationship between content and errors. The findings highlight the value of examining the content of participants' collaborative efforts when assessing the effects of collaborative encoding on memory and point to mechanisms mediating collaboration's effects. Because the interactions occurred within the context of an imagery generation task, the findings are also intriguing because of their implications for the use of guided imagery techniques that incorporate co-construction processes.
EBI metagenomics--a new resource for the analysis and archiving of metagenomic data.
Hunter, Sarah; Corbett, Matthew; Denise, Hubert; Fraser, Matthew; Gonzalez-Beltran, Alejandra; Hunter, Christopher; Jones, Philip; Leinonen, Rasko; McAnulla, Craig; Maguire, Eamonn; Maslen, John; Mitchell, Alex; Nuka, Gift; Oisel, Arnaud; Pesseat, Sebastien; Radhakrishnan, Rajesh; Rocca-Serra, Philippe; Scheremetjew, Maxim; Sterk, Peter; Vaughan, Daniel; Cochrane, Guy; Field, Dawn; Sansone, Susanna-Assunta
2014-01-01
Metagenomics is a relatively recently established but rapidly expanding field that uses high-throughput next-generation sequencing technologies to characterize the microbial communities inhabiting different ecosystems (including oceans, lakes, soil, tundra, plants and body sites). Metagenomics brings with it a number of challenges, including the management, analysis, storage and sharing of data. In response to these challenges, we have developed a new metagenomics resource (http://www.ebi.ac.uk/metagenomics/) that allows users to easily submit raw nucleotide reads for functional and taxonomic analysis by a state-of-the-art pipeline, and have them automatically stored (together with descriptive, standards-compliant metadata) in the European Nucleotide Archive.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Kelso, Kyle W.; Flocks, James G.
2015-01-01
Selection of the core site locations was based on geophysical surveys conducted around the islands from 2008 to 2010. The surveys, using acoustic systems to image and interpret the nearsurface stratigraphy, were conducted to investigate the geologic controls on island evolution. This data series serves as an archive of sediment data collected from August to September 2010, offshore of the Mississippi barrier islands. Data products, including descriptive core logs, core photographs, results of sediment grain-size analyses, sample location maps, and geographic information system (GIS) data files with accompanying formal Federal Geographic Data Committee (FDGC) metadata can be downloaded from the data products and downloads page.
The SAMI Galaxy Survey: A prototype data archive for Big Science exploration
NASA Astrophysics Data System (ADS)
Konstantopoulos, I. S.; Green, A. W.; Foster, C.; Scott, N.; Allen, J. T.; Fogarty, L. M. R.; Lorente, N. P. F.; Sweet, S. M.; Hopkins, A. M.; Bland-Hawthorn, J.; Bryant, J. J.; Croom, S. M.; Goodwin, M.; Lawrence, J. S.; Owers, M. S.; Richards, S. N.
2015-11-01
We describe the data archive and database for the SAMI Galaxy Survey, an ongoing observational program that will cover ≈3400 galaxies with integral-field (spatially-resolved) spectroscopy. Amounting to some three million spectra, this is the largest sample of its kind to date. The data archive and built-in query engine use the versatile Hierarchical Data Format (HDF5), which obviates the need for external metadata tables and hence the setup and maintenance overhead they carry. The code produces simple outputs that can easily be translated to plots and tables, and the combination of these tools makes for a light system that can handle heavy data. This article acts as a contextual companion to the SAMI Survey Database source code repository, samiDB, which is freely available online and written entirely in Python. We also discuss the decisions related to the selection of tools and the creation of data visualisation modules. It is our aim that the work presented in this article (descriptions, rationale, and source code) will be of use to scientists looking to set up a maintenance-light data archive for a Big Science data load.
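The design point above, hierarchical groups carrying their own metadata so no external tables are needed, can be illustrated library-independently. The layout below mimics HDF5's group/dataset/attribute structure with plain dicts (an actual implementation would use e.g. h5py; the group names and attributes are made up, not the samiDB schema):

```python
# Sketch of a hierarchical archive layout in the spirit of HDF5: one group per
# galaxy, datasets with attached attributes, metadata travelling with the data.
archive = {
    "SAMI": {
        "galaxy_0001": {
            "spectrum": {"data": [0.8, 1.1, 0.9],   # flux values (made up)
                         "attrs": {"unit": "arbitrary", "z": 0.05}},
        },
        "galaxy_0002": {
            "spectrum": {"data": [1.4, 1.3, 1.5],
                         "attrs": {"unit": "arbitrary", "z": 0.12}},
        },
    }
}

def query(archive, predicate):
    """Built-in query: walk the groups and yield names whose attributes
    satisfy the predicate, with no external metadata table consulted."""
    for name, group in archive["SAMI"].items():
        if predicate(group["spectrum"]["attrs"]):
            yield name

print(list(query(archive, lambda a: a["z"] > 0.1)))
```

Because each dataset carries its attributes inline, the query engine only ever touches the archive file itself, which is the maintenance-light property the article argues for.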
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
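The LUT-based architecture compared above can be sketched behaviorally: a feed-forward trellis encoder is fully described by a table mapping (state, input) to (next state, per-antenna output symbols), so in hardware the multipliers can be replaced by a precomputed look-up. The toy table below is a simple delay-diversity scheme for QPSK and two antennas, chosen for illustration; it is not one of the paper's synthesised FFSTTC designs:

```python
# Toy lookup-table view of a feed-forward space-time trellis encoder:
# (state, input symbol) -> (next state, per-antenna output symbols).
QPSK = [1+0j, 0+1j, -1+0j, 0-1j]

def build_lut(n_states=4, n_inputs=4):
    lut = {}
    for s in range(n_states):
        for x in range(n_inputs):
            nxt = x                      # memory-1 encoder: state = last input
            ant1 = QPSK[s]               # antenna 1 transmits the delayed symbol
            ant2 = QPSK[x]               # antenna 2 transmits the current symbol
            lut[(s, x)] = (nxt, (ant1, ant2))
    return lut

def encode(symbols, lut):
    state, out = 0, []
    for x in symbols:
        state, tx = lut[(state, x)]
        out.append(tx)
    return out

print(encode([1, 3, 0], build_lut()))
```

In an FPGA, the table's size grows with the encoder memory order and modulation alphabet, which is why the article weighs LUT area (slices, block RAMs) against 1-bit multiplier architectures as those parameters increase.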
The architectonic encoding of the minor lunar standstills in the horizon of the Giza pyramids.
NASA Astrophysics Data System (ADS)
Hossam, M. K. Aboulfotouh
The paper attempts to show the architectonic method by which the ancient Egyptian designers encoded the horizontal projections of the moon's declinations during two events of the minor lunar standstills in the design of the site plan of the horizon of the Giza pyramids, using the methods of descriptive geometry. It shows that the distance of the eastern side of the second Giza pyramid from the north-south axis of the great pyramid encodes the projection of a lunar declination when Earth's obliquity angle was ~24.10°. It also shows that the angle of inclination of the causeway of the second Giza pyramid, ~13.54° south of cardinal east, encodes the projection of another lunar declination when Earth's obliquity angle reaches ~22.986°. In addition, it shows the coordinate system encoded in the site plan of the horizon of the Giza pyramids.
Information Requirements for Integrating Spatially Discrete, Feature-Based Earth Observations
NASA Astrophysics Data System (ADS)
Horsburgh, J. S.; Aufdenkampe, A. K.; Lehnert, K. A.; Mayorga, E.; Hsu, L.; Song, L.; Zaslavsky, I.; Valentine, D. L.
2014-12-01
Several cyberinfrastructures have emerged for sharing observational data collected at densely sampled and/or highly instrumented field sites. These include the CUAHSI Hydrologic Information System (HIS), the Critical Zone Observatory Integrated Data Management System (CZOData), the Integrated Earth Data Applications (IEDA) and EarthChem system, and the Integrated Ocean Observing System (IOOS). These systems rely on standard data encodings and, in some cases, standard semantics for classes of geoscience data. Their focus is on sharing data on the Internet via web services in domain specific encodings or markup languages. While they have made progress in making data available, it still takes investigators significant effort to discover and access datasets from multiple repositories because of inconsistencies in the way domain systems describe, encode, and share data. Yet, there are many scenarios that require efficient integration of these data types across different domains. For example, understanding a soil profile's geochemical response to extreme weather events requires integration of hydrologic and atmospheric time series with geochemical data from soil samples collected over various depth intervals from soil cores or pits at different positions on a landscape. Integrated access to and analysis of data for such studies are hindered because common characteristics of data, including time, location, provenance, methods, and units are described differently within different systems. Integration requires syntactic and semantic translations that can be manual, error-prone, and lossy. We report information requirements identified as part of our work to define an information model for a broad class of earth science data - i.e., spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples. 
We sought to answer the question: "What information must accompany observational data for them to be archivable and discoverable within a publication system as well as interpretable once retrieved from such a system for analysis and (re)use?" We also describe development of multiple functional schemas (i.e., physical implementations for data storage, transfer, and archival) for the information model that capture the requirements reported here.
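The information requirements listed above (time, location, provenance, methods, units) can be made concrete as a record type. The field names below are illustrative stand-ins, not the schema of the actual information model the abstract describes:

```python
from dataclasses import dataclass

# Minimal sketch of the metadata the text argues must accompany a
# feature-based observation for it to be archivable, discoverable, and
# interpretable on reuse; field names are hypothetical.
@dataclass
class Observation:
    value: float
    unit: str                      # e.g. "mg/L"
    variable: str                  # controlled-vocabulary term
    time: str                      # ISO 8601 timestamp
    feature: str                   # sampling feature, e.g. a soil-core interval
    method: str                    # sensor or analysis method
    provenance: str                # who produced it, how it was processed

obs = Observation(0.42, "mg/L", "nitrate", "2014-07-01T12:00:00Z",
                  "soil core A, 10-20 cm", "ion chromatography",
                  "raw lab result, QA level 0")
print(obs.variable, obs.unit)
```

Two repositories that both populate such a structure, with shared vocabularies for `variable`, `unit`, and `method`, could be merged without the manual, lossy translations the abstract warns about.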
The Hopkins Ultraviolet Telescope: The Final Archive
NASA Astrophysics Data System (ADS)
Dixon, William V.; Blair, William P.; Kruk, Jeffrey W.; Romelfanger, Mary L.
2013-04-01
The Hopkins Ultraviolet Telescope (HUT) was a 0.9 m telescope and moderate-resolution (Δλ = 3 Å) far-ultraviolet (820-1850 Å) spectrograph that flew twice on the space shuttle, in 1990 December (Astro-1, STS-35) and 1995 March (Astro-2, STS-67). The resulting spectra were originally archived in a nonstandard format that lacked important descriptive metadata. To increase their utility, we have modified the original data-reduction software to produce a new and more user-friendly data product, a time-tagged photon list similar in format to the Intermediate Data Files (IDFs) produced by the Far Ultraviolet Spectroscopic Explorer calibration pipeline. We have transferred all relevant pointing and instrument-status information from locally-archived science and engineering databases into new FITS header keywords for each data set. Using this new pipeline, we have reprocessed the entire HUT archive from both missions, producing a new set of calibrated spectral products in a modern FITS format that is fully compliant with Virtual Observatory requirements. For each exposure, we have generated quick-look plots of the fully-calibrated spectrum and associated pointing history information. Finally, we have retrieved from our archives HUT TV guider images, which provide information on aperture positioning relative to guide stars, and converted them into FITS-format image files. All of these new data products are available in the new HUT section of the Mikulski Archive for Space Telescopes (MAST), along with historical and reference documents from both missions. In this article, we document the improved data-processing steps applied to the data and show examples of the new data products.
Recent advances and plans in processing and geocoding of SAR data at the DFD
NASA Technical Reports Server (NTRS)
Noack, W.
1993-01-01
Because of the needs of future projects such as ENVISAT and the experience gained with the current operational ERS-1 facilities, a radical change in synthetic aperture radar (SAR) processing scenarios can be predicted for the coming years. At the German PAF, several new developments have been initiated, driven mainly either by user needs or by system and operational constraints ('lessons learned'). The end result will be a major simplification and unification of all the computer systems used. In particular, the following changes are likely to be implemented at the German PAF: transcription before archiving; processing of all standard products with high throughput directly at the receiving stations; processing of special 'high-valued' products at the PAF; use of a single type of processor hardware; implementation of a large and fast on-line data archive; and an improved and unified fast data network between the processing and archiving facilities. A short description of the current operational SAR facilities as well as of the planned future implementations is given.
Tobin, Kenneth W; Karnowski, Thomas P; Chaum, Edward
2013-08-06
A method for diagnosing diseases having retinal manifestations, including retinal pathologies, includes the steps of providing a CBIR system comprising an archive of stored digital retinal photography images and diagnosed patient data corresponding to those images, the stored images each indexed in a CBIR database using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the stored images. A query image of the retina of a patient is obtained. Using image processing, regions or structures in the query image are identified. The regions or structures are then described using the plurality of feature vectors. At least one relevant stored image is retrieved from the archive based on similarity to the regions or structures, and an eye disease or a disease having retinal manifestations in the patient is diagnosed based on the diagnosed patient data associated with the relevant stored image(s).
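The retrieval step described in this abstract (indexing stored images by feature vectors and ranking by similarity to a query) can be sketched as a simple nearest-neighbour search. The archive contents, feature values, and image IDs below are invented, and Euclidean distance stands in for whatever similarity measure the patented system actually uses:

```python
import math

# Hypothetical archive: image ID -> feature vector describing retinal structures
archive = {
    "img_001": [0.12, 0.85, 0.30],
    "img_002": [0.90, 0.10, 0.44],
    "img_003": [0.15, 0.80, 0.35],
}

def retrieve(query, k=2):
    """Rank archived images by Euclidean distance to the query feature vector."""
    return sorted(archive, key=lambda img: math.dist(query, archive[img]))[:k]

print(retrieve([0.14, 0.82, 0.33]))   # → ['img_003', 'img_001']
```

A production CBIR system would add an index structure (e.g., a k-d tree) so retrieval does not scan the whole archive, but the ranking principle is the same.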
Superfund Public Information System (SPIS), June 1998 (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-06-01
The Superfund Public Information System (SPIS) on CD-ROM contains Superfund data for the United States Environmental Protection Agency. The Superfund data is a collection of four databases: CERCLIS, Archive (NFRAP), RODS, and NPL Sites. Descriptions of these databases and CD contents are listed below. The FolioViews browse and retrieval engine is used as a graphical interface to the data. Users can run simple queries and can do complex searching on key words or fields. In addition, context-sensitive help, a Superfund process overview, and an integrated data dictionary are available. RODS is the Records Of Decision System. RODS is used to track site clean-ups under the Superfund program and to justify the type of treatment chosen at each site. RODS contains information on technology justification, site history, community participation, enforcement activities, site characteristics, scope and role of response action, and remedy. Explanations of Significant Differences (ESDs) are also available on the CD. CERCLIS is the Comprehensive Environmental Response, Compensation, and Liability Information System. It is the official repository for all Superfund site and incident data. It contains comprehensive information on hazardous waste sites, site inspections, preliminary assessments, and remedial status. The system is sponsored by the EPA's Office of Emergency and Remedial Response, Information Management Center. Archive (NFRAP) consists of hazardous waste sites that have no further remedial action planned; only basic identifying information is provided for archive sites. The sites found in the Archive database were originally in the CERCLIS database but were removed beginning in the fall of 1995. NPL Sites (available online) are fact sheets that describe the location and history of Superfund sites. Included are descriptions of the most recent activities and past actions at the sites that have contributed to the contamination. Population estimates, land uses, and nearby resources give background on the local setting surrounding a site.
ERIC Educational Resources Information Center
Perry, Carol A.
2012-01-01
The purpose of this study was to examine the educational experiences and outcomes of low-skill adults in West Virginia's community and technical colleges, providing a more detailed profile of these students. Data for the variables were obtained from archival databases through a cooperative agreement between state agencies. Descriptive statistics…
Web-Resources for Astronomical Data in the Ultraviolet
NASA Astrophysics Data System (ADS)
Sachkov, M. E.; Malkov, O. Yu.
2017-12-01
In this paper we describe databases of space projects that are operating or have operated in the ultraviolet spectral region. We give brief descriptions and links to major sources for UV data on the web: archives, space mission sites, databases, catalogues. We pay special attention to the World Space Observatory—Ultraviolet mission that will be launched in 2021.
Preservation and Access to Manuscript Collections of the Czech National Library.
ERIC Educational Resources Information Center
Karen, Vladimir; Psohlavec, Stanislav
In 1996, the Czech National Library started a large-scale digitization of its extensive and invaluable collection of historical manuscripts and printed books. Each page of the selected documents is scanned using a high-resolution, full-color digital camera, processed, and archived on a CD-ROM disk. HTML coded description is added to the entire…
Toward a Model of Journal Economics in the Language Sciences. LINCS Project Document Series.
ERIC Educational Resources Information Center
Berg, Sanford; Campion, Douglas
This study outlines some considerations for an economic model of the scientific journal market. The model provides an explanation of journal market structure and the dynamics of market behavior, as well as a description of journal market development. Three types of periodicals are discussed: (1) primary, archival journals serving a current…
A decoding procedure for the Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lim, R. S.
1978-01-01
A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
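As a hedged illustration of the Galois-field machinery this abstract mentions (syndrome calculation, error location, error value), here is a minimal single-error-correcting sketch over GF(2^5), the field underlying a (31,k) RS code. It is not the decoder described in the report (no Lin's table, no full Chien search); the message symbols are arbitrary:

```python
# GF(2^5) arithmetic via exp/log tables; primitive polynomial x^5 + x^2 + 1
PRIM = 0b100101
EXP, LOG = [0] * 62, [0] * 32
x = 1
for i in range(31):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0b100000:
        x ^= PRIM
for i in range(31, 62):          # duplicate table so gmul needs no modulo
    EXP[i] = EXP[i - 31]

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 31]

def poly_mul(p, q):              # coefficients listed lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gmul(a, b)
    return r

def poly_eval(p, x):             # Horner evaluation in GF(2^5)
    y = 0
    for c in reversed(p):
        y = gmul(y, x) ^ c
    return y

# Generator g(x) = (x + a)(x + a^2) gives a single-error-correcting RS code
g = poly_mul([EXP[1], 1], [EXP[2], 1])
msg = [5, 17, 3, 29]             # arbitrary message symbols in GF(32)
code = poly_mul(msg, g)          # valid codeword: both syndromes vanish

recv = list(code)
recv[4] ^= 9                     # inject a single symbol error
s1, s2 = poly_eval(recv, EXP[1]), poly_eval(recv, EXP[2])
loc = LOG[gdiv(s2, s1)]          # error position: S2/S1 = a^loc
val = gdiv(s1, EXP[loc])         # error value:    S1 = val * a^loc
recv[loc] ^= val                 # correct in place
```

Correcting t > 1 errors is where Berlekamp's iterative algorithm and the Chien search described in the report come in; the single-error case reduces to the two-syndrome ratio shown here.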
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ermi, A.M.
1997-05-01
Description of the Proposed Activity/REPORTABLE OCCURRENCE or PIAB: This ECN changes the computer system design description support document describing the computer system used to control, monitor, and archive the processes and outputs associated with the Hydrogen Mitigation Test Pump installed in SY-101. There is no new activity or procedure associated with the updating of this reference document. The updating of this computer system design description maintains an agreed-upon documentation program initiated within the test program and carried into operations at the time of turnover to maintain configuration control as outlined by design authority practicing guidelines. There are no new credible failure modes associated with the updating of information in a support description document. The failure analysis of each change was reviewed at the time of implementation of the Systems Change Request for all the processes changed. This document simply provides a history of implementation and current system status.
Veenstra, Jan A; Khammassi, Hela
2017-04-01
RYamides are arthropod neuropeptides with unknown function. In 2011, two RYamides were isolated from D. melanogaster as the ligands for the G-protein coupled receptor CG5811. The D. melanogaster gene encoding these neuropeptides is highly unusual, as there are four RYamide-encoding exons in the current genome assembly, but an exon encoding a signal peptide is absent. Comparing the D. melanogaster gene structure with those from other species, including D. virilis, suggests that the gene is degenerating. RNAseq data from 1634 short sequence read archives at NCBI containing more than 34 billion spots yielded numerous individual spots that correspond to the RYamide-encoding exons, of which a large number include the intron-exon boundary at the start of this exon. Although 72 different sequences have been spliced onto this RYamide-encoding exon, none codes for the signal peptide of this gene. Thus, the RNAseq data for this gene reveal only noise and no signal. The very small quantities of peptide recovered during isolation and the absence of credible RNAseq data indicate that the gene is expressed at very low levels, while the RYamide gene structure in D. melanogaster suggests that it might be evolving into a pseudogene. Yet the identification of the peptides it encodes clearly shows that it is still functional. Using region-specific antisera, we could localize numerous neurons and enteroendocrine cells in D. willistoni, D. virilis and D. pseudoobscura, but only two adult abdominal neurons in D. melanogaster. These two neurons project to and innervate the rectal papillae, suggesting that RYamides may be involved in the regulation of water homeostasis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Representation of viruses in the remediated PDB archive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawson, Catherine L., E-mail: cathy.lawson@rutgers.edu; Dutta, Shuchismita; Westbrook, John D.
2008-08-01
A new data model for PDB entries of viruses and other biological assemblies with regular noncrystallographic symmetry is described. A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry, and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical, or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies, and crystal asymmetric units of all virus entries is now available in the remediated PDB archive.
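The symmetry-generation idea described above (a full assembly built by applying symmetry operators, expressed as matrices, to the deposited coordinates) can be sketched in a few lines. The C3 point symmetry and the coordinate below are invented for illustration and are not taken from any PDB entry:

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, p):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][j] * p[j] for j in range(3)) for i in range(3)]

# One deposited coordinate of the asymmetric unit (invented values)
atom = [1.0, 0.0, 0.0]

# C3 point symmetry: three rotations about z generate the full assembly
assembly = [apply(rot_z(2 * math.pi * k / 3), atom) for k in range(3)]
```

Real virus entries use much larger operator sets (e.g., the 60 rotations of icosahedral symmetry), but the mechanism, matrix times deposited coordinate per operator, is the same.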
Conversion of a traditional image archive into an image resource on compact disc.
Andrew, S M; Benbow, E W
1997-01-01
A traditional archive of pathology images organised on 35 mm slides was converted into a database of images stored on compact disc (CD-ROM), and textual descriptions were added to each image record. Students on a didactic pathology course found this resource useful as an aid to revision, despite relative computer illiteracy, and it is anticipated that students on a new problem-based learning course, which incorporates experience with information technology, will benefit even more readily when they use the database as an educational resource. A text and image database on CD-ROM can be updated repeatedly, and the content manipulated to reflect the content and style of the courses it supports. PMID:9306931
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, W.N.
1998-03-01
LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh HyperCard database to function as both an analysis and an archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (HyperCard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both the database structures and the mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with the related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis, handling file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.
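The three functional levels described above (media cataloging, physical file migration, logical archive policy) can be caricatured in a short sketch. The class names, device names, access times, and the "archive to the slowest tier" policy are invented for illustration, not taken from the IBM system:

```python
class Device:
    """Lower level: one cataloged storage medium."""
    def __init__(self, name, access_time_ms):
        self.name, self.access_time_ms = name, access_time_ms
        self.files = set()

class Store:
    def __init__(self):
        self.devices = {}                      # lower level: media catalog

    def add_device(self, name, access_time_ms):
        self.devices[name] = Device(name, access_time_ms)

    def migrate(self, filename, src, dst):     # medium level: physical moves
        self.devices[src].files.discard(filename)
        self.devices[dst].files.add(filename)

    def archive(self, filename):               # upper level: logical policy
        # Toy policy: new archival copies land on the slowest (cheapest) tier
        slowest = max(self.devices.values(), key=lambda d: d.access_time_ms)
        slowest.files.add(filename)
        return slowest.name

store = Store()
store.add_device("magneto-optical", access_time_ms=80)
store.add_device("tape-jukebox", access_time_ms=20000)
print(store.archive("video_0042.mpg"))         # → tape-jukebox
```

The separation matters because the upper level can change policy (cost versus delivery time) without touching how media are cataloged or how files physically move.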
ARCUS Internet Media Archive (IMA): A Resource for Outreach and Education
NASA Astrophysics Data System (ADS)
Polly, Z.; Warnick, W. K.; Polly, J.
2008-12-01
The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic that are shared through the Internet. It provides the arctic research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 6,500 publicly accessible photos, including 4,000 photos from the National Science Foundation funded Teachers and Researchers Exploring and Collaborating (TREC, now PolarTREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA also includes 450 video files, 270 audio files, nearly 100 graphics and logos, 28 presentations, and approximately 10,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, contributor's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are key-worded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings. To submit files or for more information and to view the ARCUS Internet Media Archive, please go to: http://media.arcus.org or email photo@arcus.org.
1977-09-01
...to state as successive input bits are brought into the encoder. We can more easily follow our progress on the equivalent lattice diagram where... [Fig. 12: Convolutional Encoder, State Diagram and Lattice; figure residue showing an example input path i1, i2, i3, i4 = 1001 traced through the state diagram.] ...and can in fact be traced. The Viterbi algorithm can be simply described with the aid of this lattice. Note that the nodes of the lattice represent...
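The lattice (trellis) description above survives only in fragments, but the algorithm it refers to can be sketched with a standard rate-1/2, constraint-length-3 convolutional code. The generator polynomials (7, 5 in octal) are a textbook choice, not necessarily those of the original report:

```python
# Rate-1/2, constraint-length-3 convolutional code (octal generators 7, 5)
G1, G2 = 0b111, 0b101

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Encode a bit list, appending two flush zeros to return to state 0."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state            # current bit plus two previous bits
        out.append((parity(reg & G1), parity(reg & G2)))
        state = reg >> 1                  # lattice node for the next step
    return out

def viterbi_decode(pairs):
    """Hard-decision Viterbi: keep the cheapest path into each lattice node."""
    metrics, paths = {0: 0}, {0: []}
    for o1, o2 in pairs:
        new_m, new_p = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):              # both branches leaving this node
                reg = (b << 2) | state
                cost = m + (parity(reg & G1) != o1) + (parity(reg & G2) != o2)
                ns = reg >> 1
                if ns not in new_m or cost < new_m[ns]:
                    new_m[ns], new_p[ns] = cost, paths[state] + [b]
        metrics, paths = new_m, new_p
    best = min(metrics, key=metrics.get)
    return paths[best][:-2]               # drop the two flush bits
```

The nodes of the lattice are the 4 encoder states; the surviving path into each node at each step is exactly what `paths` stores, which is why the decoder corrects scattered channel errors.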
MITRE-Bedford: Description of the ALEMBIC System as Used for MUC-4
1992-01-01
of control, and its corresponding encoding in SGML, consider the first paragraph of message TST2-MUC4-0048: SALVADORAN PRESIDENT-ELECT ALFREDO... markup simply encodes the templates that the system has produced, e.g., <p><template> <slotname>0. MESSAGE: ID</slotname> <slotval>TST2-MUC4-0048... verb in TST2-MUC4-0048), this includes the active voice (e.g., "Cristiani... condemned the terrorist killing"), the passive voice, and ancillary forms
NASA Astrophysics Data System (ADS)
Buxbaum, T. M.; Warnick, W. K.; Polly, B.; Hueffer, L. J.; Behr, S. A.
2006-12-01
The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic that are shared through the Internet. It provides the arctic research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 5,000 publicly accessible photos, including 3,000 photos from the National Science Foundation funded Teachers and Researchers Exploring and Collaborating (TREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA also includes 360 video files, 260 audio files, and approximately 8,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, contributor's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are keyworded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings.
Farwick, Nadine M; Klopfleisch, Robert; Gruber, Achim D; Weiss, Alexander Th A
2017-04-01
Objectives A hallmark of neoplasms is their origin from a single cell; that is, clonality. Many techniques have been developed in human medicine to utilise this feature of tumours for diagnostic purposes. One approach is X chromosome-linked clonality testing using polymorphisms of genes encoded on the X chromosome. The aim of this study was to determine if the feline androgen receptor gene was suitable for X chromosome-linked clonality testing. Methods The feline androgen receptor gene was characterised and used to test clonality of feline lymphomas by PCR and polyacrylamide gel electrophoresis, using archival formalin-fixed, paraffin-embedded material. Results Clonality of the feline lymphomas under study was confirmed and the gene locus was shown to represent a suitable target in clonality testing. Conclusions and relevance Because there are some pitfalls of using X chromosome-linked clonality testing, further studies are necessary to establish this technique in the cat.
ERIC Educational Resources Information Center
Calzada Pérez, María
2013-01-01
The present paper revolves around MaxiECPC, one of the various sub-corpora that make up ECPC (the European Comparable and Parallel Corpora), an electronic archive of speeches delivered at different parliaments (i.e. the European Parliament-EP; the Spanish Congreso de los Diputados-CD; and the British House of Commons-HC) from 1996 to 2009. In…
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
ERIC Educational Resources Information Center
Jubb, Michael
Prepared under contract with the International Council on Archives (ICA), this guide provides descriptions of all classes of public records held by the British Public Record Office (PRO) which are likely to contain scientific or technical information. The PRO is responsible for keeping and making available to the public those records of the…
Sediment data collected in 2013 from the northern Chandeleur Islands, Louisiana
Buster, Noreen A.; Kelso, Kyle W.; Bernier, Julie C.; Flocks, James G.; Miselis, Jennifer L.; DeWitt, Nancy T.
2014-01-01
This data series serves as an archive of sediment data collected in July 2013 from the Chandeleur Islands sand berm and adjacent barrier-island environments. Data products include descriptive core logs, core photographs and x-radiographs, results of sediment grain-size analyses, sample location maps, and Geographic Information System data files with accompanying formal Federal Geographic Data Committee metadata.
The Archives of the Department of Terrestrial Magnetism: Documenting 100 Years of Carnegie Science
NASA Astrophysics Data System (ADS)
Hardy, S. J.
2005-12-01
The archives of the Department of Terrestrial Magnetism (DTM) of the Carnegie Institution of Washington document more than a century of geophysical and astronomical investigations. Primary source materials available for historical research include field and laboratory notebooks, equipment designs, plans for observatories and research vessels, scientists' correspondence, and thousands of expedition and instrument photographs. Yet despite its history, DTM long lacked a systematic approach to managing its documentary heritage. A preliminary records survey conducted in 2001 identified more than 1,000 linear feet of historically-valuable records languishing in dusty, poorly-accessible storerooms. Intellectual control at that time was minimal. With support from the National Historical Publications and Records Commission, the "Carnegie Legacy Project" was initiated in 2003 to preserve, organize, and facilitate access to DTM's archival records, as well as those of the Carnegie Institution's administrative headquarters and Geophysical Laboratory. Professional archivists were hired to process the 100-year backlog of records. Policies and procedures were established to ensure that all work conformed to national archival standards. Records were appraised, organized, and rehoused in acid-free containers, and finding aids were created for the project web site. Standardized descriptions of each collection were contributed to the WorldCat bibliographic database and the AIP International Catalog of Sources for History of Physics. Historic photographs and documents were digitized for online exhibitions to raise awareness of the archives among researchers and the general public. The success of the Legacy Project depended on collaboration between archivists, librarians, historians, data specialists, and scientists. 
This presentation will discuss key aspects (funding, staffing, preservation, access, outreach) of the Legacy Project and is aimed at personnel in observatories, research institutes, and other organizations interested in establishing their own archival programs.
Mammarella, Irene C; Meneghetti, Chiara; Pazzaglia, Francesca; Cornoldi, Cesare
2014-01-01
The present study investigated the difficulties encountered by children with non-verbal learning disability (NLD) and reading disability (RD) when processing spatial information derived from descriptions, based on the assumption that both groups should find it more difficult than matched controls, but for different reasons, i.e., due to a memory encoding difficulty in cases of RD and to spatial information comprehension problems in cases of NLD. Spatial descriptions from both survey and route perspectives were presented to 9-12-year-old children divided into three groups: NLD (N = 12); RD (N = 12), and typically developing controls (TD; N = 15); then participants completed a sentence verification task and a memory for locations task. The sentence verification task was presented in two conditions: in one the children could refer to the text while answering the questions (i.e., text present condition), and in the other the text was withdrawn (i.e., text absent condition). Results showed that the RD group benefited from the text present condition, but was impaired to the same extent as the NLD group in the text absent condition, suggesting that the NLD children's difficulty is due mainly to their poor comprehension of spatial descriptions, while the RD children's difficulty is due more to a memory encoding problem. These results are discussed in terms of their implications in the neuropsychological profiles of children with NLD or RD, and the processes involved in spatial descriptions.
2010-09-01
analysis process is to categorize the goal according to Gagné's (2005) domains of learning. These domains are: verbal information, intellectual... to terrain features. The ability to provide a clear verbal description of a unique feature is a learned task that may be separate from the... and experts differently. The process of verbally encoding information on location and providing this description may detract from the primary task of
NASA Astrophysics Data System (ADS)
Omidvari, Negar; Schulz, Volkmar
2015-06-01
This paper evaluates the performance of a new type of PET detectors called sensitivity encoded silicon photomultiplier (SeSP), which allows a direct coupling of small-pitch crystal arrays to the detector with a reduction in the number of readout channels. Four SeSP devices with two separate encoding schemes of 1D and 2D were investigated in this study. Furthermore, both encoding schemes were manufactured in two different sizes of 4 × 4 mm² and 7.73 × 7.9 mm², in order to investigate the effect of size on detector parameters. All devices were coupled to LYSO crystal arrays with 1 mm pitch size and 10 mm height, with optical isolation between crystals. The characterization was done for the key parameters of crystal identification, energy resolution, and time resolution as a function of triggering threshold and over-voltage (OV). Position information was achieved using the center of gravity (CoG) algorithm and a least squares approach (LSQA) in combination with a mean light matrix around the photo-peak. The positioning results proved the capability of all four SeSP devices in precisely identifying all crystals coupled to the sensors. Energy resolution was measured at different bias voltages, varying from 12% to 18% (FWHM), and paired coincidence time resolution (pCTR) of 384 ps to 1.1 ns was obtained for different SeSP devices at about 18 °C room temperature. However, the best time resolution was achieved at the highest over-voltage, resulting in a noise ratio of 99.08%.
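The center-of-gravity positioning mentioned above is, in essence, a signal-weighted average of channel coordinates, which is then snapped to the nearest crystal in the array. A minimal sketch with invented channel signals (not SeSP data):

```python
def center_of_gravity(signals):
    """signals: {(x, y): amplitude} over sensor channels -> (cx, cy)."""
    total = sum(signals.values())
    cx = sum(x * a for (x, y), a in signals.items()) / total
    cy = sum(y * a for (x, y), a in signals.items()) / total
    return cx, cy

# Invented light distribution peaked near channel (2, 1)
signals = {(1, 1): 10.0, (2, 1): 60.0, (3, 1): 20.0, (2, 2): 10.0}
cx, cy = center_of_gravity(signals)
crystal = (round(cx), round(cy))      # snap to nearest crystal position
```

CoG is biased toward the detector center for events near the edges, which is one reason a least-squares fit against a measured mean light matrix (as in the paper) can identify edge crystals more reliably.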
Minding the Gap: Narrative Descriptions about Mental States Attenuate Parochial Empathy
Bruneau, Emile G.; Cikara, Mina; Saxe, Rebecca
2015-01-01
In three experiments, we examine parochial empathy (feeling more empathy for in-group than out-group members) across novel group boundaries, and test whether we can mitigate parochial empathy with brief narrative descriptions. In the absence of individuating information, participants consistently report more empathy for members of their own assigned group than a competitive out-group. However, individualized descriptions of in-group and out-group targets significantly reduce parochial empathy by interfering with encoding of targets’ group membership. Finally, the descriptions that most effectively decrease parochial empathy are those that describe targets’ mental states. These results support the role of individuating information in ameliorating parochial empathy, suggest a mechanism for their action, and show that descriptions emphasizing targets’ mental states are particularly effective. PMID:26505194
Fils, D.; Cervato, C.; Reed, J.; Diver, P.; Tang, X.; Bohling, G.; Greer, D.
2009-01-01
CHRONOS's purpose is to transform Earth history research by seamlessly integrating stratigraphic databases and tools into a virtual on-line stratigraphic record. In this paper, we describe the various components of CHRONOS's distributed data system, including the encoding of semantic and descriptive data into a service-based architecture. We give examples of how we have integrated well-tested resources available from the open-source and geoinformatic communities, such as the GeoSciML schema and the simple knowledge organization system (SKOS), into the services-oriented architecture to encode timescale and phylogenetic synonymy data. We also describe on-going efforts to use geospatially enhanced data syndication and to informally include semantic information by embedding it directly into the XHTML Document Object Model (DOM). This allows machine-discoverable descriptive data such as licensing and citation information to be incorporated directly into data sets retrieved by users. © 2008 Elsevier Ltd. All rights reserved.
Towards Data Repository Interoperability: The Data Conservancy Data Packaging Specification
NASA Astrophysics Data System (ADS)
DiLauro, T.; Duerr, R.; Thessen, A. E.; Rippin, M.; Pralle, B.; Choudhury, G. S.
2013-12-01
A modern data archive must support a variety of functions and services for a broad set of stakeholders over a variety of content. Data producers need to deposit this content; data consumers need to find and access it; journal publishers need to link to and from it; funders need to ensure that it is protected and its value enhanced; research institutions need to track it; and the archive itself needs to manage and preserve it. But there is not an optimal information model that supports all of these tasks. The attributes needed to manage format transformations for long-term preservation are different from, for example, those needed to understand provenance relationships among the various entities modeled in the archive. Exposing all possible properties to every function burdens users and makes it difficult to maintain a separation of concerns among the functional components. The Data Conservancy Software (DCS) manages these overlapping information needs by defining strict interfaces between components and providing mappers between the layers of the architecture. Still, work remains to make deposit more intuitive. Currently, depositing content into a DCS instance requires either very simple objects (e.g., one file equals one data item), significant manual effort, or detailed knowledge of DCS-internal data model serializations. And if one were to deposit that content into another type of archive, it would be necessary to repeat this effort. To allow data producers and consumers to interact with data in a more natural manner, the Data Conservancy[1] is developing a packaging approach that eases this burden and allows a semantic overlay atop the directory/folder and file metaphor that is more familiar. The standards-based packaging scheme augments the payload and validation capabilities of Bagit[2] with the relationship and resource description capabilities of the Open Archives Initiative (OAI) Object Reuse and Exchange (ORE)[3] model. 
In the absence of the ORE resource description, the DCS instance will be able to provide default mappings for the directories and files within the package payload and enable support for deposited content at a lower level of service. Internally, the DCS will map these hybrid package serializations to its own internal business objects and their properties. Thus, this approach is highly extensible, as other packaging formats could be mapped in a similar manner. In addition, this scheme supports establishing the fixity of the payload while still supporting update of the semantic overlay data. This allows a data producer with scarce resources or an archivist who acquires a researcher's data to package the data for deposit with the intention of augmenting the resource description in the future. The Data Conservancy is partnering with the Sustainable Environment Actionable Data[4] project to test the interoperability of this new packaging mechanism. [1] Data Conservancy: http://dataconservancy.org/ [2] BagIt: https://datatracker.ietf.org/doc/draft-kunze-bagit/ [3] OAI-ORE: http://www.openarchives.org/ore/1.0/ [4] SEAD: http://sead-data.net/
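The fixity-plus-overlay idea described above can be sketched in a few lines of Python: a BagIt-style bag stores its payload under data/ alongside a SHA-256 manifest, so the payload's fixity can be verified even if descriptive metadata is augmented later. This is a minimal sketch of the BagIt packaging pattern; the helper names are ours, not Data Conservancy APIs.

```python
import hashlib
import os
import tempfile

def make_bag(bag_dir, payload):
    """Write a minimal BagIt-style bag: payload files under data/
    plus a manifest of SHA-256 checksums (illustrative helper,
    not the Data Conservancy implementation)."""
    data_dir = os.path.join(bag_dir, "data")
    os.makedirs(data_dir, exist_ok=True)
    lines = []
    for name, content in payload.items():
        with open(os.path.join(data_dir, name), "wb") as f:
            f.write(content)
        digest = hashlib.sha256(content).hexdigest()
        lines.append(f"{digest}  data/{name}")
    with open(os.path.join(bag_dir, "manifest-sha256.txt"), "w") as f:
        f.write("\n".join(lines) + "\n")
    with open(os.path.join(bag_dir, "bagit.txt"), "w") as f:
        f.write("BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")

def verify_bag(bag_dir):
    """Recompute payload checksums and compare against the manifest."""
    ok = True
    with open(os.path.join(bag_dir, "manifest-sha256.txt")) as f:
        for line in f:
            digest, rel = line.strip().split("  ", 1)
            with open(os.path.join(bag_dir, rel), "rb") as pf:
                ok = ok and hashlib.sha256(pf.read()).hexdigest() == digest
    return ok

bag = tempfile.mkdtemp()
make_bag(bag, {"obs.csv": b"t,flux\n0,1.2\n"})
print(verify_bag(bag))  # True
```

Because the manifest covers only data/, an ORE resource map stored as a tag file could be rewritten later without invalidating the payload's fixity, which is the update behavior the abstract describes.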
An XML-based Generic Tool for Information Retrieval in Solar Databases
NASA Astrophysics Data System (ADS)
Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain
This paper presents the current architecture of the `Solar Web Project', now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed in Java, with XML used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
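The pattern of describing an archive access system in an external XML file can be illustrated with Python's standard xml.etree module. The element and attribute names below are invented for illustration; they are not the project's actual configuration schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical external description of one archive access system,
# in the spirit of the Solar Web Project's XML configuration files.
CONFIG = """
<archive name="MEDOC-SOHO" type="sql">
  <connection url="jdbc:postgresql://medoc.example/solar"/>
  <catalog table="observations">
    <field name="instrument" searchable="true"/>
    <field name="start_time" searchable="true"/>
    <field name="wavelength" searchable="false"/>
  </catalog>
</archive>
"""

root = ET.fromstring(CONFIG)
# A generic server can discover which fields to expose in the
# search interface without any hard-coded database knowledge.
searchable = [f.get("name")
              for f in root.iter("field")
              if f.get("searchable") == "true"]
print(root.get("name"), searchable)
```

Adding a new mission then means writing a new XML description rather than changing server code, which is the low-maintenance extensibility the abstract claims.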
2011-01-01
Background The Biodiversity Heritage Library (BHL) is a large digital archive of legacy biological literature, comprising over 31 million pages scanned from books, monographs, and journals. During the digitisation process basic metadata about the scanned items is recorded, but not article-level metadata. Given that the article is the standard unit of citation, this makes it difficult to locate cited literature in BHL. Adding the ability to easily find articles in BHL would greatly enhance the value of the archive. Description A service was developed to locate articles in BHL based on matching article metadata to BHL metadata using approximate string matching, regular expressions, and string alignment. This article locating service is exposed as a standard OpenURL resolver on the BioStor web site http://biostor.org/openurl/. This resolver can be used on the web, or called by bibliographic tools that support OpenURL. Conclusions BioStor provides tools for extracting, annotating, and visualising articles from the Biodiversity Heritage Library. BioStor is available from http://biostor.org/. PMID:21605356
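The core of such an article-locating service is approximate string matching between citation metadata and the metadata of scanned items. A toy version using Python's standard difflib (a stand-in for BioStor's actual matching pipeline, which also uses regular expressions and string alignment) might look like:

```python
from difflib import SequenceMatcher

def best_match(citation_title, scanned_titles, threshold=0.75):
    """Return the scanned title most similar to the cited article
    title, or None if nothing clears the similarity threshold.
    The threshold value here is illustrative."""
    scored = [(SequenceMatcher(None, citation_title.lower(), t.lower()).ratio(), t)
              for t in scanned_titles]
    score, title = max(scored)
    return title if score >= threshold else None

items = ["On the birds of New Guinea, part II",
         "Revision of the genus Papilio"]
# Punctuation and numbering differences are tolerated.
print(best_match("On the birds of New-Guinea. Part 2", items))
```

An OpenURL resolver wraps exactly this kind of lookup: it receives citation fields as query parameters and redirects to the matched pages.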
High bit rate convolutional channel encoder/decoder
NASA Technical Reports Server (NTRS)
1977-01-01
A detailed description of the design approach and tradeoffs encountered during the development of the 50 MBPS decoder system is presented. A functional analysis of each of the major logical functions is given, and the system's major components are listed.
A hierarchical SVG image abstraction layer for medical imaging
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer
2010-03-01
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high level descriptions or classifications. Our representation can also natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, as a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.
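A minimal sketch of the idea: image-derived features and clinical labels are stored as an extra group inside an SVG document, where any XML tool can query them. The element and attribute names below are illustrative, not the authors' actual schema.

```python
import xml.etree.ElementTree as ET

# Build an SVG with an abstraction layer holding a segmented
# region and its machine-assigned attributes (names are ours).
svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                 width="512", height="512")
layer = ET.SubElement(svg, "g", id="abstraction-layer")
region = ET.SubElement(layer, "path", d="M10 10 L100 10 L100 100 Z")
region.set("data-texture", "homogeneous")
region.set("data-classification", "lesion-candidate")

doc = ET.tostring(svg, encoding="unicode")
print("lesion-candidate" in doc)  # True
```

Because the layer is ordinary XML, an XQuery such as one selecting all paths with a given data-classification could drive retrieval, while a browser renders the same document for review.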
Predictive minimum description length principle approach to inferring gene regulatory networks.
Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping
2011-01-01
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved precision when compared to the existing MDL algorithm.
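The mutual information the algorithm thresholds can be computed directly from discretized expression profiles. The sketch below shows the plug-in MI estimate in bits, assuming the profiles have already been discretized; it does not implement CMI or the PMDL threshold selection itself.

```python
from math import log2
from collections import Counter

def mutual_information(x, y):
    """Plug-in MI estimate between two discretized expression
    profiles, in bits: sum over joint states of
    p(a,b) * log2(p(a,b) / (p(a) * p(b)))."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Perfectly correlated binary profiles share exactly 1 bit.
g1 = [0, 0, 1, 1, 0, 1, 0, 1]
g2 = [0, 0, 1, 1, 0, 1, 0, 1]
print(mutual_information(g1, g2))  # 1.0
```

An edge between two genes would then be kept only if this quantity exceeds the threshold that the PMDL step selects.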
Policy Windows, Public Opinion, and Policy Ideas: The Evolution of No Child Left Behind
ERIC Educational Resources Information Center
Jaiani, Vasil; Whitford, Andrew B.
2011-01-01
Purpose: The purpose of this paper is to examine the policy process that led to the passage of the No Child Left Behind (NCLB) Act in the United States and the Bush Administration's role in this process. Design/methodology/approach: The research design is historical and archival. A description of the NCLB Act is given and the major provisions and…
The DMSP Space Weather Sensors Data Archive Listing (1982-2013) and File Formats Descriptions
2014-08-01
Space environment sensors covered by the archive include the auroral particle spectrometer (SSJ), the fluxgate magnetometer (SSM), the topside thermal plasma monitor (SSIES), the ultraviolet limb imager (SSULI), and the ultraviolet spectrographic imager (SSUSI). A cited instrument paper describes the Fluxgate Magnetometer (SSM) for the Defense Meteorological Satellite Program (DMSP) Block 5D-2, Flight 7 (AFGL-TR-84-0225; ADA155229).
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions.
Then, the CDAWEB and SPDF data repositories were queried on a nightly basis and the CDF file lists were checked for any changes, such as the occurrence of new, modified, or deleted files, or the addition of new or the deletion of old data products. Next, ADAPT routines analyzed the query results and issued updates to the metadata stored in the UCLA CDAWEB and SPDF metadata registries. In this way, the SPASE metadata registries generated by ADAPT can be relied on to provide up-to-date and complete access to Heliophysics CDF data resources on a daily basis.
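The nightly change detection amounts to diffing two snapshots of the server's file listing. A minimal sketch, assuming each snapshot maps file paths to a checksum or modification stamp (the function name is ours, not ADAPT's):

```python
def diff_listing(previous, current):
    """Compare two {path: checksum} snapshots of a data server and
    report what a nightly pass would need to re-register."""
    new = sorted(set(current) - set(previous))
    deleted = sorted(set(previous) - set(current))
    modified = sorted(p for p in set(previous) & set(current)
                      if previous[p] != current[p])
    return {"new": new, "modified": modified, "deleted": deleted}

last_night = {"ac/ac_h0.cdf": "a1", "wi/wi_k0.cdf": "b2"}
tonight = {"ac/ac_h0.cdf": "a9", "po/po_h1.cdf": "c3"}
report = diff_listing(last_night, tonight)
print(report)
```

Each entry in the report would then trigger the corresponding SPASE Granule to be created, updated, or retired in the metadata registry.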
NASA Technical Reports Server (NTRS)
Feist, B.; Bleacher, J. E.; Petro, N. E.; Niles, P. B.
2018-01-01
During the Apollo exploration of the lunar surface, thousands of still images, 16 mm videos, TV footage, samples, and surface experiments were captured and collected. In addition, observations and descriptions of what was observed were radioed to Mission Control as part of standard communications and subsequently transcribed. The archive of this material represents perhaps the best recorded set of geologic field campaigns and will serve as the example of how to conduct field work on other planetary bodies for decades to come. However, that archive of material exists in disparate locations and formats with varying levels of completeness, making it difficult to cross-reference. While video and audio exist for the missions, they are not time synchronized, and images taken during the missions are not time or location tagged. Sample data, while robust, are not easily available in the context of where the samples were collected, and are not connected to the astronauts' descriptions of them or to the video footage of their collection (if available). A more than five year undertaking to reconstruct and reconcile the Apollo 17 mission archive, from launch through splashdown, has generated an integrated record of the entire mission, resulting in searchable, synchronized image, voice, and video data, with geologic context provided at the time each sample was collected. Through www.apollo17.org the documentation of the field investigation conducted by the Apollo 17 crew is presented in chronologic sequence, with additional context provided by high-resolution Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images and a corresponding digital terrain model (DTM) of the Taurus-Littrow Valley.
Imaged document information location and extraction using an optical correlator
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-12-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
NASA Technical Reports Server (NTRS)
1974-01-01
The feasibility is evaluated of an evolutionary development for use of a single-axis gimbal star tracker from prior two-axis gimbal star tracker based system applications. Detailed evaluation of the star tracker gimbal encoder is considered. A brief system description is given including the aspects of tracker evolution and encoder evaluation. System analysis includes evaluation of star availability and mounting constraints for the geosynchronous orbit application, and a covariance simulation analysis to evaluate performance potential. Star availability and covariance analysis digital computer programs are included.
NASA Astrophysics Data System (ADS)
Moore, R.; Faerman, M.; Minster, J.; Day, S. M.; Ely, G.
2003-12-01
A community digital library provides support for ingestion, organization, description, preservation, and access of digital entities. The technologies that traditionally provide these capabilities are digital libraries (ingestion, organization, description), persistent archives (preservation) and data grids (access). We present a design for the SCEC community digital library that incorporates aspects of all three systems. Multiple groups have created integrated environments that sustain large-scale scientific data collections. By examining these projects, the following stages of implementation can be identified: (1) definition of semantic terms to associate with relevant information, including definition of uniform content descriptors to describe physical quantities relevant to the scientific discipline and creation of concept spaces to define how the uniform content descriptors are logically related; (2) organization of digital entities into logical collections that make it simple to browse and manage related material; (3) definition of services that are used to access and manipulate material in the collection; and (4) creation of a preservation environment for the long-term management of the collection. Each community is faced with heterogeneity that is introduced when data is distributed across multiple sites, when multiple sets of collection semantics are used, and/or when multiple scientific sub-disciplines are federated. We will present the relevant standards that simplify the implementation of the SCEC community library, the resource requirements for different types of data sets that drive the implementation, and the digital library processes that the SCEC community library will support.
The SCEC community library can be viewed as the set of processing steps that are required to build the appropriate SCEC reference data sets (SCEC approved encoding format, SCEC approved descriptive metadata, SCEC approved collection organization, and SCEC managed storage location). Each digital entity that is ingested into the SCEC community library is processed and validated for conformance to SCEC standards. These steps generate provenance, descriptive, administrative, structural, and behavioral metadata. Using data grid technology, the descriptive metadata can be registered onto a logical name space that is controlled and managed by the SCEC digital library. A version of the SCEC community digital library is being implemented in the Storage Resource Broker. The SRB system provides almost all the features enumerated above. The peer-to-peer federation of metadata catalogs is planned for release in September, 2003. The SRB system is in production use in multiple projects, from high-energy physics, to astronomy, to earth systems science, to bio-informatics. The SCEC community library will be based on the definition of standard metadata attributes, the creation of logical collections within the SRB, the creation of access services, and the demonstration of a preservation environment. The use of the SRB for the SCEC digital library will sustain the expected collection size and collection capabilities.
Opinion: Why we need a centralized repository for isotopic data
Pauli, Jonathan N.; Newsome, Seth D.; Cook, Joseph A.; Harrod, Chris; Steffan, Shawn A.; Baker, Christopher J. O.; Ben-David, Merav; Bloom, David; Bowen, Gabriel J.; Cerling, Thure E.; Cicero, Carla; Cook, Craig; Dohm, Michelle; Dharampal, Prarthana S.; Graves, Gary; Gropp, Robert; Hobson, Keith A.; Jordan, Chris; MacFadden, Bruce; Pilaar Birch, Suzanne; Poelen, Jorrit; Ratnasingham, Sujeevan; Russell, Laura; Stricker, Craig A.; Uhen, Mark D.; Yarnes, Christopher T.; Hayden, Brian
2017-01-01
Stable isotopes encode and integrate the origin of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines (1, 2). Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, has created a scientific field that is growing exponentially and generating data at a rate paralleling the explosive rise of DNA sequencing and genomics (3). Centralized data repositories, such as GenBank, have become increasingly important as a means for archiving information, and "Big Data" analytics of these resources are revolutionizing science and everyday life.
The human-induced pluripotent stem cell initiative—data resources for cellular genetics
Streeter, Ian; Harrison, Peter W.; Faulconbridge, Adam; Flicek, Paul; Parkinson, Helen; Clarke, Laura
2017-01-01
The Human Induced Pluripotent Stem Cell Initiative (HipSci) is establishing a large catalogue of human iPSC lines, arguably the most well-characterized collection to date. The HipSci portal enables researchers to choose the right cell line for their experiment, and makes HipSci's rich catalogue of assay data easy to discover and reuse. Each cell line has genomic, transcriptomic, proteomic and cellular phenotyping data. Data are deposited in the appropriate EMBL-EBI archives, including the European Nucleotide Archive (ENA), European Genome-phenome Archive (EGA), ArrayExpress and PRoteomics IDEntifications (PRIDE) databases. The project will make 500 cell lines from healthy individuals, and from 150 patients with rare genetic diseases; these will be available through the European Collection of Authenticated Cell Cultures (ECACC). As of August 2016, 238 cell lines are available for purchase. Project data is presented through the HipSci data portal (http://www.hipsci.org/lines) and is downloadable from the associated FTP site (ftp://ftp.hipsci.ebi.ac.uk/vol1/ftp). The data portal presents a summary matrix of the HipSci cell lines, showing available data types. Each line has its own page containing descriptive metadata, quality information, and links to archived assay data. Analysis results are also available in a Track Hub, allowing visualization in the context of public genomic annotations (http://www.hipsci.org/data/trackhubs). PMID:27733501
Internationalization of healthcare applications: a generic approach for PACS workstations.
Hussein, R; Engelmann, U; Schroeter, A; Meinzer, H P
2004-01-01
Along with the revolution of information technology and the increasing use of computers world-wide, software providers recognize the emerging need for internationalized, or global, software applications. The importance of internationalization comes from its benefits such as addressing a broader audience, making the software applications more accessible, easier to use, more flexible to support and providing users with more consistent information. In addition, some governmental agencies, e.g., in Spain, accept only fully localized software. Although the healthcare communication standards, namely, Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7) support wide areas of internationalization, most of the implementers are still protective about supporting the complex languages. This paper describes a generic internationalization approach for Picture Archiving and Communication System (PACS) workstations. The Unicode standard is used to internationalize the application user interface. An encoding converter was developed to encode and decode the data between the rendering module (in Unicode encoding) and the DICOM data (in ISO 8859 encoding). An integration gateway was required to integrate the internationalized PACS components with the different PACS installations. To introduce a pragmatic example, the described approach was applied to the CHILI PACS workstation. The approach has enabled the application to handle the different internationalization aspects transparently, such as supporting complex languages, switching between different languages at runtime, and supporting multilingual clinical reports. In the healthcare enterprises, internationalized applications play an essential role in supporting a seamless flow of information between the heterogeneous multivendor information systems.
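The encoding converter described above sits between a Unicode rendering module and legacy ISO 8859 DICOM data. A minimal sketch of that round trip in Python (real converters also negotiate the DICOM Specific Character Set with the archive; the function names here are ours):

```python
def to_dicom_bytes(text, charset="iso-8859-1"):
    """Encode a Unicode UI string for storage in a legacy DICOM
    field; characters outside the target charset are replaced so
    the write never fails."""
    return text.encode(charset, errors="replace")

def from_dicom_bytes(raw, charset="iso-8859-1"):
    """Decode legacy DICOM bytes back to Unicode for rendering."""
    return raw.decode(charset)

name = "Müller^José"
raw = to_dicom_bytes(name)
print(from_dicom_bytes(raw))  # Müller^José
```

Characters from complex scripts that ISO 8859-1 cannot represent are lost in this sketch (replaced with "?"), which is exactly why the paper's full approach keeps Unicode end to end and converts only at the DICOM boundary.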
Does language guide event perception? Evidence from eye movements
Papafragou, Anna; Hulbert, Justin; Trueswell, John
2008-01-01
Languages differ in how they encode motion. When describing bounded motion, English speakers typically use verbs that convey information about manner (e.g., slide, skip, walk) rather than path (e.g., approach, ascend), whereas Greek speakers do the opposite. We investigated whether this strong cross-language difference influences how people allocate attention during motion perception. We compared eye movements from Greek and English speakers as they viewed motion events while (a) preparing verbal descriptions, or (b) memorizing the events. During the verbal description task, speakers’ eyes rapidly focused on the event components typically encoded in their native language, generating significant cross-language differences even during the first second of motion onset. However, when freely inspecting ongoing events, as in the memorization task, people allocated attention similarly regardless of the language they speak. Differences between language groups arose only after the motion stopped, such that participants spontaneously studied those aspects of the scene that their language does not routinely encode in verbs. These findings offer a novel perspective on the relation between language and perceptual/cognitive processes. They indicate that attention allocation during event perception is not affected by the perceiver’s native language; effects of language arise only when linguistic forms are recruited to achieve the task, such as when committing facts to memory. PMID:18395705
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
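The error-resilience argument hinges on fixed-rate coding: every sample consumes the same number of bits, so a channel error corrupts one codeword without desynchronizing the rest of the stream. The sketch below is a plain uniform scalar quantizer, far simpler than SVQ itself, but it illustrates the fixed-rate property.

```python
def quantize(samples, bits, lo=0.0, hi=255.0):
    """Fixed-rate uniform scalar quantizer: each sample maps to a
    'bits'-wide index; reconstruction uses the cell midpoint.
    Unlike variable-length codes, a bit error here cannot
    propagate beyond one sample."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    indices = [min(levels - 1, int((s - lo) / step)) for s in samples]
    recon = [lo + (i + 0.5) * step for i in indices]
    return indices, recon

idx, rec = quantize([0.0, 100.0, 200.0, 255.0], bits=3)
print(idx)  # [0, 3, 6, 7]
```

SVQ improves on this by jointly constraining a vector of such scalar indices, approaching entropy-constrained performance while keeping the fixed overall rate.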
Description of data on the Nimbus 7 LIMS map archive tape: Water vapor and nitrogen dioxide
NASA Technical Reports Server (NTRS)
Haggard, Kenneth V.; Marshall, B. T.; Kurzeja, Robert J.; Remsberg, Ellis E.; Russell, James M., III
1988-01-01
Described is the process by which the analysis of the Limb Infrared Monitor of the Stratosphere (LIMS) experiment data were used to produce estimates of synoptic maps of water vapor and nitrogen dioxide. In addition to a detailed description of the analysis procedure, also discussed are several interesting features in the data which are used to demonstrate how the analysis procedure produced the final maps and how one can estimate the uncertainties in the maps. In addition, features in the analysis are noted that would influence how one might use, or interpret, the results. These include subjects such as smoothing and the interpretation of wave components.
Descriptive Biomarkers for Assessing Breast Cancer Risk
2011-10-01
Studies are being conducted with the archived milk samples. Subject terms: breast milk; recruitment; promoter hypermethylation; RASSF1; GSTP1. A noted difference between the two populations is the greater number of outlier scores in the Biopsy Group, particularly for GSTP1; the percentage of outlier scores is significantly greater in the Biopsy Group than in the Reference Group for GSTP1.
THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Shrestha, Biva; Palanisamy, Giri
Nobody is better suited to describe data than the scientist who created it. This description of a dataset is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes it portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed, or save the file to the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files.
As an example, an NGEE Arctic scientist uses OME to register datasets with the NGEE data archive, allowing the archive to publish these datasets via a data search portal (http://ngee.ornl.gov/data). The highly descriptive metadata created using OME allow the archive to enable advanced data search options using keyword, geo-spatial, temporal and ontology filters. Similarly, ARM OME allows scientists or principal investigators (PIs) to submit their data products to the ARM data archive. How would OME help big data centers like the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC)? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the Earth Science Data and Information System (ESDIS) Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological, geological, and chemical components of the Earth's environment. Typically, the data produced, archived and analyzed are at a scale of multiple petabytes, which makes discoverability of the data very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for and equally difficult to use and understand the data. OME will allow data centers like NGEE and the ORNL DAAC to produce meaningful, high-quality, standards-based, descriptive information about their data products, in turn helping with data discoverability and interoperability. Useful Links: USGS OME: http://mercury.ornl.gov/OME/ NGEE OME: http://ngee-arctic.ornl.gov/ngeemetadata/ ARM OME: http://archive2.ornl.gov/armome/ Contact: Ranjeet Devarakonda (devarakondar@ornl.gov) References: [1] Federal Geographic Data Committee. Content standard for digital geospatial metadata.
Federal Geographic Data Committee, 1998. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94. [3] Wilson, B. E., Palanisamy, G., Devarakonda, R., Rhyne, B. T., Lindsley, C., & Green, J. (2010). Mercury Toolset for Spatiotemporal Metadata. [4] Pouchard, L. C., Branstetter, M. L., Cook, R. B., Devarakonda, R., Green, J., Palanisamy, G., ... & Noy, N. F. (2013). A Linked Science investigation: enhancing climate change data discovery with semantic technologies. Earth Science Informatics, 6(3), 175-185.
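The form-to-XML workflow described above (researcher fills a Web form, the editor serializes it to an XML metadata record) can be illustrated with a minimal sketch using Python's standard library. The element names below loosely echo the FGDC content standard cited in [1] but are illustrative only, not OME's actual schema; the field names and sample values are likewise hypothetical.

```python
import xml.etree.ElementTree as ET

def build_metadata_record(fields):
    """Serialize a dict of form fields into a minimal FGDC-style XML record.

    Schema is illustrative, not the actual OME output format.
    """
    root = ET.Element("metadata")
    idinfo = ET.SubElement(root, "idinfo")
    ET.SubElement(idinfo, "title").text = fields["title"]
    ET.SubElement(idinfo, "originator").text = fields["investigator"]
    keywords = ET.SubElement(idinfo, "keywords")
    for kw in fields.get("keywords", []):
        ET.SubElement(keywords, "themekey").text = kw
    timeperd = ET.SubElement(idinfo, "timeperd")
    ET.SubElement(timeperd, "begdate").text = fields["start_date"]
    ET.SubElement(timeperd, "enddate").text = fields["end_date"]
    return ET.tostring(root, encoding="unicode")

# Hypothetical form submission:
record = build_metadata_record({
    "title": "Soil temperature at an NGEE Arctic site",
    "investigator": "J. Doe",
    "keywords": ["soil", "temperature"],
    "start_date": "20120101",
    "end_date": "20121231",
})
print(record)
```

The same record dict could just as easily be written to a server-side file or offered for download, matching the two delivery options the abstract mentions.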
ERIC Educational Resources Information Center
Casazza, Ornella; Franchi, Paolo
1985-01-01
Description of encoding of art works and digitization of paintings to preserve and restore them reviews experiments which used chromatic selection and abstraction as a painting restoration method. This method utilizes the numeric processing resulting from digitization to restore a painting and computer simulation to shorten the restoration…
Dainer-Best, Justin; Lee, Hae Yeon; Shumake, Jason D; Yeager, David S; Beevers, Christopher G
2018-06-07
Although the self-referent encoding task (SRET) is commonly used to measure self-referent cognition in depression, many different SRET metrics can be obtained. The current study used best subsets regression with cross-validation and independent test samples to identify the SRET metrics most reliably associated with depression symptoms in three large samples: a college student sample (n = 572), a sample of adults from Amazon Mechanical Turk (n = 293), and an adolescent sample from a school field study (n = 408). Across all 3 samples, SRET metrics associated most strongly with depression severity included number of words endorsed as self-descriptive and rate of accumulation of information required to decide whether adjectives were self-descriptive (i.e., drift rate). These metrics had strong intratask and split-half reliability and high test-retest reliability across a 1-week period. Recall of SRET stimuli and traditional reaction time (RT) metrics were not robustly associated with depression severity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A
2006-03-01
This study was designed to differentiate between structural description and bias accounts of performance in the possible/impossible object-decision test. Two event-related potential (ERP) studies examined how the visual system processes structurally possible and impossible objects. Specifically, the authors investigated the effects of object repetition on a series of early posterior components during structural (Experiment 1) and functional (Experiment 2) encoding and the relationship of these effects to behavioral measures of priming. In both experiments, the authors found repetition enhancement of the posterior N1 and N2 for possible objects only. In addition, the magnitude of the N1 repetition effect for possible objects was correlated with priming for possible objects. Although the behavioral results were more ambiguous, these ERP results fail to support bias models that hold that both possible and impossible objects are processed similarly in the visual system. Instead, they support the view that priming is supported by a structural description system that encodes the global 3-dimensional structure of an object.
Cluster Ion Spectrometry (CIS) Data Archiving in the CAA
NASA Astrophysics Data System (ADS)
Dandouras, I. S.; Barthe, A.; Penou, E.; Brunato, S.; Reme, H.; Kistler, L. M.; Blagau, A.; Facsko, G.; Kronberg, E.; Laakso, H. E.
2009-12-01
The Cluster Active Archive (CAA) aims at preserving the four Cluster spacecraft data, so that they are usable in the long-term by the scientific community as well as by the instrument team PIs and Co-Is. This implies that the data are filed together with the descriptive and documentary elements making it possible to select and interpret them. The CIS (Cluster Ion Spectrometry) experiment is a comprehensive ionic plasma spectrometry package onboard the four Cluster spacecraft, capable of obtaining full three-dimensional ion distributions (about 0 to 40 keV/e) with a time resolution of one spacecraft spin (4 sec) and with mass-per-charge composition determination. The CIS package consists of two different instruments, a Hot Ion Analyser (HIA) and a time-of-flight ion Composition Distribution Function (CODIF) analyser. For the archival of the CIS data a multi-level approach has been adopted. The CAA archival includes processed raw data (Level 1 data), moments of the ion distribution functions (Level 2 data), and calibrated high-resolution data in a variety of physical units (Level 3 data). The latter are 3-D ion distribution functions and 2-D pitch-angle distributions. In addition, a software package has been developed to allow the CAA user to interactively calculate partial or total moments of the ion distributions. Instrument cross-calibration has been an important activity in preparing the data for archival. The CIS data archive includes also experiment documentation, graphical products for browsing through the data, and data caveats. In addition, data quality indexes are under preparation, to help the user. Given the complexity of an ion spectrometer, and the variety of its operational modes, each one being optimised for a different magnetospheric region or measurement objective, consultation of the data caveats by the end user will always be a necessary step in the data analysis.
Between Oais and Agile a Dynamic Data Management Approach
NASA Astrophysics Data System (ADS)
Bennett, V. L.; Conway, E. A.; Waterfall, A. M.; Pepler, S.
2015-12-01
In this paper we describe an approach to the integration of existing archival activities which lies between compliance with the more rigid OAIS/TRAC standards and a more flexible "Agile" approach to the curation and preservation of Earth Observation data. We provide a high-level overview of existing practice and discuss how these procedures can be extended and supported through the description of preservation state, the aim being to facilitate the dynamic, controlled management of scientific data through its lifecycle. While processes are considered, they are not statically defined but rather driven by human interactions in the form of risk management/review procedures that produce actionable plans responsive to change. We then describe the feasibility testing of extended risk management and planning procedures which integrate current practices. This was done through the CEDA Archival Format Audit, which inspected British Atmospheric Data Centre and NERC Earth Observation Data Centre archival holdings. These holdings are extensive, comprising around 2 petabytes of data and 137 million individual files, which were analysed and characterised in terms of format-based risk. We are then able to present an overview of the format-based risk burden faced by a large-scale archive attempting to maintain the usability of heterogeneous environmental data sets. We continue by presenting a dynamic data management information model and discuss the following core model entities and their relationships: Aspirational entities, which include Data Entity definitions and their associated Preservation Objectives. Risk entities, which act as drivers for change within the data lifecycle; these include Acquisitional Risks, Technical Risks, Strategic Risks and External Risks. Plan entities, which detail the actions to bring about change within an archive.
These include Acquisition Plans, Preservation Plans and Monitoring Plans, which support responsive interactions with the community. The Result entities describe the outcomes of the plans; these include Acquisitions, Mitigations and Accepted Risks, with risk acceptance permitting imperfect but functional solutions that can be realistically supported within an archive's resource levels.
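The entity relationships sketched in the abstract (data entities with preservation objectives, risks as drivers for change, and risk acceptance as a terminal outcome) can be mocked up as a small data model. This is a speculative sketch of one way such a model might look in code; the class and field names follow the abstract's terminology, but the implementation is not from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    """A driver for change within the data lifecycle (e.g. a format-based risk)."""
    description: str
    category: str           # e.g. "acquisitional", "technical", "strategic", "external"
    accepted: bool = False  # risk acceptance permits imperfect but functional solutions

@dataclass
class DataEntity:
    """An archived data set with its preservation objectives and risk register."""
    name: str
    objectives: List[str]
    risks: List[Risk] = field(default_factory=list)

    def open_risks(self) -> List[Risk]:
        """Risks still requiring a plan, i.e. neither mitigated nor accepted."""
        return [r for r in self.risks if not r.accepted]

# Hypothetical holding with one open and one accepted risk:
ds = DataEntity("airborne campaign data", ["maintain long-term usability"], [
    Risk("obsolete proprietary binary format", "technical"),
    Risk("calibration notes only on paper", "strategic", accepted=True),
])
print(len(ds.open_risks()))  # 1
```

In a fuller model, each open risk would drive a Plan entity (acquisition, preservation, or monitoring) whose Result either mitigates the risk or records its acceptance.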
Pöggeler, S
2000-06-01
In order to analyze the involvement of pheromones in cell recognition and mating in a homothallic fungus, two putative pheromone precursor genes, named ppg1 and ppg2, were isolated from a genomic library of Sordaria macrospora. The ppg1 gene is predicted to encode a precursor pheromone that is processed by a Kex2-like protease to yield a pheromone that is structurally similar to the alpha-factor of the yeast Saccharomyces cerevisiae. The ppg2 gene encodes a 24-amino-acid polypeptide that contains a putative farnesylated and carboxy methylated C-terminal cysteine residue. The sequences of the predicted pheromones display strong structural similarity to those encoded by putative pheromones of heterothallic filamentous ascomycetes. Both genes are expressed during the life cycle of S. macrospora. This is the first description of pheromone precursor genes encoded by a homothallic fungus. Southern-hybridization experiments indicated that ppg1 and ppg2 homologues are also present in other homothallic ascomycetes.
NASA Astrophysics Data System (ADS)
Buxbaum, T. M.; Warnick, W. K.; Polly, B.; Breen, K. J.
2007-12-01
The ARCUS Internet Media Archive (IMA) is a collection of photos, graphics, videos, and presentations about the Arctic and Antarctic that are shared through the Internet. It provides the polar research community and the public at large with a centralized location where images and video pertaining to polar research can be browsed and retrieved for a variety of uses. The IMA currently contains almost 6,500 publicly accessible photos, including 4,000 photos from the National Science Foundation (NSF) funded Teachers and Researchers Exploring and Collaborating (TREC) program, an educational research experience in which K-12 teachers participate in arctic research as a pathway to improving science education. The IMA is also the future home of all electronic media from the NSF funded PolarTREC program, a continuation of TREC that now takes place in both the Arctic and Antarctic. The IMA includes 450 video files, 270 audio files, nearly 100 graphics and logos, 28 presentations, and approximately 10,000 additional resources that are being prepared for public access. The contents of this archive are organized by file type, photographer's name, event, or by organization, with each photo or file accompanied by information on content, contributor source, and usage requirements. All the files are keyworded and all information, including file name and description, is completely searchable. ARCUS plans to continue to improve and expand the IMA with a particular focus on providing graphics depicting key arctic research results and findings as well as edited video archives of relevant scientific community meetings. To submit files or for more information and to view the ARCUS Internet Media Archive, please go to: http://media.arcus.org or email photo@arcus.org.
Superfund Public Information System (SPIS), January 1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1999-01-01
The Superfund Public Information System (SPIS) on CD-ROM contains Superfund data for the United States Environmental Protection Agency. The Superfund data is a collection of three databases: Records of Decision (RODS); Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS); and Archive (NFRAP). Descriptions of these databases and CD contents are listed below. Data content: The CD contains the complete text of the official ROD documents signed and issued by EPA from fiscal years 1982--1996; 147 RODs for fiscal year 1997; and seven RODs for fiscal year 1998. The CD also contains 89 Explanation of Significant Difference (ESD) documents, as well as 48 ROD Amendments. CERCLIS and Archive (NFRAP) data is through January 19, 1999. RODS is the Records Of Decision System. RODS is used to track site clean-ups under the Superfund program to justify the type of treatment chosen at each site. RODS contains information on technology justification, site history, community participation, enforcement activities, site characteristics, scope and role of response action, and remedy. Explanations of Significant Differences (ESDs) are also available on the CD. CERCLIS is the Comprehensive Environmental Response, Compensation, and Liability Information System. It is the official repository for all Superfund site and incident data. It contains comprehensive information on hazardous waste sites, site inspections, preliminary assessments, and remedial status. The system is sponsored by the EPA's Office of Emergency and Remedial Response, Information Management Center. Archive (NFRAP) consists of hazardous waste sites that have no further remedial action planned; only basic identifying information is provided for archive sites. The sites found in the Archive database were originally in the CERCLIS database, but were removed beginning in the fall of 1995.
Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T
2002-10-01
Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and a NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.
NASA Technical Reports Server (NTRS)
Matthews, Elaine
1989-01-01
Global digital data bases on the distribution and environmental characteristics of natural wetlands, compiled by Matthews and Fung (1987), were archived for public use. These data bases were developed to evaluate the role of wetlands in the annual emission of methane from terrestrial sources. Five global 1 deg latitude by 1 deg longitude arrays are included on the archived tape. The arrays are: (1) wetland data source, (2) wetland type, (3) fractional inundation, (4) vegetation type, and (5) soil type. The first three data bases on wetland locations were published by Matthews and Fung (1987). The last two arrays contain ancillary information about these wetland locations: vegetation type is from the data of Matthews (1983) and soil type from the data of Zobler (1986). Users should consult original publications for complete discussion of the data bases. This short paper is designed only to document the tape, and briefly explain the data sets and their initial application to estimating the annual emission of methane from natural wetlands. Included is information about array characteristics such as dimensions, read formats, record lengths, blocksizes and value ranges, and descriptions and translation tables for the individual data bases.
Garsi, Jerome-Philippe; Samson, Eric; Chablais, Laetitia; Zhivin, Sergey; Niogret, Christine; Laurier, Dominique; Guseva Canu, Irina
2014-12-01
This article discusses the availability and completeness of medical data on workers from the AREVA NC Pierrelatte nuclear plant and their possible use in epidemiological research on cardiovascular and metabolic disorders related to internal exposure to uranium. We created a computer database from files on 394 eligible workers included in an ongoing nested case-control study from a larger cohort of 2897 French nuclear workers. For each worker, we collected records of previous employment, job positions, job descriptions, medical visits, and blood test results from medical history. The dataset comprises 9,471 medical examinations and 12,735 blood test results. For almost all of the parameters relevant for research on cardiovascular risk, data completeness and availability are over 90%, varying with time and improving in the latest time period. In the absence of biobanks, collecting and computerising available good-quality occupational medicine archive data constitutes a valuable alternative for epidemiological and aetiological research in occupational health. Biobanks rarely contain biological samples spanning an entire worker's career, and medical data from nuclear industry archives might make up for unavailable biomarkers that could provide information on cardiovascular and metabolic diseases.
Workflows for ingest of research data into digital archives - tests with Archivematica
NASA Astrophysics Data System (ADS)
Kirchner, I.; Bertelmann, R.; Gebauer, P.; Hasler, T.; Hirt, M.; Klump, J. F.; Peters-Kotting, W.; Rusch, B.; Ulbricht, D.
2013-12-01
Publication of research data and future re-use of measured data require the long-term preservation of digital objects. The ISO OAIS reference model defines responsibilities for long-term preservation of digital objects, and although there is software available to support preservation of digital data, there are still problems remaining to be solved. A key task in preservation is to make the datasets ready for ingest into the archive, which is called the creation of Submission Information Packages (SIPs) in the OAIS model. This includes the creation of appropriate preservation metadata. Scientists need to be trained to deal with different types of data and to heighten their awareness of quality metadata. Other problems arise during the assembly of SIPs and during ingest into the archive, because file format validators may produce conflicting output for identical data files and these conflicts are difficult to resolve automatically. Also, validation and identification tools are notorious for their poor performance. In the project EWIG, Zuse-Institute Berlin acts as an infrastructure facility, while the Institute for Meteorology at FU Berlin and the German Research Centre for Geosciences (GFZ) act as two different data producers. The aim of the project is to develop workflows for the transfer of research data into digital archives and the future re-use of data from long-term archives, with emphasis on data from the geosciences. The technical work is supplemented by interviews with data practitioners at several institutions to identify problems in digital preservation workflows, and by the development of university teaching materials to train students in the curation of research data and metadata. The free and open-source software Archivematica [1] is used as the digital preservation system. The creation and ingest of SIPs has to meet several archival standards and be compatible with the Metadata Encoding and Transmission Standard (METS).
The two data producers use different software in their workflows to test the assembly of SIPs and ingest of SIPs into the archive. GFZ Potsdam uses a combination of eSciDoc [2], panMetaDocs [3], and bagit [4] to collect research data and assemble SIPs for ingest into Archivematica, while the Institute for Meteorology at FU Berlin evaluates a variety of software solutions to describe data and publications and to generate SIPs. [1] http://www.archivematica.org [2] http://www.escidoc.org [3] http://panmetadocs.sf.net [4] http://sourceforge.net/projects/loc-xferutils/
Multiple Systems of Spatial Memory: Evidence from Described Scenes
ERIC Educational Resources Information Center
Avraamides, Marios N.; Kelly, Jonathan W.
2010-01-01
Recent models in spatial cognition posit that distinct memory systems are responsible for maintaining transient and enduring spatial relations. The authors used perspective-taking performance to assess the presence of these enduring and transient spatial memories for locations encoded through verbal descriptions. Across 3 experiments, spatial…
MetaSRA: normalized human sample-specific metadata for the Sequence Read Archive.
Bernstein, Matthew N; Doan, AnHai; Dewey, Colin N
2017-09-15
The NCBI's Sequence Read Archive (SRA) promises great biological insight if one could analyze the data in the aggregate; however, the data remain largely underutilized, in part, due to the poor structure of the metadata associated with each sample. The rules governing submissions to the SRA do not dictate a standardized set of terms that should be used to describe the biological samples from which the sequencing data are derived. As a result, the metadata include many synonyms, spelling variants and references to outside sources of information. Furthermore, manual annotation of the data remains intractable due to the large number of samples in the archive. For these reasons, it has been difficult to perform large-scale analyses that study the relationships between biomolecular processes and phenotype across the diverse diseases, tissues and cell types present in the SRA. We present MetaSRA, a database of normalized SRA human sample-specific metadata following a schema inspired by the metadata organization of the ENCODE project. This schema involves mapping samples to terms in biomedical ontologies, labeling each sample with a sample-type category, and extracting real-valued properties. We automated these tasks via a novel computational pipeline. The MetaSRA is available at metasra.biostat.wisc.edu via both a searchable web interface and bulk downloads. Software implementing our computational pipeline is available at http://github.com/deweylab/metasra-pipeline. cdewey@biostat.wisc.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
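The core normalization idea described above (collapsing synonyms and spelling variants in free-text sample metadata onto shared ontology term IDs) can be sketched with a toy lookup table. This is a drastically simplified illustration, not the MetaSRA pipeline itself: the real system matches against full biomedical ontologies and does far more than exact string lookup, and the synonym table below is invented for the example.

```python
# Toy synonym table mapping free-text values to ontology term IDs.
# Real pipelines draw these from full ontologies (e.g. Uberon, Cell Ontology).
SYNONYMS = {
    "liver": "UBERON:0002107",
    "hepatic tissue": "UBERON:0002107",
    "hepatocyte": "CL:0000182",
}

def normalize_sample(raw_fields):
    """Map raw key-value sample metadata onto ontology term IDs.

    Only exact (case-insensitive) synonym matches are handled here.
    """
    mapped = set()
    for value in raw_fields.values():
        term = SYNONYMS.get(value.strip().lower())
        if term:
            mapped.add(term)
    return sorted(mapped)

# Two differently worded submissions normalize to the same term:
a = normalize_sample({"tissue": "Liver"})
b = normalize_sample({"source_name": "hepatic tissue"})
print(a == b)  # True
```

Once samples share term IDs rather than free text, aggregate queries like "all liver-derived samples" become straightforward, which is the discoverability gain the abstract describes.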
Integration of remote sensing and GIS: Data and data access
Ehlers, M.; Greenlee, D.D.; Smith, T.; Star, J.
1991-01-01
The integration of remote sensing tools and technology with the spatial analysis orientation of geographic information systems is a complex task. In this paper, we focus on the issues of making data available and useful to the user. In part, this involves a set of problems which reflect on the physical and logical structures used to encode the data. At the same time, however, the mechanisms and protocols which provide information about the data, and which maintain the data through time, have become increasingly important. We discuss these latter issues from the viewpoint of the functions which must be provided by archives of spatial data.
The human-induced pluripotent stem cell initiative-data resources for cellular genetics.
Streeter, Ian; Harrison, Peter W; Faulconbridge, Adam; Flicek, Paul; Parkinson, Helen; Clarke, Laura
2017-01-04
The Human Induced Pluripotent Stem Cell Initiative (HipSci) is establishing a large catalogue of human iPSC lines, arguably the most well characterized collection to date. The HipSci portal enables researchers to choose the right cell line for their experiment, and makes HipSci's rich catalogue of assay data easy to discover and reuse. Each cell line has genomic, transcriptomic, proteomic and cellular phenotyping data. Data are deposited in the appropriate EMBL-EBI archives, including the European Nucleotide Archive (ENA), European Genome-phenome Archive (EGA), ArrayExpress and PRoteomics IDEntifications (PRIDE) databases. The project will make 500 cell lines from healthy individuals and 150 from patients with rare genetic diseases; these will be available through the European Collection of Authenticated Cell Cultures (ECACC). As of August 2016, 238 cell lines are available for purchase. Project data is presented through the HipSci data portal (http://www.hipsci.org/lines) and is downloadable from the associated FTP site (ftp://ftp.hipsci.ebi.ac.uk/vol1/ftp). The data portal presents a summary matrix of the HipSci cell lines, showing available data types. Each line has its own page containing descriptive metadata, quality information, and links to archived assay data. Analysis results are also available in a Track Hub, allowing visualization in the context of public genomic annotations (http://www.hipsci.org/data/trackhubs). © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Observing the operational significance of discord consumption
NASA Astrophysics Data System (ADS)
Gu, Mile; Chrzanowski, Helen M.; Assad, Syed M.; Symul, Thomas; Modi, Kavan; Ralph, Timothy C.; Vedral, Vlatko; Lam, Ping Koy
2012-09-01
Coherent interactions that generate negligible entanglement can still exhibit unique quantum behaviour. This observation has motivated a search beyond entanglement for a complete description of all quantum correlations. Quantum discord is a promising candidate. Here, we demonstrate that under certain measurement constraints, discord between bipartite systems can be consumed to encode information that can only be accessed by coherent quantum interactions. The inability to access this information by any other means allows us to use discord to directly quantify this `quantum advantage'. We experimentally encode information within the discordant correlations of two separable Gaussian states. The amount of extra information recovered by coherent interaction is quantified and directly linked with the discord consumed during encoding. No entanglement exists at any point of this experiment. Thus we introduce and demonstrate an operational method to use discord as a physical resource.
NASA Technical Reports Server (NTRS)
Noll, Carey E.; Torrence, Mark H.; Pollack, Nathan H.; Tyahla, Lori J.
2013-01-01
The ILRS website, http://ilrs.gsfc.nasa.gov, is the central source of information for all aspects of the service. The website provides information on the organization and operation of the ILRS and descriptions of ILRS components, data, and products. Furthermore, the website provides an entry point to the archive of these data products available through the data centers. Links are provided to extensive information on the ILRS network stations, including performance assessments and data quality evaluations. Descriptions of supported satellite missions (current, future, and past) are provided to aid in station acquisition and data analysis. The website was recently redesigned. Content was reviewed during the update process, ensuring information is current and useful. This poster will provide specific examples of key sections, applications, and webpages.
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on the variation of quantization parameter (QP) values for different video contents to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; and (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score (MOS).
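The PSNR figures quoted above come from the standard definition PSNR = 10·log10(MAX²/MSE), where MAX is the peak pixel value and MSE the mean squared error between reference and received frames. A minimal sketch of that metric (operating on flat pixel sequences rather than real decoded frames, and unrelated to the paper's actual evaluation code):

```python
import math

def psnr(original, received, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals: no noise
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical 8-bit pixel rows; the distorted row differs by +/-1 per pixel (MSE = 1).
ref = [100, 120, 130, 140]
deg = [101, 119, 131, 139]
print(round(psnr(ref, deg), 2))  # 48.13
```

On this dB scale, the 2 to 3 dB gain reported for the MDC approach corresponds to a substantial reduction in mean squared error (each 3 dB roughly halves the MSE).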
Construction of optimal resources for concatenated quantum protocols
NASA Astrophysics Data System (ADS)
Pirker, A.; Wallnöfer, J.; Briegel, H. J.; Dür, W.
2017-06-01
We consider the explicit construction of resource states for measurement-based quantum information processing. We concentrate on special-purpose resource states that are capable to perform a certain operation or task, where we consider unitary Clifford circuits as well as non-trace-preserving completely positive maps, more specifically probabilistic operations including Clifford operations and Pauli measurements. We concentrate on 1→m and m→1 operations, i.e., operations that map one input qubit to m output qubits or vice versa. Examples of such operations include encoding and decoding in quantum error correction, entanglement purification, or entanglement swapping. We provide a general framework to construct optimal resource states for complex tasks that are combinations of these elementary building blocks. All resource states only contain input and output qubits, and are hence of minimal size. We obtain a stabilizer description of the resulting resource states, which we also translate into a circuit pattern to experimentally generate these states. In particular, we derive recurrence relations at the level of stabilizers as a key analytical tool to generate explicit (graph) descriptions of families of resource states. This allows us to explicitly construct resource states for encoding, decoding, and syndrome readout for concatenated quantum error correction codes, code switchers, multiple rounds of entanglement purification, quantum repeaters, and combinations thereof (such as resource states for entanglement purification of encoded states).
Numeric promoter description - A comparative view on concepts and general application.
Beier, Rico; Labudde, Dirk
2016-01-01
Nucleic acid molecules play a key role in a variety of biological processes. Starting from storage and transfer tasks, this also comprises the triggering of biological processes, regulatory effects and the active influence gained by target binding. Based on the experimental output (in this case promoter sequences), further in silico analyses aid in gaining new insights into these processes and interactions. The numerical description of nucleic acids thereby constitutes a bridge between the concrete biological issues and the analytical methods. Hence, this study compares 26 descriptor sets obtained by applying well-known numerical description concepts to an established dataset of 38 DNA promoter sequences. The suitability of the description sets was evaluated by computing partial least squares regression models and assessing the model accuracy. We conclude that the major importance regarding the descriptive power is attached to positional information rather than to explicitly incorporated physico-chemical information, since a sufficient amount of implicit physico-chemical information is already encoded in the nucleobase classification. The regression models especially benefited from employing the information that is encoded in the sequential and structural neighborhood of the nucleobases. Thus, the analyses of n-grams (short fragments of length n) suggested that they are valuable descriptors for DNA target interactions. A mixed n-gram descriptor set thereby yielded the best description of the promoter sequences. The corresponding regression model was checked and found to be plausible as it was able to reproduce the characteristic binding motifs of promoter sequences in a reasonable degree. As most functional nucleic acids are based on the principle of molecular recognition, the findings are not restricted to promoter sequences, but can rather be transferred to other kinds of functional nucleic acids. 
Thus, the concepts presented in this study could provide advantages for future nucleic acid-based technologies, like biosensoring, therapeutics and molecular imaging. Copyright © 2015 Elsevier Inc. All rights reserved.
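The n-gram descriptors discussed in this record can be sketched as simple k-mer count vectors over a fixed alphabet ordering. This is a minimal illustration only; the function name and the plain-count representation are assumptions for exposition, not the study's actual descriptor sets:

```python
from collections import Counter
from itertools import product

def ngram_vector(seq, n=2, alphabet="ACGT"):
    """Count vector of all length-n fragments of `seq`, in fixed
    lexicographic order over `alphabet` (a toy descriptor sketch)."""
    grams = ["".join(p) for p in product(alphabet, repeat=n)]
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    return [counts.get(g, 0) for g in grams]
```

Vectors like these (possibly mixed across several values of n) could then serve as the descriptor matrix for a partial least squares regression against the experimental response.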
The archives of the glacier survey of the Austrian Alpine Club
NASA Astrophysics Data System (ADS)
Fischer, Andrea; Bendler, Gebhard
2016-04-01
The archive of the Austrian Alpine Club holds masses of material on glaciers and their former extent. The material includes descriptions and sketches of the summits conquered by early mountaineers, mapping campaigns and data from early scientific expeditions, as well as data on glacier length change. To date, a large proportion of the glaciological information in the material has not been catalogued or analysed. As cold ice, containing relevant climate information, might still exist at the highest peaks of Austria, a pilot project was started to collect some of the data from two test sites in Tyrol, in the Silvretta and Ötztal Alps, to reveal former summit shapes and glacier tongue positions. Additional information on the number and position of crevasses as well as firn extent is often evident from the material. Challenging tasks not yet tackled are compiling a catalogue of the material and defining an analysis scheme.
PsychVACS: a system for asynchronous telepsychiatry.
Odor, Alberto; Yellowlees, Peter; Hilty, Donald; Parish, Michelle Burke; Nafiz, Najia; Iosif, Ana-Maria
2011-05-01
To describe the technical development of an asynchronous telepsychiatry application, the Psychiatric Video Archiving and Communication System. A client-server application was developed in Visual Basic .NET with a Microsoft® SQL database as the backend. It includes the capability of storing video-recorded psychiatric interviews and manages the workflow of the system with automated messaging. The Psychiatric Video Archiving and Communication System has been used to conduct the first series of asynchronous telepsychiatry consultations worldwide. A review of the software application and the process as part of this project has led to a number of improvements that are being implemented in the next version, which is being written in Java. This is the first description of the use of video-recorded data in an asynchronous telemedicine application. Primary care providers and consulting psychiatrists have found it easy to work with and a valuable resource for increasing the availability of psychiatric consultation in remote rural locations.
Evaluating standard terminologies for encoding allergy information.
Goss, Foster R; Zhou, Li; Plasek, Joseph M; Broverman, Carol; Robinson, George; Middleton, Blackford; Rocha, Roberto A
2013-01-01
Allergy documentation and exchange are vital to ensuring patient safety. This study aims to analyze and compare various existing standard terminologies for representing allergy information. Five terminologies were identified, including the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), National Drug File-Reference Terminology (NDF-RT), Medical Dictionary for Regulatory Activities (MedDRA), Unique Ingredient Identifier (UNII), and RxNorm. A qualitative analysis was conducted to compare desirable characteristics of each terminology, including content coverage, concept orientation, formal definitions, multiple granularities, vocabulary structure, subset capability, and maintainability. A quantitative analysis was also performed to compare the content coverage of each terminology for (1) common food, drug, and environmental allergens and (2) descriptive concepts for common drug allergies, adverse reactions (AR), and no known allergies. Our qualitative results show that SNOMED CT fulfilled the greatest number of desirable characteristics, followed by NDF-RT, RxNorm, UNII, and MedDRA. Our quantitative results demonstrate that RxNorm had the highest concept coverage for representing drug allergens, followed by UNII, SNOMED CT, NDF-RT, and MedDRA. For food and environmental allergens, UNII demonstrated the highest concept coverage, followed by SNOMED CT. For representing descriptive allergy concepts and adverse reactions, SNOMED CT and NDF-RT showed the highest coverage. Only SNOMED CT was capable of representing unique concepts for encoding no known allergies. The proper terminology for encoding a patient's allergy is complex, as multiple elements need to be captured to form a fully structured clinical finding. Our results suggest that while gaps still exist, a combination of SNOMED CT and RxNorm can satisfy most criteria for encoding common allergies and provide sufficient content coverage.
Social Networking on the Semantic Web
ERIC Educational Resources Information Center
Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam
2005-01-01
Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…
Thermonuclear Propaganda: Presentations of Nuclear Strategy in the Early Atomic Age
2014-06-01
comics.17 One scholar of atomic culture noted the ambiguity of the duality of the atomic age as a central tenet to building the “most powerful of all...2004). 18 Ferenc Morton Szasz, Atomic Comics: Cartoonists Confront the Nuclear World (Reno, NV: University of Nevada Press, 2012), 135. 19 Ibid...research.archives.gov/description/36952. 28 Osgood, Total Cold War; Szasz, Atomic Comics; Zeman and Amundson, Atomic Culture, 3-4. 10 the most modern
Quark enables semi-reference-based compression of RNA-seq data.
Sarkar, Hirak; Patro, Rob
2017-11-01
The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data is expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
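The semi-reference idea — encode reads against a reference, but ship the referenced fragments along with the encoding so that decoding needs no reference — can be caricatured as follows. This is a toy sketch only; the function names and token format are invented for illustration and bear no relation to Quark's actual encoding:

```python
def encode(reads, reference):
    """Encode reads as (pos, len) pointers into `reference` where possible,
    and bundle only the reference slices actually pointed at."""
    tokens, used = [], {}
    for r in reads:
        pos = reference.find(r)
        if pos >= 0:
            tokens.append(("ref", pos, len(r)))
            used[pos] = max(used.get(pos, 0), len(r))   # remember needed slice
        else:
            tokens.append(("lit", r))                   # unmapped: store verbatim
    slices = {p: reference[p:p + n] for p, n in used.items()}
    return tokens, slices

def decode(tokens, slices):
    """Decode without any external reference: the bundled slices suffice."""
    out = []
    for t in tokens:
        if t[0] == "ref":
            _, p, n = t
            out.append(slices[p][:n])
        else:
            out.append(t[1])
    return out
```

Because only the slices that reads actually map to are bundled, the encoder and decoder need not share a reference, mirroring the abstract's point that only a small fraction of the reference must travel with the reads.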
Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks
NASA Astrophysics Data System (ADS)
Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.
2010-12-01
Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly hard to deliver relevant metadata and data-processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as audit trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora repository. With the digital object model, metadata describing data content and its provenance can be associated with that content in a formal manner, as can external references and other arbitrary auxiliary information. Changes to an object are formally audited, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH.
We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA’s ORNL Distributed Active Archive Center for Biogeochemical Dynamics (ORNL DAAC).
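The digital-object features this record describes (an append-only audit trail, automatic checksums, and RDF-style relationships between objects) can be illustrated with a minimal sketch. This stands in for the concept only; it is not Fedora Commons' API, and the class and method names are invented:

```python
import hashlib

class DigitalObject:
    """Toy digital object: versioned content with checksums and an
    audit trail, plus RDF-style (subject, predicate, object) relations."""

    def __init__(self, pid):
        self.pid = pid
        self.versions = []    # append-only: each entry records seq, digest, note
        self.relations = []   # triples linking this object to others

    def add_version(self, content: bytes, note: str):
        self.versions.append({
            "seq": len(self.versions),
            "sha256": hashlib.sha256(content).hexdigest(),  # fixity value
            "note": note,
        })

    def relate(self, predicate: str, other_pid: str):
        self.relations.append((self.pid, predicate, other_pid))

    def verify(self, seq: int, content: bytes) -> bool:
        """Integrity check: does `content` still match the stored checksum?"""
        return self.versions[seq]["sha256"] == hashlib.sha256(content).hexdigest()
```

In a real repository the same roles are played by the datastream version history, fixity checks, and the RELS-EXT RDF datastream; the sketch only shows how those pieces fit together conceptually.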
New tools for redox biology: From imaging to manipulation.
Bilan, Dmitry S; Belousov, Vsevolod V
2017-08-01
Redox reactions play a key role in maintaining essential biological processes. Deviations in redox pathways result in the development of various pathologies at the cellular and organismal levels. Until recently, studies of transformations in the intracellular redox state in living systems were significantly hampered. The genetically encoded indicators, based on fluorescent proteins, have provided new opportunities in biomedical research. The existing indicators already enable monitoring of cellular redox parameters in different processes including embryogenesis, aging, inflammation, tissue regeneration, and the pathogenesis of various diseases. In this review, we summarize information about all genetically encoded redox indicators developed to date. We provide a description of each indicator and discuss its advantages and limitations, as well as points that need to be considered when choosing an indicator for a particular experiment. One chapter is devoted to the important discoveries that have been made using genetically encoded redox indicators. Copyright © 2016 Elsevier Inc. All rights reserved.
Conceptual and perceptual encoding instructions differently affect event recall.
García-Bajos, Elvira; Migueles, Malen; Aizpurua, Alaitz
2014-11-01
When recalling an event, people usually retrieve the main facts and a reduced proportion of specific details. The objective of this experiment was to study the effects of conceptually and perceptually driven encoding on the recall of conceptual and perceptual information of an event. The materials selected for the experiment were two movie trailers. To enhance the encoding instructions, after watching the first trailer participants answered conceptual or perceptual questions about the event, while a control group answered general knowledge questions. After watching the second trailer, all of the participants completed a closed-ended recall task consisting of conceptual and perceptual items. Conceptual information was better recalled than perceptual details, and participants made more perceptual than conceptual commission errors. Conceptually driven processing enhanced the recall of conceptual information, while perceptually driven processing not only failed to improve the recall of descriptive details but also disrupted the standard conceptual/perceptual recall relationship.
Multi-year encoding of daily rainfall and streamflow via the fractal-multifractal method
NASA Astrophysics Data System (ADS)
Puente, C. E.; Maskey, M.; Sivakumar, B.
2017-12-01
A deterministic geometric approach, the fractal-multifractal (FM) method, which has proven faithful in encoding daily geophysical sets over a year, is used to describe records over multiple years at a time. Looking for FM parameter trends over longer periods, the present study shows FM descriptions of daily rainfall and streamflow gathered over five consecutive years, optimizing deviations on accumulated sets. The results for 100 and 60 sets of five years for rainfall and streamflow, respectively, near Sacramento, California, illustrate that: (a) encoding of both types of data sets may be accomplished with relatively small errors; and (b) predicting the geometry of both variables appears to be possible, even five years ahead, by training neural networks on the respective FM parameters. It is emphasized that the FM approach not only captures the accumulated sets over successive pentads but also preserves other statistical attributes, including the overall "texture" of the records.
The semantic web and computer vision: old AI meets new AI
NASA Astrophysics Data System (ADS)
Mundy, J. L.; Dong, Y.; Gilliam, A.; Wagner, R.
2018-04-01
There has been vast progress in linking semantic information across the billions of web pages through the use of ontologies encoded in the Web Ontology Language (OWL), based on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBpedia (http://wiki.dbpedia.org/). Web-based query tools can retrieve semantic information from DBpedia, encoded in interlinked ontologies that can be accessed using natural language. This paper shows how this vast context can be used to automate the process of querying images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.
Boo, Ga Hun; Le Gall, Line; Miller, Kathy Ann; Freshwater, D Wilson; Wernberg, Thomas; Terada, Ryuta; Yoon, Kyung Ju; Boo, Sung Min
2016-08-01
Although the Gelidiales are economically important marine red algae producing agar and agarose, the phylogeny of this order remains poorly resolved. The present study provides a molecular phylogeny based on a novel marker, nuclear-encoded CesA, plus plastid-encoded psaA, psbA, rbcL, and mitochondria-encoded cox1 from subsets of 107 species from all ten genera within the Gelidiales. Analyses of individual and combined datasets support the monophyly of three currently recognized families, and reveal a new clade. On the basis of these results, the new family Orthogonacladiaceae is described to accommodate Aphanta and a new genus Orthogonacladia that includes species previously classified as Gelidium madagascariense and Pterocladia rectangularis. Acanthopeltis is merged with Gelidium, which has nomenclatural priority. Nuclear-encoded CesA was found to be useful for improving the resolution of phylogenetic relationships within the Gelidiales and is likely to be valuable for the inference of phylogenetic relationship among other red algal taxa. Copyright © 2016 Elsevier Inc. All rights reserved.
Neural Responses to the Production and Comprehension of Syntax in Identical Utterances
ERIC Educational Resources Information Center
Indefrey, Peter; Hellwig, Frauke; Herzog, Hans; Seitz, Rudiger J.; Hagoort, Peter
2004-01-01
Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment,…
Let's Have a Coffee with the Standard Model of Particle Physics!
ERIC Educational Resources Information Center
Woithe, Julia; Wiener, Gerfried J.; Van der Veken, Frederik F.
2017-01-01
The Standard Model of particle physics is one of the most successful theories in physics and describes the fundamental interactions between elementary particles. It is encoded in a compact description, the so-called "Lagrangian," which even fits on t-shirts and coffee mugs. This mathematical formulation, however, is complex and only…
Zimmermann, Karel; Gibrat, Jean-François
2010-01-04
Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidean vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot products of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series of substitution matrices. This vector encoding introduces a Euclidean metric in the amino acid space, consistent with the substitution matrices. Such a numerical description of the amino acids is useful when intrinsic properties of amino acids are necessary, for instance when building sequence profiles or finding consensus sequences using machine learning algorithms such as Support Vector Machines and Neural Networks.
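The core construction — factor a substitution matrix so that each amino acid becomes a Euclidean vector whose dot products reproduce the matrix entries — can be illustrated on a toy matrix. The 4-letter positive semidefinite matrix below is made up for illustration; it is not a real PAM or BLOSUM matrix, and real substitution matrices need not be positive semidefinite, which is part of what makes the paper's treatment non-trivial:

```python
import numpy as np

# Build a toy symmetric, positive semidefinite "substitution" matrix B
# from a known factor A, so the SVD recovery can be checked exactly.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [2., -1.]])
B = A @ A.T                       # 4x4 symmetric PSD score matrix

# SVD of a symmetric PSD matrix coincides with its eigendecomposition,
# so U * sqrt(s) gives one row vector per "amino acid" with
# vectors @ vectors.T == B (dot products reproduce matrix entries).
U, s, _ = np.linalg.svd(B)
vectors = U * np.sqrt(s)
```

The rows of `vectors` play the role of the paper's amino acid encodings: Euclidean coordinates whose pairwise dot products equal the substitution scores.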
Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2006
Dixon, James P.; Stihler, Scott D.; Power, John A.; Searcy, Cheryl
2008-01-01
Between January 1 and December 31, 2006, AVO located 8,666 earthquakes of which 7,783 occurred on or near the 33 volcanoes monitored within Alaska. Monitoring highlights in 2006 include: an eruption of Augustine Volcano, a volcanic-tectonic earthquake swarm at Mount Martin, elevated seismicity and volcanic unrest at Fourpeaked Mountain, and elevated seismicity and low-level tremor at Mount Veniaminof and Korovin Volcano. A new seismic subnetwork was installed on Fourpeaked Mountain. This catalog includes: (1) descriptions and locations of seismic instrumentation deployed in the field during 2006, (2) a description of earthquake detection, recording, analysis, and data archival systems, (3) a description of seismic velocity models used for earthquake locations, (4) a summary of earthquakes located in 2006, and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, location quality statistics, daily station usage statistics, and all files used to determine the earthquake locations in 2006.
On detecting variables using ROTSE-IIId archival data
NASA Astrophysics Data System (ADS)
Yesilyaprak, C.; Yerli, S. K.; Aksaker, N.; Gucsav, B. B.; Kiziloglu, U.; Dikicioglu, E.; Coker, D.; Aydin, E.; Ozeren, F. F.
ROTSE (Robotic Optical Transient Search Experiment) telescopes can also be used for variable star detection. As explained in the system description (2003PASP..115..132A), they have good sky coverage and allow fast data acquisition. The optical magnitude range varies between 7^m and 19^m. Thirty percent of the telescope time of the north-eastern leg of the network, namely ROTSE-IIId (located at TUBITAK National Observatory, Bakirlitepe, Turkey, http://www.tug.tubitak.gov.tr/), is owned by Turkish researchers. Since its first light (May 2004), a considerably large amount of data (around 2 TB) has been collected from the Turkish time, and roughly one million objects have been identified from the reduced data. A robust pipeline has been constructed to discover new variables, transients and planetary nebulae from this archival data. In the detection process, different statistical methods were applied to the archive. We have detected thousands of variable stars by applying roughly four different tests to the light curve of each star. In this work a summary of the pipeline is presented. It uses a high performance computing (HPC) algorithm which performs inhomogeneous ensemble photometry of the data on a 36-core cluster. This study is supported by TUBITAK (Scientific and Technological Research Council of Turkey) with the grant number TBAG-108T475.
Structural and Functional Concepts in Current Mouse Phenotyping and Archiving Facilities
Kollmus, Heike; Post, Rainer; Brielmeier, Markus; Fernández, Julia; Fuchs, Helmut; McKerlie, Colin; Montoliu, Lluis; Otaegui, Pedro J; Rebelo, Manuel; Riedesel, Hermann; Ruberte, Jesús; Sedlacek, Radislav; de Angelis, Martin Hrabě; Schughart, Klaus
2012-01-01
Collecting and analyzing available information on the building plans, concepts, and workflow from existing animal facilities is an essential prerequisite for most centers that are planning and designing the construction of a new animal experimental research unit. Here, we have collected and analyzed such information in the context of the European project Infrafrontier, which aims to develop a common European infrastructure for high-throughput systemic phenotyping, archiving, and dissemination of mouse models. A team of experts visited 9 research facilities and 3 commercial breeders in Europe, Canada, the United States, and Singapore. During the visits, detailed data of each facility were collected and subsequently represented in standardized floor plans and descriptive tables. These data showed that because the local needs of scientists and their projects, property issues, and national and regional laws require very specific solutions, a common strategy for the construction of such facilities does not exist. However, several basic concepts were apparent that can be described by standardized floor plans showing the principle functional units and their interconnection. Here, we provide detailed information of how individual facilities addressed their specific needs by using different concepts of connecting the principle units. Our analysis likely will be valuable to research centers that are planning to design new mouse phenotyping and archiving facilities. PMID:23043807
Evolutionary tree reconstruction
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Kanefsky, Bob
1990-01-01
It is described how Minimum Description Length (MDL) can be applied to the problem of DNA and protein evolutionary tree reconstruction. If there is a set of mutations that transforms a common ancestor into the set of known sequences, and this description is shorter than the information needed to encode the known sequences directly, then strong evidence for an evolutionary relationship has been found. A heuristic algorithm is described that searches for the tree with the smallest MDL and finds close-to-optimal trees on the test data. Various ways of extending the MDL theory to more complex evolutionary relationships are discussed.
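The MDL comparison at the heart of this idea can be caricatured numerically: compare the cost of encoding every sequence verbatim with the cost of encoding one ancestor plus per-sequence mutations. The 2-bits-per-base and log2-position costs below are illustrative assumptions, not the paper's actual code lengths:

```python
import math

def direct_cost(seqs):
    """Bits to encode every sequence verbatim (2 bits per nucleotide)."""
    return sum(2 * len(s) for s in seqs)

def tree_cost(ancestor, seqs):
    """Bits to encode the ancestor once plus, for each sequence, its
    mutations: a position (log2 L bits) and a new base (2 bits) each."""
    cost = 2 * len(ancestor)
    for s in seqs:
        muts = sum(a != b for a, b in zip(ancestor, s))
        cost += muts * (math.log2(len(ancestor)) + 2)
    return cost
```

When the sequences are close relatives of a common ancestor, `tree_cost` comes out well below `direct_cost`, which is exactly the MDL evidence for an evolutionary relationship; for unrelated sequences the mutation list grows until the direct encoding wins.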
Genetic Inventory Task Final Report. Volume 2
NASA Technical Reports Server (NTRS)
Venkateswaran, Kasthuri; LaDuc, Myron T.; Vaishampayan, Parag
2012-01-01
Contaminant terrestrial microbiota could profoundly impact the scientific integrity of extraterrestrial life-detection experiments. It is therefore important to know what organisms persist on spacecraft surfaces so that their presence can be eliminated or discriminated from authentic extraterrestrial biosignatures. Although there is a growing understanding of the biodiversity associated with spacecraft and cleanroom surfaces, it remains challenging to assess the risk of these microbes confounding life-detection or sample-return experiments. A key challenge is to provide a comprehensive inventory of microbes present on spacecraft surfaces. To assess the phylogenetic breadth of microorganisms on spacecraft and associated surfaces, the Genetic Inventory team used three technologies: conventional cloning techniques, PhyloChip DNA microarrays, and 454 tag-encoded pyrosequencing, together with a methodology to systematically collect, process, and archive nucleic acids. These three analysis methods yielded considerably different results: Traditional approaches provided the least comprehensive assessment of microbial diversity, while PhyloChip and pyrosequencing illuminated more diverse microbial populations. The overall results stress the importance of selecting sample collection and processing approaches based on the desired target and required level of detection. The DNA archive generated in this study can be made available to future researchers as genetic-inventory-oriented technologies further mature.
Description of interventions is under-reported in physical therapy clinical trials.
Hariohm, K; Jeyanthi, S; Kumar, J Saravan; Prakash, V
Amongst several barriers to the application of quality clinical evidence and clinical guidelines in routine daily practice, poor description of the interventions reported in clinical trials has received less attention. Although some studies have investigated the completeness of descriptions of non-pharmacological interventions in randomized trials, studies that exclusively analyzed physical therapy interventions reported in published trials are scarce. To evaluate the quality of descriptions of interventions in both experimental and control groups in randomized controlled trials published in four core physical therapy journals. We included all randomized controlled trials published in the Physical Therapy Journal, Journal of Physiotherapy, Clinical Rehabilitation, and Archives of Physical Medicine and Rehabilitation between June 2012 and December 2013. Each randomized controlled trial (RCT) was analyzed and coded for description of interventions using the checklist developed by Schroter et al. Out of 100 RCTs selected, only 35 (35%) fully described the interventions in both the intervention and control groups. Control group interventions were poorly described in the remaining RCTs (65%). Interventions, especially in the control group, are poorly described in the clinical trials published in leading physical therapy journals. A complete description of the intervention in a published report is crucial for physical therapists to be able to use the intervention in clinical practice. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
1975-11-01
FLAT. MINE THROW I at NTS was a 120-ton ammonium nitrate/fuel oil (ANFO) shot, buried in alluvium, that was designed to be correlated to the...885 577L) Takahashi, S.K. Buried Fuel Capsule — Comparison of Three-Dimensional Computer Data with Experimental Data, NCEL TR-798. Port Hueneme: Naval...Perrone ATTN: Technical Library ATTN: Code 464, Jacob L. Warner Officer-in-Charge Civil Engineering Laboratory Naval Construction Battalion
2016-07-18
Research Laboratory Space Vehicles Directorate 3550 Aberdeen Avenue SE Kirtland AFB, NM 87117-5776 8. PERFORMING ORGANIZATION REPORT NUMBER AFRL-RV...Satellite Program Space Weather Sensors (1 Dec 2000 – 30 Nov 2014), AFRL-RV-PS-TR-2016-0053, Air Force Research Laboratory, Kirtland AFB, NM, Jan 2015. [2...Archive Listing (1982-2013) and File Formats Descriptions, AFRL-RV-PR-TR-2014-0174, Air Force Research Laboratory, Kirtland AFB, NM, Aug 2014. [3
Identification and Evaluation of Submerged Anomalies, Mobile Harbor, Alabama.
1984-10-01
Bay Waters, 1864-1865 APPENDIX B: Description of Maps in National Archives Collection LIST OF FIGURES Figure Page cover Torpedo Raft in Mobile Bay...Anomaly D-E 51 13 Magnetometer Chart, Anomaly F 53 14 Sketch of Steel Wreckage Found at Anomaly F 54 15 Approaches to Mobile City by Water (Merrill...Osage (1863-65) 84 30 CSS Albemarle, Prototype for the Huntsville 86 31 Magnolia, CSA-Utilized Vessel 109 32 Approaches to Mobile City by Water (1864
1971-01-01
ARBOR, MICHIGAN 48104. Technical Progress Report #4, 1 January through 30 June 1971. Prepared in connection with research supported by...WEIS data from January 1966 through...The data and source of each item is included in the analytic deck. Also included is a descriptive...WEIGHTED, WEIGHTED, CHINESE: EXTRA-SYSTEM ALL SIDES IN QUARREL EXTRA-REGIONAL EXTRA-SYSTEM EXTRA-REGIONAL (MEMBER) EXTRA-SYST
Kästner, Ingrid
2013-01-01
In 1914, from 6th May to 18th October, the International Exposition of Book Industry and Graphic Arts (BUGRA) took place in Leipzig, then the world capital of books. Karl Sudhoff, director of the Leipzig Institute of the History of Medicine, was appointed by the executive committee of the BUGRA to organize the special exhibition "Three Millennia of Graphic Arts in the Service of Science". The paper shows, following Sudhoff's own descriptions and new archival sources, the conceptual design and the contents of this exhibition as set up by Sudhoff.
Scientometric Analysis of the Journals of the Academy of Medical Sciences in Bosnia and Herzegovina
Masic, Izet; Begic, Edin; Zunic, Lejla
2016-01-01
Introduction: Currently in Bosnia and Herzegovina there are 25 journals in the field of biomedicine, six of which are indexed in the Medline/PubMed database (Medical Archives, Materia Socio-Medica, Acta Informatica Medica, Acta Medica Academica, Bosnian Journal of Basic Medical Sciences (BJBMS) and Medical Glasnik), and one (BJBMS) is indexed in the Science Citation Index Expanded (SCIE)/Web of Science database. Aim: The aim of this study was to show the scope of work of the journals that were published by the Academy of Medical Sciences of Bosnia and Herzegovina - Medical Archives, Materia Socio-Medica and Acta Informatica Medica. Material and Methods: The research presents a meta-analysis of three journals, or their issues, during the calendar year 2015 (retrospective and descriptive character). Results: During the 2015 calendar year a total of 286 articles were published (in Medical Archives 104 (36.3%), in Materia Socio-Medica 99 (34.6%), and in Acta Informatica Medica 83 (29%)). Original articles are present in the highest number in all three journals (in Medical Archives 80.7%, in Materia Socio-Medica 77.7%, and in Acta Informatica Medica 68.6%). In Medical Archives, 90.3% of the articles were related to the field of clinical medicine. In Materia Socio-Medica, the domain of clinical medicine and public health was the most represented. Preclinical areas are most frequent in Acta Informatica Medica. A period of 50-60 days for a decision on the acceptance of an article is most common in all three journals, with a trend toward shortening of that period. Articles came from 19 countries, mostly from Bosnia and Herzegovina, then from Iran, Kosovo, Saudi Arabia and Greece. Conclusion: In Medical Archives original articles in the field of clinical medicine (usually internal and surgical disciplines) are most often present, and that has been the case for the last four years. The number of articles in Materia Socio-Medica and Acta Informatica Medica is growing from year to year. 
In Materia Socio-Medica there is a trend of growth of articles in the field of public health, while the most common articles in Acta Informatica Medica are about medical informatics. PMID:27041805
Scientometric Analysis of the Journals of the Academy of Medical Sciences in Bosnia and Herzegovina.
Masic, Izet; Begic, Edin; Zunic, Lejla
2016-02-01
Currently there are 25 journals in the field of biomedicine in Bosnia and Herzegovina; 6 of them are indexed in the Medline/PubMed database (Medical Archives, Materia Socio-Medica, Acta Informatica Medica, Acta Medica Academica, Bosnian Journal of Basic Medical Sciences (BJBMS) and Medical Glasnik), and one (BJBMS) is indexed in the Science Citation Index Expanded (SCIE)/Web of Science database. The aim of this study was to show the scope of work of the journals published by the Academy of Medical Sciences of Bosnia and Herzegovina: Medical Archives, Materia Socio-Medica and Acta Informatica Medica. The research is a retrospective, descriptive meta-analysis of the three journals' issues during the 2015 calendar year. During the 2015 calendar year a total of 286 articles were published (104 (36.3%) in Medical Archives, 99 (34.6%) in Materia Socio-Medica, and 83 (29%) in Acta Informatica Medica). Original articles were the most numerous article type in all three journals (80.7% in Medical Archives, 77.7% in Materia Socio-Medica, and 68.6% in Acta Informatica Medica). In Medical Archives, 90.3% of the articles were related to the field of clinical medicine. In Materia Socio-Medica, the domains of clinical medicine and public health were the most represented, while preclinical areas were most frequent in Acta Informatica Medica. A period of 50-60 days for a decision on acceptance of an article was most common in all three journals, with a trend toward shortening that period. Articles came from 19 countries, mostly from Bosnia and Herzegovina, followed by Iran, Kosovo, Saudi Arabia and Greece. In Medical Archives, original articles in the field of clinical medicine (usually internal and surgical disciplines) are most frequent, as has been the case for the last four years. The number of articles in Materia Socio-Medica and Acta Informatica Medica is growing from year to year. 
In Materia Socio-Medica there is a growing trend of articles in the field of public health, while the most common articles in Acta Informatica Medica concern medical informatics.
Rogers, R F; Rose, W C; Schwaber, J S
1996-10-01
1. We seek to understand the baroreceptor signal processing that occurs centrally, beginning with the transformation of the signal at the first stage of processing. Because quantitative descriptions of the encoding of mean arterial pressure and its derivative with respect to time by baroreceptive second-order neurons have been unavailable, we characterized the responses of nucleus tractus solitarius (NTS) neurons that receive direct myelinated baroreceptor inputs to combinations of these two stimulus variables. 2. In anesthetized, paralyzed, artificially ventilated rabbits, the carotid sinus was vascularly isolated and the carotid sinus nerve was dissected free from surrounding tissue. Single-unit extracellular recordings were made from NTS neurons that received direct (with the use of physiological criteria) synaptic inputs from carotid sinus baroreceptors with myelinated axons. The vast majority of these neurons did not receive ipsilateral aortic nerve convergent inputs. With the use of a computer-controlled linear motor, a piecewise linear pressure waveform containing 32 combinations of pressure and its rate of change with respect to time (dP/dt) was delivered to the ipsilateral carotid sinus. 3. The average NTS firing frequency during the different stimulus combinations of pressure and dP/dt was a nonlinear and interdependent function of both variables. Most notable was the "extinctive" encoding of carotid sinus pressure by these neurons. This was characterized by an increase in firing frequency going from low to medium mean pressures (analyzed at certain positive dP/dt values) followed by a decrease in activity during high-pressure stimuli. All second-order neurons analyzed had their maximal firing rates when dP/dt was positive. 4. All neurons had their maximal firing frequency locations ("receptive field centers") at just 3 of 32 possible pressure-dP/dt coordinates. 
The responses of a small population of neurons were used to generate a composite description of the encoding of pressure and dP/dt. When combined as a composite of individually normalized values, the encoding of carotid sinus pressure and dP/dt may be approximated with the use of two-dimensional Gaussian functions. 5. We conclude that the population of NTS neurons recorded most faithfully encodes the rate and direction of (mean) pressure change, as opposed to providing the CNS with an unambiguous encoding of absolute pressure. Instead, the activity of these neurons, individually or as a population, serves as an estimate for the first derivative of the myelinated baroreceptor signal's encoding of mean pressure. We therefore speculate that the output of these individual neurons is useful in dynamic, rather than static, arterial pressure control.
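The two-dimensional Gaussian approximation of the pressure-dP/dt encoding described above can be sketched as a simple firing-rate model. This is a minimal illustration only: the receptive-field center, widths, and peak rate below are hypothetical values, not parameters fitted to the recorded neurons.

```python
import numpy as np

def gaussian_rf(pressure, dpdt, center=(100.0, 5.0), widths=(20.0, 4.0),
                peak_rate=40.0):
    """Firing rate (spikes/s) of a model second-order NTS neuron,
    approximated as a 2D Gaussian over pressure (mmHg) and dP/dt (mmHg/s).
    All parameter values here are illustrative, not fitted data."""
    p0, d0 = center
    sp, sd = widths
    return peak_rate * np.exp(-0.5 * (((pressure - p0) / sp) ** 2
                                      + ((dpdt - d0) / sd) ** 2))

# Rate is maximal at the receptive-field center and falls off smoothly
# for pressures above and below it (at a fixed positive dP/dt).
rates = [gaussian_rf(p, 5.0) for p in (60.0, 100.0, 140.0)]
```

Note that a maximum at a positive dP/dt value, as reported for all neurons analyzed, falls out of this form whenever the Gaussian center has a positive dP/dt coordinate.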
JP3D compressed-domain watermarking of volumetric medical data sets
NASA Astrophysics Data System (ADS)
Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian
2010-01-01
Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is the integration of blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images show that the watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.
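The paper integrates turbo trellis-coded quantization into the JP3D codec; as a much simpler stand-in for the general idea of quantization-based data hiding, the sketch below embeds and recovers bits with scalar quantization index modulation (QIM). The step size and coefficients are illustrative, and this is not the authors' scheme.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient by quantizing onto one of two
    interleaved lattices (offset by delta/2). Simplified scalar QIM;
    the paper's scheme uses turbo TCQ inside the JP3D codec."""
    coeffs = np.asarray(coeffs, dtype=float)
    bits = np.asarray(bits)
    shifted = coeffs - bits * (delta / 2.0)
    return np.round(shifted / delta) * delta + bits * (delta / 2.0)

def qim_extract(coeffs, delta=8.0):
    """Recover bits blindly by finding the nearer of the two lattices."""
    coeffs = np.asarray(coeffs, dtype=float)
    d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    shifted = coeffs - delta / 2.0
    d1 = np.abs(shifted - np.round(shifted / delta) * delta)
    return (d1 < d0).astype(int)

bits = [1, 0, 1, 0]
marked = qim_embed([10.3, -3.7, 25.1, 0.2], bits)
recovered = qim_extract(marked)
```

The two-lattice construction is what makes the extraction blind (no original coefficients needed) and robust to perturbations smaller than delta/4, mirroring the robustness-to-compression goal described above.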
Environmental resources of selected areas of Hawaii: Cultural environment and aesthetic resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trettin, L.D.; Petrich, C.H.; Saulsbury, J.W.
This report has been prepared to make available and archive the background scientific data and related information collected on the cultural environment and aesthetic resources during the preparation of the environmental impact statement (EIS) for Phases 3 and 4 of the Hawaii Geothermal Project (HGP) as defined by the state of Hawaii in its April 1989 proposal to Congress. The cultural environment in the Geothermal Resource Zone (GRZ) and associated study area consists of Native Hawaiian cultural and religious practices and both Native Hawaiian and non-Native Hawaiian cultural resources. This report consists of three sections: (1) a description of Native Hawaiian cultural and religious rights, practices, and values; (2) a description of historic, prehistoric, and traditional Native Hawaiian sites; and (3) a description of other (non-native) sites that could be affected by development in the study area. Within each section, the level of descriptive detail varies according to the information currently available. The description of the cultural environment is most specific in its coverage of the Geothermal Resource Subzones in the Puna District of the island of Hawaii and the study area of South Maui. Ethnographic and archaeological reports by Cultural Advocacy Network Developing Options and International Archaeological Research Institute, Inc., respectively, supplement the descriptions of these two areas with new information collected specifically for this study. Less detailed descriptions of additional study areas on Oahu, Maui, Molokai, and the island of Hawaii are based on existing archaeological surveys.
Use of Archival Sources to Improve Water-Related Hazard Assessments at Volcán de Agua, Guatemala
NASA Astrophysics Data System (ADS)
Hutchison, A. A.; Cashman, K. V.; Rust, A.; Williams, C. A.
2013-12-01
This interdisciplinary study focuses on the use of archival sources from the 18th-century Spanish Empire to develop a greater understanding of mudflow trigger mechanisms at Volcán de Agua in Guatemala. Currently, hazard assessments of debris flows at Volcán de Agua are largely based on studies of analogous events, such as the mudflow at Casita Volcano in 1998 caused by excessive rainfall generated by Hurricane Mitch. A preliminary investigation of Spanish archival sources, however, indicates that a damaging mudflow from the volcano in 1717 may have been triggered by activity at the neighbouring Volcán de Fuego. A VEI 4 eruption of Fuego in late August 1717 was followed by 33 days of localized 'retumbos' and then a major local earthquake with accompanying mudflows from several 'bocas' on the southwest flank of Agua. Of particular importance for this study is an archival source from Archivos Generales de Centro América (AGCA) that consists of a series of letters, petitions and witness statements that were written and gathered following the catastrophic events of 1717. Their purpose was to argue for royal permission to relocate the capital city, which at the time was located on the lower flanks of Volcán de Agua. Within these documents there are accounts of steaming 'avenidas' of water with sulphurous smells, and quantitative descriptions that suggest fissure formation related to volcanic activity at Volcán de Fuego. Clear evidence for volcano-tectonic activity at the time, combined with the fact that there is no mention of rainfall in the documents, suggests that outbursts of mud from Agua's south flank may have been caused by a volcanic perturbation of a hydrothermal system. This single example suggests that further analysis of archival documents will provide a more accurate and robust assessment of water-related hazards at Volcán de Agua than currently exists.
Application of XML to Journal Table Archiving
NASA Astrophysics Data System (ADS)
Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.
1998-12-01
The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web-based electronic formats has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language (XML) is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. 
The ADC XML Document Type Definition (DTD) can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
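The kind of markup described above can be illustrated with a toy table document parsed by a standard XML library. The element and attribute names here are hypothetical, chosen only to show the idea of machine-parseable column metadata; they are not the actual ADC DTD.

```python
import xml.etree.ElementTree as ET

# A hypothetical markup for one table column and one data row, in the
# spirit of the ADC effort; names are illustrative, not the ADC DTD.
doc = """<table name="photometry">
  <field name="Vmag" unit="mag" datatype="float">
    <description>Johnson V magnitude</description>
  </field>
  <row><value>12.34</value></row>
</table>"""

root = ET.fromstring(doc)
field = root.find("field")
unit = field.get("unit")                    # machine-readable metadata
value = float(root.find("row/value").text)  # typed data cell
```

Because every element is labeled, generic software can recover units, datatypes, and descriptions without table-specific parsing code, which is the core argument for XML archiving made above.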
EOS Data Products Handbook. Volume 2
NASA Technical Reports Server (NTRS)
Parkinson, Claire L. (Editor); Greenstone, Reynold (Editor); Closs, James (Technical Monitor)
2000-01-01
The EOS Data Products Handbook provides brief descriptions of the data products that will be produced from a range of missions of the Earth Observing System (EOS) and associated projects. Volume 1, originally published in 1997, covers the Tropical Rainfall Measuring Mission (TRMM), the Terra mission (formerly named EOS AM-1), and the Data Assimilation System, while this volume, Volume 2, covers the Active Cavity Radiometer Irradiance Monitor Satellite (ACRIMSAT), Aqua, Jason-1, Landsat 7, Meteor 3M/Stratospheric Aerosol and Gas Experiment III (SAGE III), the Quick Scatterometer (QuikScat), the Quick Total Ozone Mapping Spectrometer (Quik-TOMS), and the Vegetation Canopy Lidar (VCL) missions. Volume 2 follows closely the format of Volume 1, providing a list of products and an introduction and overview descriptions of the instruments and data processing, all introductory to the core of the book, which presents the individual data product descriptions, organized into 11 topical chapters. The product descriptions are followed by five appendices, which provide contact information for the EOS data centers that will be archiving and distributing the data sets, contact information for the science points of contact for the data products, references, acronyms and abbreviations, and a data products index.
The effects of verbal descriptions on performance in lineups and showups.
Wilson, Brent M; Seale-Carlisle, Travis M; Mickes, Laura
2018-01-01
Verbally describing a face has been found to impair subsequent recognition of that face from a photo lineup, a phenomenon known as the verbal overshadowing effect (Schooler & Engstler-Schooler, 1990). Recently, a large direct replication study successfully reproduced that original finding (Alogna et al., 2014). However, in both the original study and the replication studies, memory was tested using only target-present lineups (i.e., lineups containing the previously seen target face), making it possible to compute the correct identification rate (correct ID rate; i.e., the hit rate) but not the false identification rate (false ID rate; i.e., the false alarm rate). Thus, the lower correct ID rate for the verbal condition could reflect either reduced discriminability or a conservative criterion shift relative to the control condition. In four verbal overshadowing experiments reported here, we measured both correct ID rates and false ID rates using photo lineups (Experiments 1 and 2) or single-photo showups (Experiments 3 and 4). The experimental manipulation (verbally describing the face or not) occurred either immediately after encoding (Experiments 1 and 3) or 20 min after encoding (Experiments 2 and 4). In the immediate condition, discriminability did not differ between groups, but in the delayed condition, discriminability was lower in the verbal description group (i.e., a verbal overshadowing effect was observed). A fifth experiment found that the effect of the immediate-versus-delayed manipulation may be attributable to a change in the content of verbal descriptions, with the ratio of diagnostic to generic facial features in the descriptions decreasing as delay increases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
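The reason both rates matter can be shown with the standard signal-detection computation of discriminability. The sketch below uses d' from equal-variance signal detection theory; the ID rates are illustrative numbers, not the study's data.

```python
from statistics import NormalDist

def d_prime(correct_id_rate, false_id_rate):
    """Discriminability (d') from a correct ID rate (hit rate) and a
    false ID rate (false-alarm rate). With target-present lineups only,
    the hit rate alone cannot separate discriminability from a
    criterion shift, which is the methodological point made above."""
    z = NormalDist().inv_cdf
    return z(correct_id_rate) - z(false_id_rate)

# Illustrative: the same hit rate with different false ID rates yields
# different discriminability, so a lower hit rate by itself is ambiguous.
a = d_prime(0.60, 0.20)
b = d_prime(0.60, 0.10)
```

Here condition b is more discriminating than condition a despite identical hit rates, which is exactly why the experiments above added false ID rate measurements.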
Brown, Ramsay A; Swanson, Larry W
2013-09-01
Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to digitally synthesize systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Johansson, Roger; Holsanova, Jana; Dewhurst, Richard; Holmqvist, Kenneth
2012-01-01
Current debate in mental imagery research revolves around the perceptual and cognitive role of eye movements to "nothing" (Ferreira, Apel, & Henderson, 2008; Richardson, Altmann, Spivey, & Hoover, 2009). While it is established that eye movements are comparable when inspecting a scene (or hearing a scene description) as when…
Neural Basis of Semantic Representation and Semantic Composition
ERIC Educational Resources Information Center
Fernandino, Leonardo F.
2009-01-01
The mechanisms by which the mind encodes meaning into words and reconstructs it from them has been the subject of philosophical speculations at least since Plato and Aristotle in the 4th century B.C. Our current understanding of how the brain is involved in these processes, however, only started in the 19 th century, with precise descriptions of…
Interpretation of Verb Phrase Telicity: Sensitivity to Verb Type and Determiner Type
ERIC Educational Resources Information Center
Ogiela, Diane A.; Schmitt, Cristina; Casby, Michael W.
2014-01-01
Purpose: The authors examine how adults use linguistic information from verbs, direct objects, and particles to interpret an event description as encoding a logical endpoint to the event described (in which case, it is telic) or not (in which case, it is atelic). Current models of aspectual composition predict that quantity-sensitive verbs…
Campaign to counter a deteriorating consumer market: Philip Morris's Project Sunrise.
Givel, M
2013-02-01
From 1997 to 2000, Philip Morris implemented Project Sunrise. This paper discusses the impact of this project on national and Philip Morris's cigarette unit sales, public opinion about smoking and secondhand tobacco smoke, and national prevalence trends for tobacco use. The study comprises a qualitative archival content analysis of Project Sunrise from 1997 to 2000 and a descriptive statistical analysis of cigarette unit sales and operating profits, the acceptability of smoking and secondhand tobacco smoke, and national prevalence trends for tobacco use from 1996 to 2006. Qualitative data sources related to Project Sunrise found on WebCat, Pubmed.com, LexisNexis Academic and Philip Morris's website, and archived tobacco industry documents were analysed using NVivo Version 9.0. A descriptive statistical analysis of cigarette unit sales, public opinion about smoking and secondhand tobacco smoke, and national prevalence trends for tobacco use was undertaken. Project Sunrise was a high-level strategic corporate plan to maintain profits that included four possible scenarios resulting in seven interwoven strategies. However, national prevalence rates for tobacco use declined, sales of national and Philip Morris cigarettes declined, operating profits remained at substantially lower levels from 2001 to 2006, and a large majority of Americans agreed that there were significant health dangers associated with smoking and secondhand tobacco smoke. The impact of Project Sunrise, including countering the anti-tobacco movement, was less than successful in the USA. Copyright © 2012 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Palmer, Francesca T; Flowe, Heather D; Takarangi, Melanie K T; Humphries, Joyce E
2013-02-01
Research about intoxicated witnesses and criminal suspects is surprisingly limited, considering the police believe that they are quite ubiquitous. In the present study, we assessed the involvement of intoxicated witnesses and suspects in the investigation of rape, robbery, and assault crimes by analyzing cases that were referred by the police to a prosecutor's office. Results indicated that intoxicated witnesses and suspects played an appreciable role in criminal investigations: Intoxicated witnesses were just as likely as sober ones to provide a description of the culprit and to take an identification test, suggesting criminal investigators treat intoxicated and sober witnesses similarly. Moreover, intoxicated suspects typically admitted to the police that they had consumed alcohol and/or drugs, and they were usually arrested on the same day as the crime. This archival analysis highlights the many ways in which alcohol impacts testimony during criminal investigations and underscores the need for additional research to investigate best practices for obtaining testimony from intoxicated witnesses and suspects.
[Manuscript "Many different remedies for headache treatment" from the archives of Sinj Friary].
Kujundzic, Nikola; Glibota, Milan; Inic, Suzana
2011-01-01
Manuscripts containing collections of folk recipes for the treatment of diseases were written mostly by Catholic priests, especially Franciscans, in Croatia in past centuries. They were used as manuals for the preparation of remedies and gave directions for their use. These writings provide valuable data for ethnographers and historians of ethnomedicine. The paper describes the manuscript "Many different remedies for headache treatment", written by an unknown author, probably in the 18th century, in Sinj, Dalmatia. The manuscript was found in the archives of Sinj Friary. The collection contains 16 recipes for headache treatment. The materia medica of the manuscript is composed of drugs of plant origin. Valuable information is given about the folk names for medicinal plants as well as descriptions of the ways of preparing remedies. Latin as well as contemporary Croatian names are attributed to the plant species mentioned in the manuscript. The use of the plants for the treatment of specific diseases is compared with their use in modern phytotherapy.
The metagenomic data life-cycle: standards and best practices
ten Hoopen, Petra; Finn, Robert D.; Bongo, Lars Ailo; Corre, Erwan; Meyer, Folker; Mitchell, Alex; Pelletier, Eric; Pesole, Graziano; Santamaria, Monica; Willassen, Nils Peder
2017-01-01
Metagenomics data analyses from independent studies can only be compared if the analysis workflows are described in a harmonized way. In this overview, we have mapped the landscape of data standards available for the description of essential steps in metagenomics: (i) material sampling, (ii) material sequencing, (iii) data analysis, and (iv) data archiving and publishing. Taking examples from marine research, we summarize essential variables used to describe material sampling processes and sequencing procedures in a metagenomics experiment. These aspects of metagenomics dataset generation have been to some extent addressed by the scientific community, but greater awareness and adoption are still needed. We emphasize the lack of standards relating to reporting how metagenomics datasets are analysed and how the metagenomics data analysis outputs should be archived and published. We propose best practice as a foundation for a community standard to enable reproducibility and better sharing of metagenomics datasets, leading ultimately to greater metagenomics data reuse and repurposing. PMID:28637310
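A harmonized description of the four life-cycle steps named above can be enforced mechanically with a checklist validator. The step and field names below are illustrative placeholders, not a published checklist, and the example record is fabricated for demonstration.

```python
# Required fields per life-cycle step (illustrative, not a real standard).
REQUIRED = {
    "sampling":   ["lat_lon", "collection_date", "depth_m"],
    "sequencing": ["platform", "library_strategy"],
    "analysis":   ["workflow_name", "workflow_version"],
    "archiving":  ["repository", "accession"],
}

def missing_fields(record):
    """Return, for each life-cycle step, the required fields the record lacks."""
    return {step: [f for f in fields if f not in record.get(step, {})]
            for step, fields in REQUIRED.items()}

record = {  # hypothetical submission
    "sampling":   {"lat_lon": "69.68 N 18.99 E", "collection_date": "2015-06-01"},
    "sequencing": {"platform": "Illumina MiSeq", "library_strategy": "WGS"},
    "analysis":   {"workflow_name": "assembly-pipeline"},
    "archiving":  {"repository": "ENA", "accession": "ERP000001"},
}
gaps = missing_fields(record)
```

Validation like this catches exactly the gap the overview highlights: sampling and sequencing metadata are usually present, while analysis provenance (here, the workflow version) is the field most often missing.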
VO-Dance: an IVOA tool to easily publish data into the VO, and its extension to planetology requests
NASA Astrophysics Data System (ADS)
Smareglia, R.; Capria, M. T.; Molinaro, M.
2012-09-01
Data publishing through self-standing portals can be joined with VO resource publishing, i.e. astronomical resources deployed through VO-compliant services. Since the IVOA (International Virtual Observatory Alliance) provides protocols and standards for the various data flavors (images, spectra, catalogues … ), and since the data center aims to grow the number of archives it hosts and services it provides, the idea arose to find a way to easily deploy and maintain VO resources. VO-Dance is a Java web application developed at IA2 that addresses this idea by creating, in a dynamical way, VO resources out of database tables or views. It is structured to be potentially DBMS- and platform-independent and consists of three main components, among them an internal DB to store resource descriptions and model metadata information, and a RESTful web application to deploy the resources to the VO community. Its extension to planetology requests is under study to make the best use of INAF software development and archive efficiency.
NASA Astrophysics Data System (ADS)
Chiharu, M.
2017-12-01
One effective way to enhance residents' disaster-prevention awareness is to make known the natural hazards that have occurred in the past where they live. The Mie Disaster Mitigation Center released a digital archive for promoting an understanding of disaster prevention on April 28, 2015. This archive records past disaster information as a digital catalog, and an effective contribution to raising residents' disaster-prevention awareness is expected. It includes the following contents: (1) interviews with disaster victims (the 1944 Tonankai Earthquake, the Ise Bay Typhoon, and so on); (2) information on tsunami monuments; (3) descriptions of disasters in local history materials (school history books, municipal history books, and so on). These contents are plotted on a map so that they are shown clearly geographically. This presentation makes it easy for all age groups to understand how past disaster information relates to where they live.
Aguirre-Hudson, Begoña; Whitworth, Isabella; Spooner, Brian M
2011-01-01
This is an historical and descriptive account of 28 herbarium specimens, 27 lichens and an alga, found in the archives of Charles Chalcraft, a descendant of the Bedford family, who were dye manufacturers in Leeds, England, in the 19th century. The lichens comprise 13 different morphotypes collected in the Canary Islands and West Africa by the French botanist J. M. Despréaux between 1833 and 1839. The collections include samples of "Roccella fuciformis", "R. phycopsis" and "R. tinctoria" (including the fertile morphotype "R. canariensis"), "Ramalina crispatula" and "R. cupularis", two distinct morphotypes of "Sticta", "S. canariensis" and "S. dufouri", "Physconia enteroxantha", "Pseudevernia furfuracea var. ceratea" and "Pseudocyphellaria argyracea". The herbarium also includes authentic material of "Parmotrema tinctorum" and a probable syntype of "Seirophora scorigena". Most of these species are known as a source of the purple dye orchil, which was used to dye silk and wool.
Agile based "Semi-"Automated Data ingest process : ORNL DAAC example
NASA Astrophysics Data System (ADS)
Santhana Vannan, S. K.; Beaty, T.; Cook, R. B.; Devarakonda, R.; Hook, L.; Wei, Y.; Wright, D.
2015-12-01
The ORNL DAAC archives and publishes data and information relevant to biogeochemical, ecological, and environmental processes. The data archived at the ORNL DAAC must be well formatted, self-descriptive, and documented, as well as referenced in a peer-reviewed publication. The ORNL DAAC ingest team curates diverse data sets from multiple data providers simultaneously. To streamline the ingest process, the data set submission process at the ORNL DAAC has recently been updated to use an agile process, and a semi-automated workflow system has been developed to provide a consistent data provider experience and to create a uniform data product. The goals of the semi-automated agile ingest process are to: (1) provide the ability to track a data set from acceptance to publication; (2) automate steps that can be automated, to improve efficiency and reduce redundancy; (3) update legacy ingest infrastructure; and (4) provide a centralized system to manage the various aspects of ingest. This talk will cover the agile methodology, workflow, and tools developed through this system.
The metagenomic data life-cycle: standards and best practices
DOE Office of Scientific and Technical Information (OSTI.GOV)
ten Hoopen, Petra; Finn, Robert D.; Bongo, Lars Ailo
Metagenomics data analyses from independent studies can only be compared if the analysis workflows are described in a harmonised way. In this overview, we have mapped the landscape of data standards available for the description of essential steps in metagenomics: (1) material sampling, (2) material sequencing, (3) data analysis, and (4) data archiving and publishing. Taking examples from marine research, we summarise essential variables used to describe material sampling processes and sequencing procedures in a metagenomics experiment. These aspects of metagenomics dataset generation have been to some extent addressed by the scientific community, but greater awareness and adoption are still needed. We emphasise the lack of standards relating to reporting how metagenomics datasets are analysed and how the metagenomics data analysis outputs should be archived and published. We propose best practice as a foundation for a community standard to enable reproducibility and better sharing of metagenomics datasets, leading ultimately to greater metagenomics data reuse and repurposing.
Preparing a collection of radiology examinations for distribution and retrieval.
Demner-Fushman, Dina; Kohli, Marc D; Rosenman, Marc B; Shooshan, Sonya E; Rodriguez, Laritza; Antani, Sameer; Thoma, George R; McDonald, Clement J
2016-03-01
Clinical documents made available for secondary use play an increasingly important role in discovery of clinical knowledge, development of research methods, and education. An important step in facilitating secondary use of clinical document collections is easy access to descriptions and samples that represent the content of the collections. This paper presents an approach to developing a collection of radiology examinations, including both the images and radiologist narrative reports, and making them publicly available in a searchable database. The authors collected 3996 radiology reports from the Indiana Network for Patient Care and 8121 associated images from the hospitals' picture archiving systems. The images and reports were de-identified automatically and then the automatic de-identification was manually verified. The authors coded the key findings of the reports and empirically assessed the benefits of manual coding on retrieval. The automatic de-identification of the narrative was aggressive and achieved 100% precision at the cost of rendering a few findings uninterpretable. Automatic de-identification of images was not quite as perfect. Images for two of 3996 patients (0.05%) showed protected health information. Manual encoding of findings improved retrieval precision. Stringent de-identification methods can remove all identifiers from text radiology reports. DICOM de-identification of images does not remove all identifying information and needs special attention to images scanned from film. Adding manual coding to the radiologist narrative reports significantly improved relevancy of the retrieved clinical documents. The de-identified Indiana chest X-ray collection is available for searching and downloading from the National Library of Medicine (http://openi.nlm.nih.gov/). Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.
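The automatic de-identification step described above can be sketched as a rule-based text scrubber. This is a deliberately minimal illustration: the patterns below are hypothetical and far less complete than the pipeline used for the Indiana collection.

```python
import re

# Illustrative protected-health-information patterns; a production
# de-identifier uses far more rules plus manual verification, as above.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),
]

def deidentify(text):
    """Replace each matched identifier with a category placeholder."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

report = "Chest X-ray read by Dr. Smith on 03/15/2014. Call 555-123-4567."
clean = deidentify(report)
```

An aggressive rule set like this trades recall of findings for precision of removal, which matches the paper's observation that 100% removal can render a few findings uninterpretable.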
Description of data on the Nimbus 7 LIMS map archive tape: Temperature and geopotential height
NASA Technical Reports Server (NTRS)
Haggard, K. V.; Remsberg, E. E.; Grose, W. L.; Russell, J. M., III; Marshall, B. T.; Lingenfelser, G.
1986-01-01
The process by which analyses of the Limb Infrared Monitor of the Stratosphere (LIMS) experiment data were used to produce estimates of synoptic maps of temperature and geopotential height is described. In addition to a detailed description of the analysis procedure, several interesting features in the data are discussed, and these features are used to demonstrate how the analysis procedure produced the final maps and how one can estimate the uncertainties in the maps. In addition, features of the analysis are noted that would influence how one might use, or interpret, the results. These include subjects such as smoothing and the interpretation of wave components. While some suggestions are made for an improved analysis of the data, it is shown that, in general, the maps are an excellent estimation of the synoptic fields.
Murphy, Danielle A.; Ely, Heather A.; Shoemaker, Robert; Boomer, Aaron; Culver, Brady P.; Hoskins, Ian; Haimes, Josh D.; Walters, Ryan D.; Fernandez, Diane; Stahl, Joshua A.; Lee, Jeeyun; Kim, Kyoung-Mee; Lamoureux, Jennifer
2017-01-01
Targeted therapy combined with companion diagnostics has led to the advancement of next-generation sequencing (NGS) for detection of molecular alterations. However, using a diagnostic test to identify patient populations with low-prevalence molecular alterations, such as gene rearrangements, poses efficiency and cost challenges. To address this, we have developed a 2-step diagnostic test to identify NTRK1, NTRK2, NTRK3, ROS1, and ALK rearrangements in formalin-fixed paraffin-embedded clinical specimens. This test comprises immunohistochemistry screening using a pan-receptor tyrosine kinase cocktail of antibodies to identify samples expressing TrkA (encoded by NTRK1), TrkB (encoded by NTRK2), TrkC (encoded by NTRK3), ROS1, and ALK, followed by an RNA-based anchored multiplex polymerase chain reaction NGS assay. We demonstrate that the NGS assay is accurate and reproducible in identification of gene rearrangements. Furthermore, implementation of an RNA quality control metric to assess the presence of amplifiable nucleic acid input material enables a measure of confidence when an NGS result is negative for gene rearrangements. Finally, we demonstrate that performing a pan-receptor tyrosine kinase immunohistochemistry staining enriches the patient population for detection of gene rearrangements from 4% to 9% and has a 100% negative predictive value. Together, this 2-step assay is an efficient method for detection of gene rearrangements in both clinical testing and studies of archival formalin-fixed paraffin-embedded specimens. PMID:27028240
Okamoto, Masaaki; Naito, Mariko; Miyanohara, Mayu; Imai, Susumu; Nomura, Yoshiaki; Saito, Wataru; Momoi, Yasuko; Takada, Kazuko; Miyabe-Nishiwaki, Takako; Tomonaga, Masaki; Hanada, Nobuhiro
2016-12-01
Streptococcus troglodytae TKU31 was isolated from the oral cavity of a chimpanzee (Pan troglodytes) and was found to be the species of the mutans group streptococci most closely related to Streptococcus mutans. The complete sequence of the TKU31 genome consists of a single circular chromosome that is 2,097,874 base pairs long and has a G + C content of 37.18%. It possesses 2082 coding sequences (CDSs), 65 tRNAs and five rRNA operons (15 rRNAs). Two clustered regularly interspaced short palindromic repeats, six insertion sequences and two predicted prophage elements were identified. The genome of TKU31 harbors some putative virulence-associated genes, including the gtfB, gtfC and gtfD genes encoding glucosyltransferase and the gbpA, gbpB, gbpC and gbpD genes encoding glucan-binding cell wall-anchored protein. The deduced amino acid sequence of the rhamnose-glucose polysaccharide F gene (rgpF), which is one of the serotype determinants, is 91% identical to that of the S. mutans LJ23 (serotype k) strain. However, two other virulence-associated genes, cnm and cbm, which encode collagen-binding proteins, were not found in the TKU31 genome. The complete genome sequence of S. troglodytae TKU31 has been deposited at DDBJ/European Nucleotide Archive/GenBank under the accession no. AP014612. © 2016 The Societies and John Wiley & Sons Australia, Ltd.
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS), and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3D SPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve greater compression efficiency, the algorithm encodes only the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications. PMID:22606653
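The grouping of slices into independently coded units can be sketched as below; the function names and the NumPy array representation are assumptions for illustration, not the 3DOBHS-SPIHT implementation itself:

```python
import numpy as np

def group_slices(volume: np.ndarray, gos_size: int) -> list:
    """Split a 3D volume (slices, rows, cols) into groups of slices (GOS);
    each GOS would then be wavelet-coded as an independent unit."""
    return [volume[i:i + gos_size] for i in range(0, volume.shape[0], gos_size)]

def gos_index(slice_idx: int, gos_size: int) -> int:
    """Random access: a viewer requesting slice k only decodes GOS k // gos_size."""
    return slice_idx // gos_size
```

A small GOS trades a little compression efficiency for lower encoder/decoder memory use and faster random access to individual slices, mirroring the trade-off discussed in the abstract.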
Specification and verification of gate-level VHDL models of synchronous and asynchronous circuits
NASA Technical Reports Server (NTRS)
Russinoff, David M.
1995-01-01
We present a mathematical definition of hardware description language (HDL) that admits a semantics-preserving translation to a subset of VHDL. Our HDL includes the basic VHDL propagation delay mechanisms and gate-level circuit descriptions. We also develop formal procedures for deriving and verifying concise behavioral specifications of combinational and sequential devices. The HDL and the specification procedures have been formally encoded in the computational logic of Boyer and Moore, which provides a LISP implementation as well as a facility for mechanical proof-checking. As an application, we design, specify, and verify a circuit that achieves asynchronous communication by means of the biphase mark protocol.
The design of a petabyte archive and distribution system for the NASA ECS project
NASA Technical Reports Server (NTRS)
Caulk, Parris M.
1994-01-01
The NASA EOS Data and Information System (EOSDIS) Core System (ECS) will contain one of the largest data management systems ever built - the ECS Science and Data Processing System (SDPS). SDPS is designed to support long term Global Change Research by acquiring, producing, and storing earth science data, and by providing efficient means for accessing and manipulating that data. The first two releases of SDPS, Release A and Release B, will be operational in 1997 and 1998, respectively. Release B will be deployed at eight Distributed Active Archive Centers (DAACs). Individual DAACs will archive different collections of earth science data, and will vary in archive capacity. The storage and management of these data collections is the responsibility of the SDPS Data Server subsystem. It is anticipated that by the year 2001, the Data Server subsystem at the Goddard DAAC must support a near-line data storage capacity of one petabyte. The development of SDPS is a system integration effort in which COTS products will be used in favor of custom components in every possible way. Some software and hardware capabilities required to meet ECS data volume and storage management requirements beyond 1999 are not yet supported by available COTS products. The ECS project will not undertake major custom development efforts to provide these capabilities. Instead, SDPS and its Data Server subsystem are designed to support initial implementations with current products, and provide an evolutionary framework that facilitates the introduction of advanced COTS products as they become available. This paper provides a high-level description of the Data Server subsystem design from a COTS integration standpoint, and discusses some of the major issues driving the design. The paper focuses on features of the design that will make the system scalable and adaptable to changing technologies.
Testing the Archivas Cluster (Arc) for Ozone Monitoring Instrument (OMI) Scientific Data Storage
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2005-01-01
The Ozone Monitoring Instrument (OMI) launched on NASA's Aura spacecraft, the third of the major platforms of the EOS program, on July 15, 2004. In addition to the long term archive and distribution of the data from OMI through the Goddard Earth Science Distributed Active Archive Center (GESDAAC), we are evaluating other archive mechanisms that can store the data in a more immediately accessible form where it can be used for further data production and analysis. In 2004, Archivas, Inc. was selected by NASA's Small Business Innovative Research (SBIR) program for the development of their Archivas Cluster (ArC) product. ArC is an online disk-based system utilizing self-management and automation on a Linux cluster. Its goal is to produce a low-cost solution coupled with ease of management. The OMI project is an application partner of the SBIR program, and has deployed a small cluster (5TB) based on the beta Archivas software. We performed extensive testing of the unit using production OMI data since launch. In 2005, Archivas, Inc. was funded in SBIR Phase II for further development, which will include testing scalability with the deployment of a larger (35TB) cluster at Goddard. We plan to include ArC in the OMI Team Leader Computing Facility (TLCF), hosting OMI data for direct access and analysis by the OMI Science Team. This presentation will include a brief technical description of the Archivas Cluster, a summary of the SBIR Phase I beta testing results, and an overview of the OMI ground data processing architecture including its interaction with the Phase II Archivas Cluster and hosting of OMI data for the scientists.
Improving Access to NASA Earth Science Data through Collaborative Metadata Curation
NASA Astrophysics Data System (ADS)
Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.
2017-12-01
The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
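The first method's idea, that prediction residuals are themselves indexable and form a textural signature, can be sketched as follows; the left-neighbor predictor and L1 histogram comparison are simplified stand-ins for the predictive coders the paper actually uses:

```python
import numpy as np

def prediction_residuals(img: np.ndarray) -> np.ndarray:
    """Left-neighbor predictor: each pixel is predicted by the pixel to its left;
    the residual (actual minus prediction) is what a lossless coder would encode."""
    img = img.astype(np.int16)          # widen so differences don't wrap
    res = np.empty_like(img)
    res[:, 0] = img[:, 0]               # first column has no left neighbor
    res[:, 1:] = img[:, 1:] - img[:, :-1]
    return res

def residual_histogram(img: np.ndarray, bins: int = 64) -> np.ndarray:
    """Normalized residual histogram, used as a simple texture descriptor."""
    res = prediction_residuals(img)[:, 1:].ravel()
    hist, _ = np.histogram(res, bins=bins, range=(-255, 255))
    return hist / hist.sum()

def l1_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Rank database images by histogram similarity to a query."""
    return float(np.abs(h1 - h2).sum())
```

Because the residuals are exactly what the lossless coder stores, this descriptor can be computed without fully decompressing the image, which is the point of compressed-domain retrieval.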
NASA's Long-Term Archive (LTA) of ICESat Data at the National Snow and Ice Data Center (NSIDC)
NASA Astrophysics Data System (ADS)
Fowler, D. K.; Moses, J. F.; Dimarzio, J. P.; Webster, D.
2011-12-01
Data stewardship, preservation, and reproducibility are becoming principal parts of a data manager's work. In an era of distributed data and information systems, where the host location ought to be transparent to the internet user, it is of vital importance that organizations make a commitment to both current and long-term goals of data management and the preservation of scientific data. NASA's EOS Data and Information System (EOSDIS) is a distributed system of discipline-specific archives and mission-specific science data processing facilities. Satellite missions and instruments go through a lifecycle that involves pre-launch calibration, on-orbit data acquisition and product generation, and final reprocessing. Data products and descriptions flow to the archives for distribution on a regular basis during the active part of the mission. However, there is additional information from the product generation and science teams needed to ensure the observations will be useful for long term climate studies. Examples include ancillary input datasets, product generation software, and production history as developed by the team during the course of product generation. These data and information will need to be archived after product data processing is completed. Using inputs from the USGCRP Workshop on Long Term Archive Requirements (1998), discussions with EOS instrument teams, and input from the 2011 ESIP Federation meeting, NASA is developing a set of Earth science data and information content requirements for long term preservation that will ultimately be used for all the EOS missions as they come to completion. Since the ICESat/GLAS mission is one of the first to come to an end, NASA and NSIDC are preparing for long-term support of the ICESat mission data now.
For a long-term archive, it is imperative that there is sufficient information about how products were prepared in order to convince future researchers that the scientific results are accurate, understandable, useable, and reproducible. Our experience suggests data centers know what to preserve in most cases, i.e., the processing algorithms along with the Level 0 or Level 1a input and ancillary products used to create the higher-level products will be archived and made available to users. In other cases the data centers must seek guidance from the science team, e.g., for pre-launch, calibration/validation, and test data. All these data are an important part of product provenance, contributing to and helping establish the integrity of the scientific observations for long term climate studies. In this presentation we will describe application of information content requirements, guidance from the ICESat/GLAS Science Team and the flow of additional information from the ICESat Science team and Science Investigator-Led Processing System to the Distributed Active Archive Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Joel D.; Yang, Yao-Lun; Evans, Neal J., II
2016-03-15
We present the COPS-DIGIT-FOOSH (CDF) Herschel spectroscopy data product archive, and related ancillary data products, along with data fidelity assessments, and a user-created archive in collaboration with the Herschel-PACS and SPIRE ICC groups. Our products include datacubes, contour maps, automated line fitting results, and best 1D spectra products for all protostellar and disk sources observed with PACS in RangeScan mode for two observing programs: the DIGIT Open Time Key Program (KPOT-nevans-1 and SDP-nevans-1; PI: N. Evans), and the FOOSH Open Time Program (OT1-jgreen02-2; PI: J. Green). In addition, we provide our best SPIRE-FTS spectroscopic products for the COPS Open Time Program (OT2-jgreen02-6; PI: J. Green) and FOOSH sources. We include details of data processing, descriptions of output products, and tests of their reliability for user applications. We identify the parts of the data set to be used with caution. The resulting absolute flux calibration has improved in almost all cases. Compared to previous reductions, the resulting rotational temperatures and numbers of CO molecules have changed substantially in some sources. On average, however, the rotational temperatures have not changed substantially (<2%), but the number of warm (Trot ∼ 300 K) CO molecules has increased by about 18%.
NASA Astrophysics Data System (ADS)
Green, Joel D.; Yang, Yao-Lun; Evans, Neal J., II; Karska, Agata; Herczeg, Gregory; van Dishoeck, Ewine F.; Lee, Jeong-Eun; Larson, Rebecca L.; Bouwman, Jeroen
2016-03-01
We present the COPS-DIGIT-FOOSH (CDF) Herschel spectroscopy data product archive, and related ancillary data products, along with data fidelity assessments, and a user-created archive in collaboration with the Herschel-PACS and SPIRE ICC groups. Our products include datacubes, contour maps, automated line fitting results, and best 1D spectra products for all protostellar and disk sources observed with PACS in RangeScan mode for two observing programs: the DIGIT Open Time Key Program (KPOT_nevans1 and SDP_nevans_1; PI: N. Evans), and the FOOSH Open Time Program (OT1_jgreen02_2; PI: J. Green). In addition, we provide our best SPIRE-FTS spectroscopic products for the COPS Open Time Program (OT2_jgreen02_6; PI: J. Green) and FOOSH sources. We include details of data processing, descriptions of output products, and tests of their reliability for user applications. We identify the parts of the data set to be used with caution. The resulting absolute flux calibration has improved in almost all cases. Compared to previous reductions, the resulting rotational temperatures and numbers of CO molecules have changed substantially in some sources. On average, however, the rotational temperatures have not changed substantially (<2%), but the number of warm (Trot ∼ 300 K) CO molecules has increased by about 18%.
Supporting the Use of GPM-GV Field Campaign Data Beyond Project Scientists
NASA Astrophysics Data System (ADS)
Weigel, A. M.; Smith, D. K.; Sinclair, L.; Bugbee, K.
2017-12-01
The Global Precipitation Measurement (GPM) Mission Ground Validation (GV) consisted of a collection of field campaigns at various locations, each focusing on particular aspects of precipitation. Data collected during the GPM-GV are necessary for better understanding the instruments and algorithms used to monitor water resources, study the global hydrologic cycle, understand climate variability, and improve weather prediction. The GPM-GV field campaign data have been archived at the NASA Global Hydrology Resource Center (GHRC) Distributed Active Archive Center (DAAC). These data consist of a heterogeneous collection of observations that require careful handling, full descriptive user guides, and helpful instructions for data use. These actions are part of the data archival process. In addition, the GHRC focuses on expanding the use of GPM-GV data beyond the validation and instrument researchers that participated in the field campaigns. To accomplish this, GHRC ties together the similarities and differences between the various field campaigns with the goal of improving user documents to be more easily read by those outside the field of research. In this poster, the authors will describe the GPM-GV datasets, discuss data use among the broader community, outline the types of problems/issues with these datasets, demonstrate what tools support data visualization and use, and highlight the outreach materials developed to educate both younger and general audiences about the data.
CRISPR-Cas encoding of a digital movie into the genomes of a population of living bacteria.
Shipman, Seth L; Nivala, Jeff; Macklis, Jeffrey D; Church, George M
2017-07-20
DNA is an excellent medium for archiving data. Recent efforts have illustrated the potential for information storage in DNA using synthesized oligonucleotides assembled in vitro. A relatively unexplored avenue of information storage in DNA is the ability to write information into the genome of a living cell by the addition of nucleotides over time. Using the Cas1-Cas2 integrase, the CRISPR-Cas microbial immune system stores the nucleotide content of invading viruses to confer adaptive immunity. When harnessed, this system has the potential to write arbitrary information into the genome. Here we use the CRISPR-Cas system to encode the pixel values of black and white images and a short movie into the genomes of a population of living bacteria. In doing so, we push the technical limits of this information storage system and optimize strategies to minimize those limitations. We also uncover underlying principles of the CRISPR-Cas adaptation system, including sequence determinants of spacer acquisition that are relevant for understanding both the basic biology of bacterial adaptation and its technological applications. This work demonstrates that this system can capture and stably store practical amounts of real data within the genomes of populations of living cells.
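The basic mapping that underlies this kind of DNA data storage, two bits per nucleotide, can be sketched as below; this direct byte-to-base mapping is a simplified illustration, not the authors' actual CRISPR protospacer encoding scheme:

```python
# Two bits per base: one 8-bit pixel value maps to four nucleotides.
BASES = "ACGT"

def byte_to_bases(b: int) -> str:
    """Encode one byte (e.g. a grayscale pixel value) as four bases, MSB first."""
    return "".join(BASES[(b >> shift) & 0b11] for shift in (6, 4, 2, 0))

def encode_pixels(pixels: list) -> str:
    return "".join(byte_to_bases(p) for p in pixels)

def decode_bases(seq: str) -> list:
    """Recover the byte sequence from a string of A/C/G/T."""
    idx = {c: i for i, c in enumerate(BASES)}
    out = []
    for i in range(0, len(seq), 4):
        val = 0
        for c in seq[i:i + 4]:
            val = (val << 2) | idx[c]
        out.append(val)
    return out
```

In practice the authors had to work within biological constraints (spacer length, sequence determinants of acquisition), which is why the real encoding is considerably more involved than this idealized mapping.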
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
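The core of the wavelet/scalar quantization method, uniform scalar quantization of subband coefficients, can be sketched as follows; the real WSQ standard applies subband-dependent bin widths and a dead zone around zero, which this sketch omits:

```python
import numpy as np

def quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    """Uniform scalar quantization: map each wavelet coefficient to a bin index."""
    return np.round(coeffs / step).astype(np.int32)

def dequantize(indices: np.ndarray, step: float) -> np.ndarray:
    """Reconstruct coefficients at bin centers; error is bounded by step / 2."""
    return indices.astype(np.float64) * step
```

Compression comes from entropy-coding the bin indices, most of which cluster near zero in high-frequency subbands; the step size per subband is what lets the encoder trade quality against the roughly 15:1 ratio cited above.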
Zigman, Peter M
2009-01-01
This article provides, for the first time, an edition of and commentary on the letters of Friedrich Ratzel to his older colleague, teacher and mentor, Ernst Haeckel, which are kept in the archive of the Ernst-Haeckel-House (memorial museum) in Jena. Altogether, fifteen letters and one postcard are presented. Haeckel's letters to Ratzel are considered lost. The edition is prefaced by a detailed description of Ratzel's life, career and work.
WCRP surface radiation budget shortwave data product description, version 1.1
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Charlock, T. P.; Staylor, W. F.; Pinker, R. T.; Laszlo, I.; Dipasquale, R. C.; Ritchey, N. A.
1993-01-01
Shortwave radiative fluxes which reach the Earth's surface are key elements that influence both atmospheric and oceanic circulation. The World Climate Research Program has established the Surface Radiation Budget climatology project with the ultimate goal of determining the various components of the surface radiation budget from satellite data on a global scale. This report describes the first global product that is being produced and archived as part of that effort. The interested user can obtain the monthly global data sets free of charge using e-mail procedures.
GLAS Long-Term Archive: Preservation and Stewardship for a Vital Earth Observing Mission
NASA Astrophysics Data System (ADS)
Fowler, D. K.; Moses, J. F.; Zwally, J.; Schutz, B. E.; Hancock, D.; McAllister, M.; Webster, D.; Bond, C.
2012-12-01
Data stewardship, preservation, and reproducibility are fast becoming principal parts of a data manager's work. In an era of distributed data and information systems, it is of vital importance that organizations make a commitment to both current and long-term goals of data management and the preservation of scientific data. Satellite missions and instruments go through a lifecycle that involves pre-launch calibration, on-orbit data acquisition and product generation, and final reprocessing. Data products and descriptions flow to the archives for distribution on a regular basis during the active part of the mission. However, there is additional information from the product generation and science teams needed to ensure the observations will be useful for long term climate studies. Examples include ancillary input datasets, product generation software, and production history as developed by the team during the course of product generation. These data and information will need to be archived after product data processing is completed. NASA has developed a set of Earth science data and information content requirements for long term preservation that is being used for all the EOS missions as they come to completion. Since the ICESat/GLAS mission was one of the first to end, NASA and NSIDC, in collaboration with the science team, are collecting data, software, and documentation, preparing for long-term support of the ICESat mission. For a long-term archive, it is imperative to preserve sufficient information about how products were prepared in order to assure future researchers that the scientific results are accurate, understandable, and useable. Our experience suggests data centers know what to preserve in most cases. That is, the processing algorithms along with the Level 0 or Level 1a input and ancillary products used to create the higher-level products will be archived and made available to users.
In other cases, such as pre-launch, calibration/validation, and test data, the data centers must seek guidance from the science team. All these data are essential for product provenance, contributing to and helping establish the integrity of the scientific observations for long term climate studies. In this presentation we will describe application of information gathering with guidance from the ICESat/GLAS Science Team, and the flow of additional information from the ICESat Science team and Science Investigator-Led Processing System to the NSIDC Distributed Active Archive Center. This presentation will also cover how we envision user support through the years of the Long-Term Archive.
GIS Technologies For The New Planetary Science Archive (PSA)
NASA Astrophysics Data System (ADS)
Docasal, R.; Barbarisi, I.; Rios, C.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; De Marchi, G.; Martinez, S.; Grotheer, E.; Lim, T.; Besse, S.; Heather, D.; Fraga, D.; Barthelemy, M.
2015-12-01
Geographical information systems (GIS) are becoming increasingly used in planetary science. GIS are computerised systems for the storage, retrieval, manipulation, analysis, and display of geographically referenced data. Some data stored in the Planetary Science Archive (PSA), for instance a set of Mars Express/Venus Express data, have spatial metadata associated with them. To facilitate users in handling and visualising spatial data in GIS applications, the new PSA should support interoperability with interfaces implementing the standards approved by the Open Geospatial Consortium (OGC). These standards are followed in order to develop open interfaces and encodings that allow data to be exchanged with GIS client applications, well-known examples of which are Google Earth and NASA World Wind, as well as open source tools such as OpenLayers. The technology already exists within PostgreSQL databases to store searchable geometrical data in the form of the PostGIS extension. GeoServer is an existing open source map server; an instance deployed for the new PSA uses the OGC standards to allow, among other things, the sharing, processing and editing of spatial data through the Web Feature Service (WFS) standard, as well as serving georeferenced map images through the Web Map Service (WMS). The final goal of the new PSA, being developed by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC), is to create an archive which enables science exploitation of ESA's planetary mission datasets. This can be facilitated through the GIS framework, offering interfaces (both a web GUI and scriptable APIs) that can be used more easily and scientifically by the community, and that will also enable the community to build added-value services on top of the PSA.
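A WMS GetMap request of the kind GeoServer serves is just a parameterized URL; in the sketch below the endpoint and layer name are hypothetical, while the request parameters follow the standard OGC WMS 1.3.0 vocabulary:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url: str, layer: str, bbox, width: int, height: int) -> str:
    """Build a WMS 1.3.0 GetMap request URL (endpoint and layer are examples)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",                          # lat/lon coordinate system
        "BBOX": ",".join(str(v) for v in bbox),      # min/max extents
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical PSA-style layer on a hypothetical GeoServer instance:
url = wms_getmap_url("https://example.org/geoserver/wms",
                     "psa:mex_hrsc_mosaic", (-90, -180, 90, 180), 512, 256)
```

Any OGC-aware client (OpenLayers, QGIS, NASA World Wind) can consume such a URL, which is what makes the standards-based interface valuable for the archive.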
ProMC: Input-output data format for HEP applications using varint encoding
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; May, E.; Strand, K.; Van Gemmeren, P.
2014-10-01
A new data format for Monte Carlo (MC) events, or any structured data, including experimental data, is discussed. The format is designed to store data in a compact binary form using variable-size integer encoding as implemented in Google's Protocol Buffers package. This approach is implemented in the ProMC library, which produces smaller file sizes for MC records compared to the existing input-output libraries used in high-energy physics (HEP). Other important features of the proposed format are a separation of abstract data layouts from concrete programming implementations, self-description, and random access. Data stored in ProMC files can be written, read, and manipulated in a number of programming languages, such as C++, Java, Fortran, and Python.
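Variable-size integer ("varint") encoding, the basis of the format's compactness, stores seven data bits per byte plus a continuation bit, so small values take fewer bytes. A minimal sketch of the base-128 scheme used by Protocol Buffers:

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a base-128 varint (protobuf style):
    7 payload bits per byte, high bit set on all bytes except the last."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a varint back into an integer, stopping at the first byte
    whose continuation bit is clear."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * shift)
        if not byte & 0x80:
            break
    return result
```

Since particle-level quantities in an MC record are often small integers after suitable scaling, most values fit in one or two bytes, which is where the file-size savings over fixed-width formats come from.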
Hupé, Ginette J; Lewis, John E; Benda, Jan
2008-01-01
The brown ghost knifefish, Apteronotus leptorhynchus, is a model wave-type gymnotiform used extensively in neuroethological studies. Like all weakly electric fish, they produce an electric field (the electric organ discharge, EOD) and can detect electric signals in their environment using electroreceptors. During social interactions, A. leptorhynchus produces communication signals by modulating the frequency and amplitude of its EOD. The Type 2 chirp, a transient increase in EOD frequency, is the most common modulation type. We first present a description of A. leptorhynchus chirp production from a behavioural perspective, followed by a discussion of the mechanisms by which chirps are encoded by electroreceptor afferents (P-units). Both the production and encoding of chirps are influenced by the difference in EOD frequency between interacting fish, the so-called beat or difference frequency (Df). Chirps are produced most often when the Df is small, whereas attacks are more common when Dfs are large. Correlation analysis has shown that chirp production induces an echo response in interacting conspecifics and that chirps are produced when attack rates are low. Here we show that both of these relationships are strongest when Dfs are large. Electrophysiological recordings from electroreceptor afferents (P-units) have suggested that small, Type 2 chirps are encoded by increases in electroreceptor synchrony at low Dfs only. How Type 2 chirps are encoded at higher Dfs, where the signals seem to exert the greatest behavioural influence, was unknown. Here, we provide evidence that at higher Dfs, chirps could be encoded by a desynchronization of the P-unit population activity.
Pinal, Diego; Zurrón, Montserrat; Díaz, Fernando
2014-01-01
Working memory (WM) involves information encoding, maintenance, and retrieval; these processes are supported by brain activity in a network of frontal, parietal and temporal regions. Manipulation of WM load and of the duration of the maintenance period can modulate this activity. Although such modulations have been widely studied using the event-related potentials (ERP) technique, a precise description of the time course of brain activity during encoding and retrieval is still required. Here, we used this technique and principal component analysis to assess the time course of brain activity during encoding and retrieval in a delayed match-to-sample task. We also investigated the effects of memory load and duration of the maintenance period on ERP activity. Brain activity was similar during information encoding and retrieval and comprised six temporal factors, which closely matched the latency and scalp distribution of some ERP components: P1, N1, P2, N2, P300, and a slow wave. Changes in memory load modulated task performance and yielded variations in frontal lobe activation. Moreover, the P300 amplitude was smaller in the high than in the low load condition during encoding and retrieval. Conversely, the slow wave amplitude was higher in the high than in the low load condition during encoding, and the same was true for the N2 amplitude during retrieval. Thus, during encoding, memory load appears to modulate the processing resources for context updating and post-categorization processes, and during retrieval it modulates resources for stimulus classification and context updating. In addition, despite the lack of differences in task performance related to duration of the maintenance period, larger N2 amplitude and stronger activation of the left temporal lobe after long than after short maintenance periods were found during information retrieval. Thus, results regarding the duration of the maintenance period were complex, and future work is required to test the predictions of time-based decay theory.
Use of data description languages in the interchange of data
NASA Technical Reports Server (NTRS)
Pignede, M.; Real-Planells, B.; Smith, S. R.
1994-01-01
The Consultative Committee for Space Data Systems (CCSDS) is developing Standards for the interchange of information between systems, including those operating under different environments. The objective is to perform the interchange automatically, i.e. in a computer-interpretable manner. One aspect of the concept developed by CCSDS is the use of a separate data description to specify the data being transferred. Using the description, data can then be automatically parsed by the receiving computer. With a suitably expressive Data Description Language (DDL), data formats of arbitrary complexity can be handled. The advantages of this approach are: (1) the description need only be written and distributed once to all users, and (2) new software does not need to be written for each new format, provided generic tools are available to support the writing and interpretation of descriptions and the associated data instances. Consequently, the effort of 'hard coding' each new format is avoided, as are the problems of integrating multiple implementations of a given format by different users. The approach is applicable in any context where a computer-parsable description of data could enhance efficiency (e.g. within a spacecraft control system, a data delivery system or an archive). The CCSDS have identified several candidate DDLs: EAST (Extended Ada Subset), TSDN (Transfer Syntax Data Notation) and MADEL (Modified ASN.1 as a Data Description Language, a DDL based on Abstract Syntax Notation One (ASN.1) as specified in ISO/IEC 8824). This paper concentrates on ESA's development of MADEL. ESA have also developed a 'proof of concept' prototype of the required support tools, implemented on a PC under MS-DOS, which has successfully demonstrated the feasibility of the approach, including the capability within an application of retrieving and displaying particular data elements, given their MADEL descriptions (i.e. data descriptions written in MADEL).
This paper outlines the work done to date and assesses the applicability of this modified ASN.1 as a DDL. The feasibility of the approach is illustrated with several examples.
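The core DDL idea, a data description written once and interpreted by generic tools, can be illustrated with a toy sketch built on Python's struct module. The record layout and field names below are invented; they stand in for a real description written in EAST or MADEL.

```python
import struct

# A declarative description of a record layout, written once and shared
# with all users -- a toy stand-in for a DDL such as EAST or MADEL.
TELEMETRY_DESCRIPTION = [
    ("sequence_count", "I"),   # 32-bit unsigned integer
    ("temperature",    "f"),   # 32-bit IEEE float
    ("status_flags",   "B"),   # 8-bit unsigned integer
]

def parse_record(description, raw):
    """Generic parser: interprets raw bytes using only the description."""
    fmt = ">" + "".join(code for _, code in description)  # big-endian
    values = struct.unpack(fmt, raw)
    return dict(zip((name for name, _ in description), values))

# The sender packs data; the receiver needs only the shared description.
raw = struct.pack(">IfB", 42, 21.5, 0x03)
record = parse_record(TELEMETRY_DESCRIPTION, raw)
assert record["sequence_count"] == 42
```

A new format then requires only a new description, not new parsing software, which is exactly the advantage claimed for the CCSDS approach.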
NASA Technical Reports Server (NTRS)
Hinton, David A.
2001-01-01
A ground-based system has been developed to demonstrate the feasibility of automating the process of collecting relevant weather data, predicting wake vortex behavior from a data base of aircraft, prescribing safe wake vortex spacing criteria, estimating system benefit, and comparing predicted and observed wake vortex behavior. This report describes many of the system algorithms, features, limitations, and lessons learned, as well as suggested system improvements. The system has demonstrated concept feasibility and the potential for airport benefit. Significant opportunities exist, however, for improved system robustness and optimization. A condensed version of the development lab book is provided along with samples of key input and output file types. This report is intended to document the technical development process and system architecture, and to augment archived internal documents that provide detailed descriptions of software and file formats.
A Data Quality Filter for PMU Measurements: Description, Experience, and Examples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, James D.; Amidan, Brett G.
Networks of phasor measurement units (PMUs) continue to grow, and along with them, the amount of data available for analysis. With so much data, it is impractical to identify and remove poor-quality data manually. The data quality filter described in this paper was developed for use with the Data Integrity and Situation Awareness Tool (DISAT), which analyzes PMU data to identify anomalous system behavior. The filter operates based only on the information included in the data files, without supervisory control and data acquisition (SCADA) data, state estimator values, or system topology information. Measurements are compared to preselected thresholds to determine if they are reliable. Along with the filter's description, examples of data quality issues from application of the filter to nine months of archived PMU data are provided. The paper is intended to aid the reader in recognizing and properly addressing data quality issues in PMU data.
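The threshold test at the heart of such a filter is simple to sketch. The band limits below are hypothetical, not the thresholds used by DISAT.

```python
def flag_measurements(values, lower, upper):
    """Return a parallel list of booleans: True where the measurement
    falls within the preselected thresholds, False where it is suspect."""
    return [lower <= v <= upper for v in values]

# Nominal 60 Hz system frequency: values far outside a plausible band
# (e.g. a dropped sample reported as 0.0) are treated as data-quality
# problems rather than real system events. Band limits are illustrative.
freq = [59.98, 60.01, 0.0, 60.02, 75.3]
good = flag_measurements(freq, 59.0, 61.0)
```

A production filter would apply such checks per signal type (frequency, voltage magnitude, phase angle) and also flag stale or repeated values, but the pattern is the same.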
Kinjo, Akira R.; Bekker, Gert-Jan; Suzuki, Hirofumi; Tsuchiya, Yuko; Kawabata, Takeshi; Ikegawa, Yasuyo; Nakamura, Haruki
2017-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org), a member of the worldwide Protein Data Bank (wwPDB), accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing the structures and functions of proteins. We herein outline the updated web user interfaces together with the RESTful web services and the backend relational database that supports them. To enhance the interoperability of the PDB data, we previously developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, which is now a wwPDB standard called wwPDB/RDF. We have enhanced the connectivity of the wwPDB/RDF data by incorporating various external data resources. Services for searching, comparing and analyzing the ever-increasing number of large structures determined by hybrid methods are also described.
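The RDF data model mentioned above reduces to subject-predicate-object triples queried by pattern matching. The toy sketch below illustrates the idea with invented predicate and entry names; real wwPDB/RDF queries would be SPARQL against the PDBj endpoint.

```python
# Toy triple store illustrating the RDF data model used by wwPDB/RDF.
# Entry identifiers and predicates are invented for illustration only.
triples = {
    ("pdb:1abc", "dc:title", "Example lysozyme structure"),
    ("pdb:1abc", "pdbx:resolution", "1.8"),
    ("pdb:2xyz", "dc:title", "Another entry"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    mimicking a basic graph pattern in SPARQL."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

assert match(s="pdb:1abc", p="dc:title") == [
    ("pdb:1abc", "dc:title", "Example lysozyme structure")
]
```

Linking to external resources amounts to adding triples whose objects are identifiers in other datasets, which is what gives RDF its interoperability.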
MECDAS: A distributed data acquisition system for experiments at MAMI
NASA Astrophysics Data System (ADS)
Krygier, K. W.; Merle, K.
1994-02-01
For the coincidence experiments with the three spectrometer setup at MAMI an experiment control and data acquisition system has been built and was put successfully into final operation in 1992. MECDAS is designed as a distributed system using communication via Ethernet and optical links. As the front end, VME bus systems are used for real time purposes and direct hardware access via CAMAC, Fastbus or VMEbus. RISC workstations running UNIX are used for monitoring, data archiving and online and offline analysis of the experiment. MECDAS consists of several fixed programs and libraries, but large parts of readout and analysis can be configured by the user. Experiment specific configuration files are used to generate efficient and powerful code well adapted to special problems without additional programming. The experiment description is added to the raw collection of partially analyzed data to get self-descriptive data files.
ERIC Educational Resources Information Center
Vanmarcke, Steven; Mullin, Caitlin; Van der Hallen, Ruth; Evers, Kris; Noens, Ilse; Steyaert, Jean; Wagemans, Johan
2016-01-01
Typically developing (TD) adults are able to extract global information from natural images and to categorize them within a single glance. This study aimed at extending these findings to individuals with autism spectrum disorder (ASD) using a free description open-encoding paradigm. Participants were asked to freely describe what they saw when…
Design and Implementation of a Motor Incremental Shaft Encoder
2008-09-01
Front-matter abbreviations include VHDL and Verilog (hardware description languages), VSC (voltage source converter), and ZCE (zero-crossing event). The abstract fragments describe enabling accurate predictions of voltage source converter (VSC) behavior via software simulation, using an FPGA, several off-the-shelf components, a circuit-board interface between the FPGA and the power source, and a desktop computer.
Schulz, S; Romacker, M; Hahn, U
1998-01-01
The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics.
Han, Kyunghwa; Jung, Inkyung
2018-05-01
This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with a categorical outcome. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.
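For readers checking reported statistics by hand, the U statistic underlying the Mann-Whitney test, one of the two most frequently used methods above, can be computed directly. This teaching sketch counts ties as half and omits the p-value calculation that a statistics package would provide.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y:
    the number of pairs (xi, yj) with xi > yj, counting ties as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# U and its mirror statistic always sum to len(x) * len(y).
a, b = [7, 8, 9], [1, 2, 8]
assert mann_whitney_u(a, b) + mann_whitney_u(b, a) == 9.0
```

In practice one would use a library routine (e.g. `scipy.stats.mannwhitneyu`), which also returns the p-value with tie and continuity corrections.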
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, S.A.; Shinn, J.H.
1993-05-01
The Chemical Hazard Warning System (CHAWS) is designed to collect meteorological data and to display, in real time, the dispersion of hazardous chemicals that may result from an accidental release. Meteorological sensors have been placed strategically around the Lexington-Blue Grass Army Depot and are used to calculate the direction and hazard distance for a release. Based on these data, arrows depicting the release direction and distance traveled are graphically displayed on a computer screen showing a site map of the facility. The objectives of CHAWS are as follows: to determine the trajectory of the center of mass of released material from the measured wind field; to calculate the dispersion of the released material based on the measured lateral turbulence intensity (sigma theta); to determine the height of the mixing zone by measurement of the inversion height and wind profiles up to an altitude of about 1 km at sites that have SODAR units installed; to archive meteorological data for potential use in climatological descriptions for emergency planning; to archive air-quality data for preparation of compliance reports; and to provide access to the data for near-real-time hazard analysis purposes. CHAWS sites are located at the Pine Bluff Arsenal, Arkansas; the Edgewood area of Aberdeen Proving Ground, Maryland; Tooele Depot, Utah; Lexington-Blue Grass Depot, Kentucky; and Johnston Island in the Pacific. The systems vary between sites, with different features and various types of hardware; the basic system, however, is the same. Nonetheless, we have tailored the manuals to the equipment found at each site.
WIFIRE Data Model and Catalog for Wildfire Data and Tools
NASA Astrophysics Data System (ADS)
Altintas, I.; Crawl, D.; Cowart, C.; Gupta, A.; Block, J.; de Callafon, R.
2014-12-01
The WIFIRE project (wifire.ucsd.edu) is building an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. WIFIRE may be used by wildfire management authorities in the future to predict wildfire rate of spread and direction, and to assess the effectiveness of high-density sensor networks in improving fire and weather predictions. WIFIRE has created a data model for wildfire resources including sensed and archived data, sensors, satellites, cameras, modeling tools, workflows and social information including Twitter feeds. This data model and the associated wildfire resource catalog include a detailed description of the HPWREN sensor network, SDG&E's Mesonet, and NASA MODIS. In addition, the WIFIRE data model describes how to integrate the data from multiple heterogeneous sources to provide detailed fire-related information. The data catalog describes 'Observables' captured by each instrument using multiple ontologies including OGC SensorML and NASA SWEET. Observables include measurements such as wind speed, air temperature, and relative humidity, as well as their accuracy and resolution. We have implemented a REST service for publishing to and querying from the catalog using the Web Application Description Language (WADL). We are creating web-based user interfaces and mobile device apps that use the REST interface for dissemination to the wildfire-modeling community and project partners spanning academic, private, and government laboratories, while generating value for emergency officials and the general public. Additionally, the Kepler scientific workflow system is instrumented to interact with this data catalog to access real-time streaming and archived wildfire data and stream it into dynamic data-driven wildfire models at scale.
Raabe, Ellen A.; D'Anjou, Robert; Pope, Domonique K.; Robbins, Lisa L.
2011-01-01
This project combines underwater video with maps and descriptions to illustrate diverse seafloor habitats from Tampa Bay, Florida, to Mobile Bay, Alabama. A swath of seafloor was surveyed with underwater video to 100 meters (m) water depth in 1999 and 2000 as part of the Gulfstream Natural Gas System Survey. The U.S. Geological Survey (USGS) in St. Petersburg, Florida, in cooperation with Eckerd College and the Florida Department of Environmental Protection (FDEP), produced an archive of analog-to-digital underwater movies. Representative clips of seafloor habitats were selected from hundreds of hours of underwater footage. The locations of video clips were mapped to show the distribution of habitat and habitat transitions. The numerous benthic habitats in the northeastern Gulf of Mexico play a vital role in the region's economy, providing essential resources for tourism, natural gas, recreational water sports (fishing, boating, scuba diving), materials, fresh food, energy, a source of sand for beach renourishment, and more. These submerged natural resources are important to the economy but are often invisible to the general public. This product provides a glimpse of the seafloor with sample underwater video, maps, and habitat descriptions. It was developed to depict the range and location of seafloor habitats in the region but is limited by depth and by the survey track. It should not be viewed as comprehensive, but rather as a point of departure for inquiries and appreciation of marine resources and seafloor habitats. Further information is provided in the Resources section.
Experiences with making diffraction image data available: what metadata do we need to archive?
Kroon-Batenburg, Loes M J; Helliwell, John R
2014-10-01
Recently, the IUCr (International Union of Crystallography) initiated the formation of a Diffraction Data Deposition Working Group with the aim of developing standards for the representation of raw diffraction data associated with the publication of structural papers. Archiving of raw data serves several goals: to improve the record of science, to verify reproducibility and to allow detailed checks of scientific data, safeguarding against fraud and allowing reanalysis with future improved techniques. A means of studying this issue is to submit exemplar publications with associated raw data and metadata. In a recent study of the binding of cisplatin and carboplatin to histidine in lysozyme crystals under several conditions, the possible effects of the equipment and X-ray diffraction data-processing software on the occupancies and B factors of the bound Pt compounds were compared. Initially, 35.3 GB of data were transferred from Manchester to Utrecht to be processed with EVAL. A detailed description and discussion of the availability of metadata was published in a paper that was linked to a local raw data archive at Utrecht University and also mirrored at the TARDIS raw diffraction data archive in Australia. By making these raw diffraction data sets available with the article, it is possible for the diffraction community to make their own evaluation. This led one of the authors of XDS (K. Diederichs) to re-integrate the data from crystals that supposedly solely contained bound carboplatin, resulting in the analysis of partially occupied chlorine anomalous electron densities near the Pt-binding sites and the use of several criteria to more carefully assess the diffraction resolution limit. General arguments for archiving raw data, the possibilities of doing so and the required resources are discussed. The problems associated with a partially unknown experimental setup, details of which should preferably be available as metadata, are also discussed.
Current thoughts on data compression are summarized, which could be a solution especially for pixel-device data sets with fine slicing that may otherwise present an unmanageable amount of data.
NASA Astrophysics Data System (ADS)
Nass, A.
2017-12-01
Since the late 1950s a large number of planetary missions have set out to explore our solar system. The data resulting from this robotic exploration and remote sensing vary in data type, resolution and target. After preprocessing and referencing, the released data are made available to the community on different portals and archiving systems, e.g. PDS or PSA. One major use of these data is mapping, i.e. the extraction and filtering of information by combining and visualizing different kinds of base data. Mapping itself is conducted either for mission planning (e.g. identification of a landing site) or for fundamental research (e.g. reconstruction of a surface). The mapping results for mission planning are managed directly within the mission teams. The derived data for fundamental research, also describable as maps, diagrams, or analysis results, are mainly project-based and exclusively available in scientific papers. Within the last year, first steps have been taken to ensure a sustainable use of these derived data by finding an archiving system comparable to the data portals, i.e. reusable, well-documented, and sustainable. For the implementation three tasks are essential, two of which have been addressed in the past: (1) comparability and interoperability have been made possible by standardized recommendations for the visual, textual, and structural description of mapping data; (2) interoperability between users, information systems and graphic systems has been enabled by templates and guidelines for digital GIS-based mapping. These two steps have been adopted, for example, within recent mapping projects for the Dawn mission. The third task has not been implemented thus far: establishing an easily discoverable and accessible platform that holds already acquired information and published mapping results for future investigations or mapping projects.
An archive like this would significantly support the scientific community through a steady growth of knowledge and understanding, building on recent discussions within information science and management and within data warehousing. This contribution describes the necessary map-archive components that have to be considered for an efficient establishment and user-oriented accessibility. It describes how existing developments could be reused, and which components have yet to be developed.
VO for Education: Archive Prototype
NASA Astrophysics Data System (ADS)
Ramella, M.; Iafrate, G.; De Marco, M.; Molinaro, M.; Knapic, C.; Smareglia, R.; Cepparo, F.
2014-05-01
The number of remote-control telescopes dedicated to education is increasing in many countries, leading to correspondingly larger and larger amounts of stored educational data that are usually available only to local observers. Here we present the project for a new infrastructure that will allow teachers using educational telescopes to archive their data and easily publish them within the Virtual Observatory (VO), avoiding the complexity of professional tools. Students and teachers anywhere will be able to access these data, with obvious benefits for the realization of grander-scale collaborative projects. Educational VO data will also be an important resource for teachers without direct access to any educational telescope. We will use the educational telescope at our observatory in Trieste as a prototype for the future VO educational data archive resource. The publishing infrastructure will include: user authentication, content and curation validation, data validation and ingestion, and VO-compliant resource generation. All of these parts will be performed by means of server-side applications accessible through a web graphical user interface (web GUI). Apart from user registration, which will be validated by a natural person responsible for the archive (after having verified the reliability of the user and inspected one or more test files), all the subsequent steps will be automated. This means that at the very first data submission through the web GUI, a complete resource including the archive and the published VO service will be generated, ready to be registered in the VO. The effort required of the registered user will consist only of providing a self-description at the registration step and submitting the data she/he selects for publishing after each observation session.
The infrastructure will be file-format independent, and the underlying data model will use a minimal set of standard VO keywords, some of which will be specific to outreach and education, possibly including a VO field identification (astronomy, planetary science, solar physics). The VO published resource description will be designed to allow selective access to educational data by VO-aware tools, differentiating them from professional data while treating them with the same procedures, protocols and tools. The whole system will be very flexible and scalable, with the objective of leaving as little work as possible to humans.
Wang, Lingling; Hatem, Ayat; Catalyurek, Umit V; Morrison, Mark; Yu, Zhongtang
2013-01-01
The ruminal microbial community is a unique source of enzymes that underpin the conversion of cellulosic biomass. In this study, the microbial consortia adherent on solid digesta in the rumen of Jersey cattle were subjected to an activity-based metagenomic study to explore the genetic diversity of carbohydrolytic enzymes in Jersey cows, with a particular focus on cellulases and xylanases. Pyrosequencing and bioinformatic analyses of 120 carbohydrate-active fosmids identified genes encoding 575 putative Carbohydrate-Active Enzymes (CAZymes) and proteins putatively related to transcriptional regulation, transporters, and signal transduction coupled with polysaccharide degradation and metabolism. Most of these genes shared little similarity to sequences archived in databases. Genes that were predicted to encode glycoside hydrolases (GH) involved in xylan and cellulose hydrolysis (e.g., GH3, 5, 9, 10, 39 and 43) were well represented. A new subfamily (S-8) of GH5 was identified from contigs assigned to Firmicutes. These subfamilies of GH5 proteins also showed significant phylum-dependent distribution. A number of polysaccharide utilization loci (PULs) were found, and two of them contained genes encoding Sus-like proteins and cellulases that have not been reported in previous metagenomic studies of samples from the rumens of cows or other herbivores. Comparison with the large metagenomic datasets previously reported of other ruminant species (or cattle breeds) and wallabies showed that the rumen microbiome of Jersey cows might contain differing CAZymes. Future studies are needed to further explore how host genetics and diets affect the diversity and distribution of CAZymes and utilization of plant cell wall materials.
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that seems to make more sense where theory is concerned. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
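The wavelet-plus-scalar-quantization pipeline can be illustrated with a one-level Haar transform on a toy scan line. WSQ itself uses 9/7 filters, 64 subbands and Huffman entropy coding, so this sketch only shows why quantization concentrates zeros for the entropy coder to exploit.

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform: pairwise
    averages (low-pass) and differences (high-pass detail)."""
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

def quantize(coeffs, step):
    """Uniform scalar quantization: most small detail coefficients
    collapse to 0, which the entropy coder then exploits."""
    return [round(c / step) for c in coeffs]

def dequantize(codes, step):
    return [c * step for c in codes]

row = [100, 102, 101, 99, 140, 141, 139, 140]  # toy scan line
avg, det = haar_step(row)
codes = quantize(det, step=2.0)
recon_det = dequantize(codes, step=2.0)
# Small details quantize to zero; reconstruction is close but lossy,
# with per-coefficient error bounded by step/2.
assert codes.count(0) >= 2
assert max(abs(d - r) for d, r in zip(det, recon_det)) <= 1.0
```

Repeating the transform on the averages builds the subband decomposition; the bit allocation then decides how coarse the quantization step may be in each subband.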
SPSS and SAS procedures for estimating indirect effects in simple mediation models.
Preacher, Kristopher J; Hayes, Andrew F
2004-11-01
Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
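The indirect effect and its percentile bootstrap confidence interval, as estimated by macros like those described above, can be sketched in plain Python. The regression algebra below is the standard closed form for a single mediator and is not the authors' macro code.

```python
import random

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def partial_slope(m, x, y):
    """Coefficient of m in the regression of y on (m, x), closed form."""
    mm, mx, my = sum(m) / len(m), sum(x) / len(x), sum(y) / len(y)
    smm = sum((a - mm) ** 2 for a in m)
    sxx = sum((a - mx) ** 2 for a in x)
    smx = sum((a - mm) * (b - mx) for a, b in zip(m, x))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return (smy * sxx - sxy * smx) / (smm * sxx - smx ** 2)

def indirect_effect(x, m, y):
    """a*b: (effect of x on m) times (effect of m on y, controlling for x)."""
    return slope(x, m) * partial_slope(m, x, y)

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xb = [x[i] for i in idx]
        mb = [m[i] for i in idx]
        yb = [y[i] for i in idx]
        estimates.append(indirect_effect(xb, mb, yb))
    estimates.sort()
    return (estimates[int(alpha / 2 * n_boot)],
            estimates[int((1 - alpha / 2) * n_boot) - 1])
```

If zero lies outside the bootstrap interval, the indirect effect is deemed significant, which is the test the macros automate (alongside the normal-theory Sobel approach).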
Peerbolte, Stacy L; Collins, Matthew Lloyd
2013-01-01
Emergency managers must be able to think critically in order to identify and anticipate situations, solve problems, make judgements and decisions effectively and efficiently, and assume and manage risk. Heretofore, a critical thinking skills assessment of local emergency managers had yet to be conducted that tested for correlations among age, gender, education, and years in occupation. An exploratory descriptive research design, using the Watson-Glaser Critical Thinking Appraisal-Short Form (WGCTA-S), was employed to determine the extent to which a sample of 54 local emergency managers demonstrated the critical thinking skills associated with the ability to assume and manage risk as compared to the critical thinking scores of a group of 4,790 peer-level managers drawn from an archival WGCTA-S database. This exploratory design suggests that the local emergency managers, surveyed in this study, had lower WGCTA-S critical thinking scores than their equivalents in the archival database with the exception of those in the high education and high experience group. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
On-line access to remote sensing data with the satellite-data information system (ISIS)
NASA Astrophysics Data System (ADS)
Strunz, G.; Lotz-Iwen, H.-J.
1994-08-01
The German Remote Sensing Data Center (DFD) is developing the satellite-data information system ISIS as a central interface for users to access Earth observation data. ISIS has been designed to support international scientific research as well as operational applications by offering online database access via public networks, and is integrated in the international activities dedicated to catalogue and archive interoperability. A prototype of ISIS is already in use within the German Processing and Archiving Facility for ERS-1 for the storage and retrieval of digital SAR quicklook products and for the Radarmap of Germany. Operational status is envisaged for the launch of ERS-2. This paper describes the underlying concepts of ISIS and its current state of implementation. It explains the overall structure of the system and the functionality of each of its components. Emphasis is put on the description of the advisory system, the catalogue retrieval, and the online access and transfer of image data. Finally, the integration into a future global environmental data network is outlined.
Big heart data: advancing health informatics through data sharing in cardiovascular imaging.
Suinesiaputra, Avan; Medrano-Gracia, Pau; Cowan, Brett R; Young, Alistair A
2015-07-01
The burden of heart disease is rapidly worsening due to the increasing prevalence of obesity and diabetes. Data sharing and open database resources for heart health informatics are important for advancing our understanding of cardiovascular function, disease progression and therapeutics. Data sharing enables valuable information, often obtained at considerable expense and effort, to be reused beyond the specific objectives of the original study. Many government funding agencies and journal publishers are requiring data reuse, and are providing mechanisms for data curation and archival. Tools and infrastructure are available to archive anonymous data from a wide range of studies, from descriptive epidemiological data to gigabytes of imaging data. Meta-analyses can be performed to combine raw data from disparate studies to obtain unique comparisons or to enhance statistical power. Open benchmark datasets are invaluable for validating data analysis algorithms and objectively comparing results. This review provides a rationale for increased data sharing and surveys recent progress in the cardiovascular domain. We also highlight the potential of recent large cardiovascular epidemiological studies enabling collaborative efforts to facilitate data sharing, algorithm benchmarking, disease modeling and statistical atlases.
Eponymous Instruments in Orthopaedic Surgery
Buraimoh, M. Ayodele; Liu, Jane Z.; Sundberg, Stephen B.; Mott, Michael P.
2017-01-01
Every day surgeons call for instruments devised by surgeon trailblazers. This article aims to give an account of commonly used eponymous instruments in orthopaedic surgery, focusing on the original intent of their designers in order to inform how we use them today. We searched PubMed, the archives of longstanding medical journals, Google, the Internet Archive, and the HathiTrust Digital Library for information regarding the inventors and the developments of 7 instruments: the Steinmann pin, Bovie electrocautery, Metzenbaum scissors, Freer elevator, Cobb periosteal elevator, Kocher clamp, and Verbrugge bone-holding forceps. A combination of ingenuity, necessity, circumstance and collaboration produced the inventions of the seven surgical tools in our review. In some cases, surgical instruments were improvements of already existing technologies. The indications and applications of the orthopaedic devices have changed little. Meanwhile, instruments originally developed for other specialties have been adapted for our use. Although some argue for a transition from eponymous to descriptive terms in medicine, there is value in recognizing those who revolutionized surgical techniques and instrumentation. Through history, we have an opportunity to be inspired and to better understand our tools. PMID:28852360
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palanisamy, Giri
The U.S. Department of Energy (DOE)’s Atmospheric Radiation Measurement (ARM) Climate Research Facility performs routine in situ and remote-sensing observations to provide a detailed and accurate description of the Earth's atmosphere in diverse climate regimes. The result is a huge archive of diverse data sets containing observational and derived data, currently accumulating at a rate of 30 terabytes (TB) of data and 150,000 different files per month (http://www.archive.arm.gov/stats/). Continuing the current processing while scaling this to even larger sizes is extremely important to the ARM Facility and requires consistent metadata and data standards. The standards described in this document will enable development of automated analysis and discovery tools for the ever growing data volumes. They will enable consistent analysis of the multiyear data, allow for development of automated monitoring and data health status tools, and allow future capabilities of delivering data on demand that can be tailored explicitly for the user needs. This analysis ability will only be possible if the data follows a minimum set of standards. This document proposes a hierarchy of required and recommended standards.
Short genome report of cellulose-producing commensal Escherichia coli 1094.
Bernal-Bayard, Joaquin; Gomez-Valero, Laura; Wessel, Aimee; Khanna, Varun; Bouchier, Christiane; Ghigo, Jean-Marc
2018-01-01
Bacterial surface colonization and biofilm formation often rely on the production of an extracellular polymeric matrix that mediates cell-cell and cell-surface contacts. In Escherichia coli and many Betaproteobacteria and Gammaproteobacteria, cellulose is often the main component of the extracellular matrix. Here we report the complete genome sequence of the cellulose-producing strain E. coli 1094 and compare it with five other closely related genomes within E. coli phylogenetic group A. We present a comparative analysis of the regions encoding genes responsible for cellulose biosynthesis and discuss the changes that could have led to the loss of this important adaptive advantage in several E. coli strains. Data deposition: The annotated genome sequence has been deposited at the European Nucleotide Archive under the accession number PRJEB21000.
Earth-System Scales of Biodiversity Variability in Shallow Continental Margin Seafloor Ecosystems
NASA Astrophysics Data System (ADS)
Moffitt, S. E.; White, S. M.; Hill, T. M.; Kennett, J.
2015-12-01
High-resolution paleoceanographic sedimentary sequences allow for the description of ecosystem sensitivity to earth-system scales of climate and oceanographic change. Such archives from Santa Barbara Basin, California record the ecological consequences to seafloor ecosystems of climate-forced shifts in the California Current Oxygen Minimum Zone (OMZ). Here we use core MV0508-20JPC dated to 735,000±5,000 years ago (Marine Isotope Stage 18) as a "floating window" of millennial-scale ecological variability. For this investigation, previously published archives of planktonic δ18O (Globigerina bulloides) record stadial and interstadial oscillations in surface ocean temperature. Core MV0508-20JPC is an intermittently laminated archive, strongly influenced by the California Current OMZ, with continuously preserved benthic foraminifera and discontinuously preserved micro-invertebrates, including ophiuroids, echinoderms, ostracods, gastropods, bivalves and scaphopods. Multivariate statistical approaches, such as ordinations and cluster analyses, describe climate-driven changes in both foraminiferal and micro-invertebrate assemblages. Statistical ordinations illustrate that the shallow continental margin seafloor underwent predictable phase-shifts in oxygenation and biodiversity across stadial and interstadial events. A narrow suite of severely hypoxic taxa characterized foraminiferal communities from laminated intervals, including Bolivina tumida, Globobulimina spp., and Nonionella stella. Foraminiferal communities from bioturbated intervals are diverse and >60% similar to each other, and they are associated with echinoderm, ostracod and mollusc fossils. As with climate shifts in the latest Quaternary, benthic ecosystems on mid-Pleistocene continental margins responded sensitively to climatically driven changes in OMZ strength.
A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs
2005-05-24
source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in...from the difficulty to model computer programs—due to the complexity of programming languages as compared to hardware description languages—to...intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in
Acousto-Optical Method of Encoding and Visualization of Underwater Space
2014-01-27
neurons which are mathematically described as coupled nonlinear oscillators that are slightly unstable. They have a property called 'Self-Referential... self-regulating' process which is represented by Equation (5) in the ensuing description. [0083] The input/output circuitry 64 outputs signals that... other words, self-correcting dynamics of the Na and Ca ions in the membranes are closely related to the sensing and the flopping of motion actuators
Microcode Verification Project.
1980-05-01
numerical constant. The internal syntax for these minimum and maximum values is REALMIN and REALMAX. ISPSSIMP ISPSSIMP is the file simplifying bitstring...To be fair, it is quite clear that much of the labor in the verification task can be reduced if verification and code development are carried out...basis of and the language we have chosen for both encoding our descriptions of machines and reasoning about the course of computations. Internally, our
Quantitative Analysis of Hepatitis C NS5A Viral Protein Dynamics on the ER Surface.
Knodel, Markus M; Nägel, Arne; Reiter, Sebastian; Vogel, Andreas; Targett-Adams, Paul; McLauchlan, John; Herrmann, Eva; Wittum, Gabriel
2018-01-08
Exploring biophysical properties of virus-encoded components and their requirement for virus replication is an exciting new area of interdisciplinary virological research. To date, spatial resolution has only rarely been analyzed in computational/biophysical descriptions of virus replication dynamics. However, it is widely acknowledged that intracellular spatial dependence is a crucial component of virus life cycles. The hepatitis C virus-encoded NS5A protein is an endoplasmic reticulum (ER)-anchored viral protein and an essential component of the virus replication machinery. Therefore, we simulate NS5A dynamics on realistically reconstructed, curved ER surfaces by means of surface partial differential equations (sPDE) upon unstructured grids. We match the in silico NS5A diffusion constant such that the NS5A sPDE simulation data reproduce experimental NS5A fluorescence recovery after photobleaching (FRAP) time series data. This parameter estimation yields the NS5A diffusion constant. Such parameters are needed for spatial models of HCV dynamics, which we are developing in parallel but which remain qualitative at this stage. Thus, our present study likely provides the first quantitative biophysical description of the movement of a viral component. Our spatio-temporally resolved ansatz paves new ways for understanding intricate spatially defined processes central to specific aspects of virus life cycles.
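The idea of extracting a diffusion constant from FRAP data can be illustrated far more simply than the sPDE pipeline used in the article. The sketch below fits a toy exponential recovery curve and converts its half-time to D with the classic Axelrod-style spot approximation D ≈ 0.22·w²/t½; the spot radius, time constant, and choice of formula are assumptions for illustration, not values or methods from the study:

```python
import numpy as np

w = 1.0                              # bleach-spot radius (micrometers, assumed)
tau = 2.0                            # recovery time constant of the toy data
t = np.linspace(0.0, 20.0, 200)
signal = 1.0 - np.exp(-t / tau)      # normalized fluorescence recovery

# fit tau on the linearized model: -log(1 - F(t)) = t / tau
fit = signal < 0.999                 # avoid log of ~0 at full recovery
slope = np.polyfit(t[fit], -np.log(1.0 - signal[fit]), 1)[0]
tau_hat = 1.0 / slope
t_half = tau_hat * np.log(2.0)
D = 0.22 * w**2 / t_half             # Axelrod-style 2D spot approximation
print(round(tau_hat, 3), round(D, 3))
```

The paper's approach replaces this closed-form shortcut with simulations on realistic curved ER geometry, tuning D until simulated and measured FRAP curves match.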
Golby, Paul; Nunez, Javier; Cockle, Paul J.; Ewer, Katie; Logan, Karen; Hogarth, Philip; Vordermeier, H. Martin; Hinds, Jason; Hewinson, R. Glyn; Gordon, Stephen V.
2011-01-01
Genome sequencing of Mycobacterium tuberculosis complex members has accelerated the search for new disease-control tools. Antigen mining is one area that has benefited enormously from access to genome data. As part of an ongoing antigen mining programme, we screened genes that were previously identified by transcriptome analysis as upregulated in response to an in vitro acid shock for their in vivo expression profile and antigenicity. We show that the genes encoding two methyltransferases, Mb1438c/Rv1403c and Mb1440c/Rv1404c, were highly upregulated in a mouse model of infection, and were antigenic in M. bovis-infected cattle. As the genes encoding these antigens were highly upregulated in vivo, we sought to define their genetic regulation. A mutant was constructed that was deleted for their putative regulator, Mb1439/Rv1404; loss of the regulator led to increased expression of the flanking methyltransferases and a defined set of distal genes. This work has therefore generated both applied and fundamental outputs, with the description of novel mycobacterial antigens that can now be moved into field trials, but also with the description of a regulatory network that is responsive to both in vivo and in vitro stimuli. PMID:18375799
Quantitative Analysis of Hepatitis C NS5A Viral Protein Dynamics on the ER Surface
Nägel, Arne; Reiter, Sebastian; Vogel, Andreas; McLauchlan, John; Herrmann, Eva; Wittum, Gabriel
2018-01-01
Exploring biophysical properties of virus-encoded components and their requirement for virus replication is an exciting new area of interdisciplinary virological research. To date, spatial resolution has only rarely been analyzed in computational/biophysical descriptions of virus replication dynamics. However, it is widely acknowledged that intracellular spatial dependence is a crucial component of virus life cycles. The hepatitis C virus-encoded NS5A protein is an endoplasmic reticulum (ER)-anchored viral protein and an essential component of the virus replication machinery. Therefore, we simulate NS5A dynamics on realistically reconstructed, curved ER surfaces by means of surface partial differential equations (sPDE) upon unstructured grids. We match the in silico NS5A diffusion constant such that the NS5A sPDE simulation data reproduce experimental NS5A fluorescence recovery after photobleaching (FRAP) time series data. This parameter estimation yields the NS5A diffusion constant. Such parameters are needed for spatial models of HCV dynamics, which we are developing in parallel but which remain qualitative at this stage. Thus, our present study likely provides the first quantitative biophysical description of the movement of a viral component. Our spatio-temporally resolved ansatz paves new ways for understanding intricate spatially defined processes central to specific aspects of virus life cycles. PMID:29316722
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method of 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) in the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system lets users navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test sample changes only slightly while the visual quality of the objects increases evidently.
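Giving ROI macroblocks more bits can be sketched as a quantization-parameter (QP) map: 16x16 blocks that overlap the projected ROI get a lower QP, i.e. finer quantization. The function and parameter names below are illustrative, not the paper's implementation or any codec's API:

```python
import numpy as np

def roi_qp_map(roi_mask, base_qp=32, roi_qp_offset=-6, mb=16):
    """Per-macroblock QP map: lower QP (more bits) where the ROI mask is set."""
    h, w = roi_mask.shape
    qp = np.full((h // mb, w // mb), base_qp, dtype=int)
    for i in range(h // mb):
        for j in range(w // mb):
            block = roi_mask[i * mb:(i + 1) * mb, j * mb:(j + 1) * mb]
            if block.any():              # macroblock touches the projected ROI
                qp[i, j] = base_qp + roi_qp_offset
    return qp

# toy 64x64 frame whose ROI came from projecting a 3D object to 2D
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
qp = roi_qp_map(mask)
print(qp)    # central macroblocks get QP 26, background stays at 32
```

An encoder consuming such a map spends its rate budget preferentially on the objects the user is interacting with, which is the effect the abstract reports as near-constant PSNR with visibly better ROI quality.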
Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R
2003-01-01
Background The power of microarray analysis can be realized only if data is systematically archived and linked to biological annotations as well as analysis algorithms. Description The Longhorn Array Database (LAD) is a MIAME compliant microarray database that operates on PostgreSQL and Linux. It is a fully open source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is available at Conclusions Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545
Formalized description and construction of semantic dictionary of graphic-text spatial relationship
NASA Astrophysics Data System (ADS)
Sun, Yizhong; Xue, Xiaolei; Zhao, Xiaoqin
2008-10-01
Graphic and text are two major elements in presenting the results of urban planning and land administration. In combination, they convey the complex relationships resulting from spatial analysis and decision-making. Accurately interpreting and representing these relationships are important steps towards an intelligent GIS for urban planning. This paper employs a concept hierarchy tree to formalize graphic-text relationships through a framework of a spatial object lexicon, a spatial relationship lexicon, a restriction lexicon, an applied pattern base, and a word segmentation rule base. The methodology is further verified and shown to be effective on several urban planning archives.
Improved Data Access From the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D.; Oppenheimer, D.; Zuzlewski, S.; Klein, F.; Jensen, E.; Gee, L.; Murray, M.; Romanowicz, B.
2002-12-01
The NCEDC is a joint project of the UC Berkeley Seismological Laboratory and the USGS Menlo Park to provide a long-term archive and distribution center for geophysical data for northern California. Most data are available via the Web at http://quake.geo.berkeley.edu and research accounts are available for access to specialized datasets. Current efforts continue to expand the available datasets, enhance distribution methods, and to provide rapid access to all datasets. The NCEDC archives continuous and event-based seismic and geophysical time-series data from the BDSN, the USGS NCSN, the UNR Seismic Network, the Parkfield HRSN, and the Calpine/Unocal Geysers network. In collaboration with the USGS, the NCEDC has archived a total of 887 channels from 139 sites of the "USGS low-frequency" geophysical network (UL), including data from strainmeters, creep meters, magnetometers, water well levels, and tiltmeters. There are 336 active continuous data channels that are updated at the NCEDC on a daily basis. Geodetic data from the BARD network of over 40 continuously recording GPS sites are archived at the NCEDC in both raw and RINEX format. The NCEDC is the primary archive for survey-mode GPS and other geodetic data collected in northern California by the USGS, universities, and other agencies. All of the BARD data and GPS data archived from USGS Menlo Park surveys are now available through the GPS Seamless Archive Centers (GSAC), and by FTP directly from the NCEDC. Virtually all time-series data at the NCEDC are now available in SEED with complete instrument responses. Assembling, verifying, and maintaining the response information for these networks is a huge task, and is accomplished through the collaborative efforts of the NCEDC and the contributing agencies. Until recently, the NCSN waveform data were available only through research accounts and special request methods due to incomplete instrument responses. 
In the last year, the USGS compiled the necessary descriptions for both historic and current NCSN instrumentation. The NCEDC and USGS jointly developed a procedure to create and maintain the hardware attributes and instrument responses at the NCEDC for the 3500 NCSN channels. As a result, the NCSN waveform data can now be distributed in SEED format. The NCEDC provides access to waveform data through Web forms, email requests, and programming interfaces. The SeismiQuery Web interface provides information about data holdings. NetDC allows users to retrieve inventory information, instrument responses, and waveforms in SEED format. STP provides both a Web and programming interface to retrieve data in SEED or other user-friendly formats. Through the newly formed California Integrated Seismic Network, we are working with the SCEDC to provide unified access to California earthquake data.
OceanNOMADS: Real-time and retrospective access to operational U.S. ocean prediction products
NASA Astrophysics Data System (ADS)
Harding, J. M.; Cross, S. L.; Bub, F.; Ji, M.
2011-12-01
The National Oceanic and Atmospheric Administration (NOAA) National Operational Model Archive and Distribution System (NOMADS) provides both real-time and archived atmospheric model output from servers at the National Centers for Environmental Prediction (NCEP) and National Climatic Data Center (NCDC) respectively (http://nomads.ncep.noaa.gov/txt_descriptions/marRutledge-1.pdf). The NOAA National Ocean Data Center (NODC) with NCEP is developing a complementary capability called OceanNOMADS for operational ocean prediction models. An NCEP ftp server currently provides real-time ocean forecast output (http://www.opc.ncep.noaa.gov/newNCOM/NCOM_currents.shtml) with retrospective access through NODC. A joint effort between the Northern Gulf Institute (NGI; a NOAA Cooperative Institute) and the NOAA National Coastal Data Development Center (NCDDC; a division of NODC) created the developmental version of the retrospective OceanNOMADS capability (http://www.northerngulfinstitute.org/edac/ocean_nomads.php) under the NGI Ecosystem Data Assembly Center (EDAC) project (http://www.northerngulfinstitute.org/edac/). Complementary funding support for the developmental OceanNOMADS from the U.S. Integrated Ocean Observing System (IOOS) through the Southeastern University Research Association (SURA) Model Testbed (http://testbed.sura.org/) this past year provided NODC the analogue that facilitated the creation of an NCDDC production version of OceanNOMADS (http://www.ncddc.noaa.gov/ocean-nomads/). Access tool development and storage of initial archival data sets occur on the NGI/NCDDC developmental servers, with transition to NODC/NCDDC production servers as the model archives mature and operational space and distribution capability grow. Navy operational global ocean forecast subsets for U.S. waters comprise the initial ocean prediction fields resident on the NCDDC production server.
The NGI/NCDDC developmental server currently includes the Naval Research Laboratory Inter-America Seas Nowcast/Forecast System over the Gulf of Mexico from 2004-Mar 2011, the operational Naval Oceanographic Office (NAVOCEANO) regional USEast ocean nowcast/forecast system from early 2009 to present, and the NAVOCEANO operational regional AMSEAS (Gulf of Mexico/Caribbean) ocean nowcast/forecast system from its inception 25 June 2010 to present. AMSEAS provided one of the real-time ocean forecast products accessed by NOAA's Office of Response and Restoration from the NGI/NCDDC developmental OceanNOMADS during the Deep Water Horizon oil spill last year. The developmental server also includes archived, real-time Navy coastal forecast products off coastal Japan in support of U.S./Japanese joint efforts following the 2011 tsunami. Real-time NAVOCEANO output from regional prediction systems off Southern California and around Hawaii, currently available on the NCEP ftp server, are scheduled for archival on the developmental OceanNOMADS by late 2011 along with the next generation Navy/NOAA global ocean prediction output. Accession and archival of additional regions is planned as server capacities increase.
Bracco, Laura; Bessi, Valentina; Alari, Fabiana; Sforza, Angela; Barilaro, Alessandro; Marinoni, Marinella
2011-06-01
Previous neuropsychological, lesional and functional imaging studies deal with the lateralization of memory processes, suggesting that they could be determined by the stage of processing (encoding vs retrieval) or by content (verbal vs non-verbal stimuli). The aims of the present study were: 1) to investigate if tasks that can be carried out using different strategies depending on the verbalizability of the material induce a lateralization of the mean cerebral blood flow velocity (mCBFV) in the middle cerebral arteries (MCAs), as monitored by functional transcranial Doppler (fTCD); 2) to evaluate if these patterns of cerebral activation differ in relation to age, gender and task performance. Using TCD bilateral monitoring, we recorded mCBFV variations in 35 male and 35 female healthy, right-handed volunteers, classified as "young" (age range 21-40 years, n=35) or "old" (age range 41-60 years, n=35), performing four different cognitive tasks: encoding and recognition of Geometric Figures (GF), encoding and recall of Object Localization (OL) on a picture, encoding of a verbal Room Description (RD) and Arithmetic Skill (AS). We found a significant right lateralization for the OL recall phase, and a significant left lateralization for RD and AS. When we took into consideration gender, age and performance, there was a strong effect of age on both the OL encoding and recall phases, with significant right lateralization in young volunteers not seen in the older ones. No difference in gender was detected. We found a gender×performance interaction for RD, with poor-performance females showing significant left lateralization. According to our findings, hemispheric lateralization during memory encoding is material specific in both men and women, depending on the verbalizability of the material. mCBFV right lateralization during scene encoding and recall appears lost in older people, suggesting that the healthy elderly could take advantage of mixed verbal and non-verbal strategies.
Copyright © 2010 Elsevier Srl. All rights reserved.
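A common way to quantify the lateralization such fTCD studies report is a laterality index: the left-minus-right difference in percent velocity change from baseline. The sketch below uses a generic convention for illustration; it is not necessarily the exact formula used in this study, and the toy numbers are invented:

```python
import numpy as np

def laterality_index(left, right, baseline_left, baseline_right):
    """Percent mCBFV change from baseline in each MCA, then left minus right.

    Positive values indicate left-hemisphere lateralization.
    """
    dleft = (np.mean(left) - baseline_left) / baseline_left * 100.0
    dright = (np.mean(right) - baseline_right) / baseline_right * 100.0
    return dleft - dright

# toy traces (cm/s): left MCA rises 8% over its baseline, right rises 3%
li = laterality_index(np.full(100, 54.0), np.full(100, 51.5), 50.0, 50.0)
print(li)   # positive -> left-lateralized, as for the verbal tasks above
```

Averaging such an index over many task epochs, and comparing it against its between-epoch variability, is how bilateral fTCD protocols typically decide whether a lateralization is significant.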
Cognitive Predictors of Verbal Memory in a Mixed Clinical Pediatric Sample
Jordan, Lizabeth L.; Tyner, Callie E.; Heaton, Shelley C.
2013-01-01
Verbal memory problems, along with other cognitive difficulties, are common in children diagnosed with neurological and/or psychological disorders. Historically, these “memory problems” have been poorly characterized and often present with a heterogeneous pattern of performance across memory processes, even within a specific diagnostic group. The current study examined archival neuropsychological data from a large mixed clinical pediatric sample in order to understand whether functioning in other cognitive areas (i.e., verbal knowledge, attention, working memory, executive functioning) may explain some of the performance variability seen across verbal memory tasks of the Children’s Memory Scale (CMS). Multivariate analyses revealed that among the cognitive functions examined, only verbal knowledge explained a significant amount of variance in overall verbal memory performance. Further univariate analyses examining the component processes of verbal memory indicated that verbal knowledge is specifically related to encoding, but not the retention or retrieval stages. Future research is needed to replicate these findings in other clinical samples, to examine whether verbal knowledge predicts performance on other verbal memory tasks and to explore whether these findings also hold true for visual memory tasks. Successful replication of the current study findings would indicate that interventions targeting verbal encoding deficits should include efforts to improve verbal knowledge. PMID:25379253
Direct Data Distribution From Low-Earth Orbit
NASA Technical Reports Server (NTRS)
Budinger, James M.; Fujikawa, Gene; Kunath, Richard R.; Nguyen, Nam T.; Romanofsky, Robert R.; Spence, Rodney L.
1997-01-01
NASA Lewis Research Center (LeRC) is developing the space and ground segment technologies necessary to demonstrate a direct data distribution (D3) system for use in space-to-ground communication links from spacecraft in low-Earth orbit (LEO) to strategically located tracking ground terminals. The key space segment technologies include a K-band (19 GHz) MMIC-based transmit phased array antenna, and a multichannel bandwidth- and power-efficient digital encoder/modulator with an aggregate data rate of 622 Mb/s. Along with small (1.8 meter), low-cost tracking terminals on the ground, the D3 system enables affordable distribution of data to the end user or archive facility through interoperability with commercial terrestrial telecommunications networks. The D3 system is applicable to both government and commercial science and communications spacecraft in LEO. The features and benefits of the D3 system concept are described. Starting with typical orbital characteristics, a set of baseline requirements for representative applications is developed, including requirements for onboard storage and tracking terminals, and sample link budgets are presented. Characteristics of the transmit array antenna and digital encoder/modulator are described. The architecture and components of the tracking terminal are described, including technologies for the next generation terminal. Candidate flights of opportunity for risk mitigation and space demonstration of the D3 features are identified.
NASA Astrophysics Data System (ADS)
Kluiving, Sjoerd; De Ridder, Tim; Van Dasselaar, Marcel; Roozen, Stan; Prins, Maarten; Van Mourik, Jan
2016-04-01
In Medieval times the city of Vlaardingen (the Netherlands) was strategically located at the confluence of three rivers, the Meuse, the Merwede and the Vlaarding. A church stood here as early as the 8th century. In a short period of time Vlaardingen developed into an international trading place, the most important in the former county of Holland. Starting from the 11th century the river Meuse threatened to flood the settlement. These floods were registered in the archives of the fluvisol and were recognised in a multidisciplinary sedimentary analysis of these archives. To secure the future of this vulnerable soil archive, an extensive interdisciplinary investigation (76 mechanical drill holes, grain-size analysis (GSA), thermo-gravimetric analysis (TGA), archaeological remains, soil analysis, dating methods, micromorphology, and microfauna) was begun in 2011 to gain knowledge of the sedimentological and pedological subsurface of the mound as well as of the well-preserved nature of the archaeological evidence. Pedogenic features are recorded with soil-descriptive, micromorphological and geochemical (XRF) analysis. The soil sequence, 5 meters thick, exhibits a complex mix of 'natural' and 'anthropogenic' layering and initial soil formation that makes it possible to distinguish relatively stable periods from periods of active sedimentation. In this paper the results of this large-scale project are demonstrated in a number of cross-sections with interrelated geological, pedological and archaeological stratification. The distinction between natural and anthropogenic layering is based on the occurrence of the chemical elements phosphorus and potassium. A series of four stratigraphic/sedimentary units records the period before and after the flooding disaster. Given the many archaeological remnants and features present in the lower units, we assume that the medieval landscape was drowned while it was still inhabited, in the 12th century AD.
After a final drowning phase in the 13th century, as a reaction to it, inhabitants started to raise the surface.
NASA Astrophysics Data System (ADS)
Chromá, Kateřina
2014-05-01
Meteorological and hydrological extremes (hydrometeorological extremes - HMEs) cause great material damage or even loss of human life at present, just as they did in the past. To study their temporal and spatial variability in periods with only natural forcing factors, compared with periods that also combine anthropogenic effects, it is essential to have the longest possible series of HMEs. In the Czech Lands (now the Czech Republic), systematic meteorological and hydrological observations generally started in the latter half of the 19th century. Therefore, in order to create long-term series of such extremes, it is necessary to search for other sources of information. Different types of documentary evidence are used in historical climatology and hydrology, represented by sources such as annals, chronicles, diaries, private letters and newspapers. Besides these, institutional documentary evidence (of economic and administrative character) has particular importance (e.g. taxation records). Documents in family archives represent a further promising source of data related to HMEs. The documents kept by the most important noble families in Moravia (e.g. the Liechtensteins and Dietrichsteins) are located in the Moravian Land Archives in Brno. Besides data about family members, industrial and agricultural business, military questions, travelling and social events, they contain direct or indirect information about HMEs. This includes descriptions of catastrophic phenomena on particular demesnes (mainly with respect to damage) as well as correspondence related to tax reductions (i.e. they can overlap with taxation records of particular estates). This contribution shows the potential of family archives, up to now only rarely used, as a source of information about HMEs that may extend our knowledge of them. Several examples of such documents are presented.
The study is a part of the research project "Hydrometeorological extremes in Southern Moravia derived from documentary evidence" supported by the Grant Agency of the Czech Republic, reg. no. 13-19831S.
The Golosiiv on-line plate archive database, management and maintenance
NASA Astrophysics Data System (ADS)
Pakuliak, L.; Sergeeva, T.
2007-08-01
We intend to create an online version of the database of the MAO NASU plate archive as VO-compatible structures, in accordance with principles developed by the International Virtual Observatory Alliance, in order to make it available to the world astronomical community. The online version of the log-book database is constructed by means of MySQL+PHP. The data management system provides a user interface, supports detailed traditional form-filling radial searches for plates, yields auxiliary samplings and listings of each collection, and permits browsing of the detailed descriptions of collections. The administrative tool allows the database administrator to correct data, to add new data sets, and to control the integrity and consistency of the database as a whole. The VO-compatible database is currently being constructed according to the demands and principles of international data archives, and has to be strongly generalized in order to support data mining through standard interfaces and to best fit the demands of the WFPDB Group for databases of plate catalogues. On-going enhancements of the database toward the WFPDB bring the problem of data verification to the forefront, as it demands a high degree of data reliability. Data verification is practically endless and inseparable from data management, owing to the diverse nature of data errors and hence the variety of ways of identifying and fixing them. The current status of the MAO NASU glass archive forces activity in both directions simultaneously: enhancement of the log-book database with new sets of observational data, creation of the generalized database, and cross-identification between them. The VO-compatible version of the database is being supplied with digitized data of plates obtained with a MicroTek ScanMaker 9800 XL TMA.
Scanning is not exhaustive but is conducted selectively within the framework of special projects.
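The form-filling radial (cone) search described above can be sketched in miniature. This is an illustrative sketch only: the table layout, plate IDs and coordinates below are invented, not the actual MAO NASU log-book schema, and the search is done in Python rather than pushed into SQL.

```python
import math
import sqlite3

# Hypothetical plate log-book; column names and values are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plates (plate_id TEXT, ra_deg REAL, dec_deg REAL, epoch TEXT)")
con.executemany("INSERT INTO plates VALUES (?,?,?,?)", [
    ("GUA040-000123", 10.68, 41.27, "1967-09-12"),  # near M31
    ("GUA040-000456", 83.82, -5.39, "1972-01-03"),  # near M42
    ("GUA040-000789", 10.10, 40.90, "1969-10-02"),
])

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def radial_search(ra, dec, radius_deg):
    """Return plate IDs whose centres lie within radius_deg of (ra, dec)."""
    rows = con.execute("SELECT plate_id, ra_deg, dec_deg FROM plates").fetchall()
    return [pid for pid, r, d in rows if angular_sep(ra, dec, r, d) <= radius_deg]

hits = radial_search(10.68, 41.27, 1.0)   # both M31-field plates match
```

A production service would evaluate the separation server-side so that only matching rows cross the network, which is exactly the argument for database-backed rather than file-based archives.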
Robertson, Tim; Döring, Markus; Guralnick, Robert; Bloom, David; Wieczorek, John; Braak, Kyle; Otegui, Javier; Russell, Laura; Desmet, Peter
2014-01-01
The planet is experiencing an ongoing global biodiversity crisis. Measuring the magnitude and rate of change more effectively requires access to organized, easily discoverable, and digitally-formatted biodiversity data, both legacy and new, from across the globe. Assembling this coherent digital representation of biodiversity requires the integration of data that have historically been analog, dispersed, and heterogeneous. The Integrated Publishing Toolkit (IPT) is a software package developed to support biodiversity dataset publication in a common format. The IPT’s two primary functions are to 1) encode existing species occurrence datasets and checklists, such as records from natural history collections or observations, in the Darwin Core standard to enhance interoperability of data, and 2) publish and archive data and metadata for broad use in a Darwin Core Archive, a set of files following a standard format. Here we discuss the key need for the IPT, how it has developed in response to community input, and how it continues to evolve to streamline and enhance the interoperability, discoverability, and mobilization of new data types beyond basic Darwin Core records. We close with a discussion of how the IPT has impacted the biodiversity research community, how it enhances data publishing in more traditional journal venues, new features implemented in the latest version of the IPT, and future plans for further enhancements. PMID:25099149
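The Darwin Core Archive named above is a zip file pairing a tabular core data file with a meta.xml descriptor. The sketch below builds a minimal one with the standard library; the occurrence records are invented, and only a few Darwin Core terms are shown (real IPT output also includes EML metadata, omitted here).

```python
import io
import zipfile

# Toy occurrence records; occurrenceID/scientificName/country are
# genuine Darwin Core terms, the values are invented.
occurrences = [
    {"occurrenceID": "urn:occ:1", "scientificName": "Puma concolor", "country": "US"},
    {"occurrenceID": "urn:occ:2", "scientificName": "Lynx rufus", "country": "US"},
]
fields = ["occurrenceID", "scientificName", "country"]

# Core data file: tab-separated, one row per occurrence.
core_rows = "\n".join("\t".join(r[f] for f in fields) for r in occurrences)

# meta.xml tells a consumer how to parse the core file.
meta_xml = """<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core encoding="UTF-8" fieldsTerminatedBy="\\t" linesTerminatedBy="\\n"
        rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
    <files><location>occurrence.txt</location></files>
    <id index="0"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/country"/>
  </core>
</archive>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("occurrence.txt", core_rows)
    zf.writestr("meta.xml", meta_xml)

archive_names = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
```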
NASA Astrophysics Data System (ADS)
Schwarz, Joseph; Raffi, Gianni
2002-12-01
The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000m in the Chilean Atacama desert. The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide. Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.
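The XML pass-by-value idea above can be sketched briefly: one subsystem serializes an entity object to an XML string, the string crosses the subsystem boundary (in ALMA, via CORBA; here, a plain function call stands in), and the receiver rebuilds an independent copy without sharing any DBMS. The "SchedBlock" element and its fields are invented for illustration, not the actual ALMA entity schema.

```python
import xml.etree.ElementTree as ET

def to_xml(entity: dict) -> str:
    """Serialize a flat entity object to an XML string (the 'wire' form)."""
    root = ET.Element("SchedBlock")
    for key, value in entity.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(payload: str) -> dict:
    """Rebuild the entity on the receiving side from the XML string."""
    root = ET.fromstring(payload)
    return {child.tag: child.text for child in root}

original = {"id": "sb-0042", "piName": "Smith", "frequencyGHz": "345.8"}
wire = to_xml(original)       # what crosses the subsystem boundary
restored = from_xml(wire)     # receiver's independent copy
```

Because only the XML text is exchanged, the persistence layer behind either side can change without affecting the other, which is the DBMS-independence the architecture aims for.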
Forde, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Wiese, Dana S.
2010-01-01
In June of 2007, the U.S. Geological Survey (USGS) conducted a geophysical survey offshore of the Chandeleur Islands, Louisiana, in cooperation with the Louisiana Department of Natural Resources (LDNR) as part of the USGS Barrier Island Comprehensive Monitoring (BICM) project. This project is part of a broader study focused on Subsidence and Coastal Change (SCC). The purpose of the study was to investigate the shallow geologic framework and monitor the environmental impacts of Hurricane Katrina (Louisiana landfall was on August 29, 2005) on the Gulf Coast's barrier island chains. This report serves as an archive of unprocessed digital 512i and 424 Chirp sub-bottom profile data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, observer's logbook, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report. The USGS St. Petersburg Coastal and Marine Science Center (SPCMSC) assigns a unique identifier to each cruise or field activity. For example, 07SCC01 indicates the data were collected in 2007 for the Subsidence and Coastal Change (SCC) study during the first field activity for that study in that calendar year. Refer to http://walrus.wr.usgs.gov/infobank/programs/html/definition/activity.html for a detailed description of the method used to assign the field activity identification (ID). All Chirp systems use a signal of continuously varying frequency; the Chirp systems used during this survey produce high-resolution, shallow-penetration profile images beneath the seafloor. The towfish is a sound source and receiver, which is typically towed 1-2 m below the sea surface.
The acoustic energy is reflected at density boundaries (such as the seafloor or sediment layers beneath the seafloor), detected by a receiver, and recorded by a PC-based seismic acquisition system. This process is repeated at timed intervals (for example, 0.125 s) and recorded for specific intervals of time (for example, 50 ms). In this way, a two-dimensional vertical image of the shallow geologic structure beneath the ship track is produced. Figure 1 displays the acquisition geometry. Refer to table 1 for a summary of acquisition parameters. See the digital FACS equipment log (11-KB PDF) for details about the acquisition equipment used. Table 2 lists trackline statistics. Scanned images of the handwritten FACS logs and handwritten science logbook (449-KB PDF) are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y rev 1 format (Norris and Faichney, 2002); ASCII character encoding is used for the first 3,200 bytes of the card image header instead of the SEG-Y rev 0 (Barry and others, 1975) EBCDIC format. The SEG-Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG-Y Data page for download instructions. The web version of this archive does not contain the SEG-Y trace files. These files are very large and would require extremely long download times. To obtain the complete DVD archive, contact USGS Information at 1-888-ASK-USGS or infoservices@usgs.gov. The printable profiles provided here are GIF images that were processed and gained using SU software; refer to the Software page for links to example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992). The processed SEG-Y data were also exported to Chesapeake Technology, Inc. 
(CTI) SonarWeb software to produce an interactive version of the profile that allows the user to obtain a geographic location and depth from the profile for a given cursor position. This information is displayed in the status bar of the browser.
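The SEG-Y layout described above (a 3,200-byte ASCII "card image" textual header of 40 cards x 80 columns, followed by a 400-byte binary header) can be inspected with a few lines of code. The sketch below builds a tiny synthetic file standing in for the USGS trace data; the survey text on the first card is invented, and the big-endian sample-interval field at bytes 3217-3218 (1-based) follows the SEG-Y rev 1 layout.

```python
import struct

# Build a synthetic SEG-Y header block: 40 ASCII cards of 80 columns.
cards = ["C{:2d} ".format(i + 1).ljust(80) for i in range(40)]
cards[0] = "C 1 SURVEY 07SCC01 CHIRP SUB-BOTTOM".ljust(80)
textual = "".join(cards).encode("ascii")       # exactly 3200 bytes

binary = bytearray(400)                        # 400-byte binary header
struct.pack_into(">H", binary, 16, 125)        # sample interval: 125 us

segy = textual + bytes(binary)

# Reading it back: slice the cards, then unpack the binary field.
header_cards = [segy[i * 80:(i + 1) * 80].decode("ascii") for i in range(40)]
sample_interval_us = struct.unpack_from(">H", segy, 3200 + 16)[0]
```

Real files from this archive would be opened the same way, except that rev 0 files use EBCDIC rather than ASCII for the textual header, as the report notes.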
Experiences with making diffraction image data available: what metadata do we need to archive?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroon-Batenburg, Loes M. J., E-mail: l.m.j.kroon-batenburg@uu.nl; Helliwell, John R.; Utrecht University, Padualaan 8, 3584 CH Utrecht
A local raw ‘diffraction data images’ archive was made available and some data sets were retrieved and reprocessed, which led to analysis of the anomalous difference densities of two partially occupied Cl atoms in cisplatin as well as a re-evaluation of the resolution cutoff in these diffraction data. General questions on storing raw data are discussed. It is also demonstrated that one often needs unambiguous prior knowledge to read the (binary) detector format and the setup of goniometer geometries. Recently, the IUCr (International Union of Crystallography) initiated the formation of a Diffraction Data Deposition Working Group with the aim of developing standards for the representation of raw diffraction data associated with the publication of structural papers. Archiving of raw data serves several goals: to improve the record of science, to verify the reproducibility and to allow detailed checks of scientific data, safeguarding against fraud and allowing reanalysis with future improved techniques. A means of studying this issue is to submit exemplar publications with associated raw data and metadata. In a recent study of the binding of cisplatin and carboplatin to histidine in lysozyme crystals under several conditions, the possible effects of the equipment and X-ray diffraction data-processing software on the occupancies and B factors of the bound Pt compounds were compared. Initially, 35.3 GB of data were transferred from Manchester to Utrecht to be processed with EVAL. A detailed description and discussion of the availability of metadata was published in a paper that was linked to a local raw data archive at Utrecht University and also mirrored at the TARDIS raw diffraction data archive in Australia. By making these raw diffraction data sets available with the article, it is possible for the diffraction community to make their own evaluation. This led one of the authors of XDS (K.
Diederichs) to re-integrate the data from crystals that supposedly solely contained bound carboplatin, resulting in the analysis of partially occupied chlorine anomalous electron densities near the Pt-binding sites and the use of several criteria to more carefully assess the diffraction resolution limit. General arguments for archiving raw data, the possibilities of doing so, and the resources required are discussed. The problems associated with a partially unknown experimental setup, whose details should preferably be available as metadata, are also discussed. Current thoughts on data compression are summarized; compression could be a solution especially for fine-sliced pixel-device data sets that may otherwise present an unmanageable amount of data.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Guld, Mark O.; Thies, Christian; Fischer, Benedikt; Keysers, Daniel; Kohnen, Michael; Schubert, Henning; Wein, Berthold B.
2003-05-01
Picture archiving and communication systems (PACS) aim to efficiently provide the radiologists with all images in a suitable quality for diagnosis. Modern standards for digital imaging and communication in medicine (DICOM) comprise alphanumerical descriptions of study, patient, and technical parameters. Currently, this is the only information used to select relevant images within PACS. Since textual descriptions insufficiently describe the great variety of details in medical images, content-based image retrieval (CBIR) is expected to have a strong impact when integrated into PACS. However, existing CBIR approaches usually are limited to a distinct modality, organ, or diagnostic study. In this state-of-the-art report, we present first results implementing a general approach to content-based image retrieval in medical applications (IRMA) and discuss its integration into PACS environments. Usually, a PACS consists of a DICOM image server and several DICOM-compliant workstations, which are used by radiologists for reading the images and reporting the findings. Basic IRMA components are the relational database, the scheduler, and the web server, which all may be installed on the DICOM image server, and the IRMA daemons running on distributed machines, e.g., the radiologists' workstations. These workstations can also host the web-based front-ends of IRMA applications. Integrating CBIR and PACS, a special focus is put on (a) location and access transparency for data, methods, and experiments, (b) replication transparency for methods in development, (c) concurrency transparency for job processing and feature extraction, (d) system transparency at method implementation time, and (e) job distribution transparency when issuing a query. Transparent integration will have a certain impact on diagnostic quality supporting both evidence-based medicine and case-based reasoning.
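At its core, the CBIR step described above reduces every archived image to a feature vector and ranks the archive by distance to a query vector. The sketch below shows that core in miniature; the image IDs and three-component feature vectors are invented stand-ins for whatever features the extraction daemons compute.

```python
def euclidean(a, b):
    """Distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical archive: image ID -> precomputed feature vector.
archive = {
    "chest-xray-001": [0.8, 0.1, 0.3],
    "hand-xray-042": [0.2, 0.9, 0.5],
    "chest-xray-007": [0.7, 0.2, 0.2],
}

def retrieve(query_vec, k=2):
    """Return the k image IDs closest to the query feature vector."""
    ranked = sorted(archive, key=lambda img: euclidean(archive[img], query_vec))
    return ranked[:k]

top = retrieve([0.78, 0.12, 0.28])   # resembles the chest images
```

In the IRMA design this ranking would run as distributed jobs over the daemons rather than a single in-memory loop, but the retrieval semantics are the same.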
Space environment data storage and access: lessons learned and recommendations for the future
NASA Astrophysics Data System (ADS)
Evans, Hugh; Heynderickx, Daniel
2012-07-01
With the ever increasing volume of space environment data available at present and planned for the near future, the demands on data storage and access methods are increasing as well. In addition, continued access to historical, archived data remains crucial. On the basis of many years of experience, the authors identify the following issues as important for continued and efficient handling of datasets now and in the future. The huge data volumes currently or very soon available from a number of space missions will limit direct Internet download access to even relatively short epoch ranges of data. Therefore, data providers should establish or extend standardised data (post-) processing services so that only data query results need to be downloaded. Although a single standardised data format will in all likelihood remain a utopia, data providers should at least include extensive metadata with their data products, according to established standards and practices (e.g. ISTP, SPASE); standardisation of (sets of) metadata greatly facilitates data mining and querying. The use of SQL database storage should be considered instead of, or in parallel with, classic storage of data files. The use of SQL does away with file parsing and processing, while at the same time standard access protocols can be used to (remotely) connect to such data repositories. Many data holdings still lack extensive descriptions of data provenance (e.g. instrument descriptions), content and format; unfortunately, detailed data information is usually rejected by scientific and technical journals. Re-processing of historical archived datasets into modern formats, making them easily available and usable, is urgently required, as knowledge is being lost. A global data directory has still not been achieved; policy makers should enforce stricter rules for "broadcasting" dataset information.
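The "SQL instead of files" recommendation above can be sketched concretely: once measurements sit in a database with standardised metadata, a client issues a query and downloads only the result set, not whole data files. The schema, mission name and flux values below are illustrative, not a real archive layout.

```python
import sqlite3

# Hypothetical table of particle-flux measurements with metadata columns.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE flux (
    mission TEXT, instrument TEXT, epoch TEXT, energy_mev REAL, flux REAL)""")
con.executemany("INSERT INTO flux VALUES (?,?,?,?,?)", [
    ("GOES-13", "EPS", "2012-03-07T00:00", 10.0, 4500.0),
    ("GOES-13", "EPS", "2012-03-07T01:00", 10.0, 3900.0),
    ("GOES-13", "EPS", "2012-06-01T00:00", 10.0, 1.2),
])

# Server-side post-processing: only the aggregate crosses the network,
# instead of a file covering the whole epoch range.
peak = con.execute(
    """SELECT MAX(flux) FROM flux
       WHERE mission = ? AND epoch BETWEEN ? AND ?""",
    ("GOES-13", "2012-03-07T00:00", "2012-03-08T00:00")).fetchone()[0]
```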
Symbolic Knowledge Processing for the Acquisition of Expert Behavior: A Study in Medicine.
1984-05-01
information. It provides a model for this type of study, suggesting a different approach to the problem of learning and efficiency of knowledge-based... flow of information 2.2. Scope and description of the subsystems. Three subsystems perform distinct operations using the preceding knowledge sources... which actually yields a new knowledge representation where new external information is encoded in the combination and ordering of elements of the
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This volume and its appendixes supplement the Advisory Committee's final report by reporting how we went about looking for information concerning human radiation experiments and intentional releases, a description of what we found and where we found it, and a finding aid for the information that we collected. This volume begins with an overview of federal records, including general descriptions of the types of records that have been useful and how the federal government handles these records. This is followed by an agency-by-agency account of the discovery process and descriptions of the records reviewed, together with instructions on how to obtain further information from those agencies. There is also a description of other sources of information that have been important, including institutional records, print resources, and nonprint media and interviews. The third part contains brief accounts of ACHRE's two major contemporary survey projects (these are described in greater detail in the final report and another supplemental volume) and other research activities. The final section describes how the ACHRE information collections were managed and the records that ACHRE created in the course of its work; this constitutes a general finding aid for the materials deposited with the National Archives. The appendices provide brief references to federal records reviewed, descriptions of the accessions that comprise the ACHRE Research Document Collection, and descriptions of the documents selected for individual treatment. Also included are an account of the documentation available for ACHRE meetings, brief abstracts of the almost 4,000 experiments individually described by ACHRE staff, a full bibliography of secondary sources used, and other information.
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of FPGA hardware resources required, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision; precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16:55-108.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
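The Alamouti code singled out above encodes two complex symbols s1, s2 across two antennas and two symbol periods using the orthogonal transmission matrix [[s1, s2], [-s2*, s1*]]. A minimal functional sketch (the hardware designs in the paper implement this mapping in fixed-point VHDL, not Python):

```python
def alamouti_encode(s1: complex, s2: complex):
    """Return the 2x2 Alamouti transmission block.

    Rows are symbol periods, columns are transmit antennas:
      period 1: antenna 1 sends s1, antenna 2 sends s2
      period 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
    """
    return [
        [s1, s2],
        [-s2.conjugate(), s1.conjugate()],
    ]

block = alamouti_encode(1 + 1j, 1 - 1j)
```

The orthogonality of the two rows is what lets a single-antenna receiver separate s1 and s2 with linear processing, giving full diversity at full rate for two antennas.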
Program Predicts Time Courses of Human/Computer Interactions
NASA Technical Reports Server (NTRS)
Vera, Alonso; Howes, Andrew
2005-01-01
CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
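The scheduling idea behind the PERT-style output can be sketched as a critical-path computation: each operator has a duration and sequential dependencies, operators without a mutual dependency may overlap, and the predicted task time is the length of the longest dependency chain. The operators and millisecond durations below are invented for illustration, not CPM X's actual operator set.

```python
from functools import lru_cache

# Hypothetical operators: name -> (duration_ms, list of prerequisites).
operators = {
    "perceive-target":   (100, []),
    "cognit-decide":     (50,  ["perceive-target"]),
    "move-hand":         (300, ["cognit-decide"]),
    "perceive-feedback": (100, ["cognit-decide"]),  # overlaps the motor act
}

@lru_cache(maxsize=None)
def earliest_finish(op):
    """Earliest time op can finish, given its prerequisite chain."""
    dur, deps = operators[op]
    return dur + max((earliest_finish(d) for d in deps), default=0)

# Predicted task time = finish time of the last operator on the
# critical path (here perceive-target -> cognit-decide -> move-hand).
predicted_ms = max(earliest_finish(op) for op in operators)
```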
Matallana-Rhoades, Audrey Mary; Corredor-Castro, Juan David; Bonilla-Escobar, Francisco Javier; Mecias-Cruz, Bony Valentina; Mejia de Beldjena, Liliana
2016-09-30
We present the phenotype of a new compound heterozygous mutation (R384X/Q356X) of the gene encoding the enzyme 11-beta-hydroxylase. The patient presented with severe virilization, peripheral hypertension, and early puberty, and was managed with hormone replacement therapy (corticosteroid) and antihypertensive therapy (beta-blocker), resulting in control of the physical changes and of blood pressure. From the phenotypic characteristics of the patient, it is inferred that the R384X mutation carries an additional burden beyond the Q356X mutation, the latter previously described as a cause of 11-beta-hydroxylase deficiency. The description of a new genotype, as in this case, expands understanding of the hereditary burden and helps decipher the various factors that lead to this pathology, as well as the other forms of congenital adrenal hyperplasia (CAH), which present with a broad spectrum of clinical manifestations. This study highlights the importance of a complete description of the patient's CAH genetic profile, as well as that of the parents.
NASA Astrophysics Data System (ADS)
Noren, A.; Brady, K.; Myrbo, A.; Ito, E.
2007-12-01
Lacustrine sediment cores comprise an integral archive for the determination of continental paleoclimate, owing to their potentially high temporal resolution and their ability to resolve spatial variability in climate across vast sections of the globe. Researchers studying these archives now have a large, nationally-funded, public facility dedicated to the support of their efforts. The LRC LacCore Facility, funded by NSF and the University of Minnesota, provides free or low-cost assistance to any portion of research projects, depending on the specific needs of the project. A large collection of field equipment (site survey equipment, coring devices, boats/platforms, water sampling devices) for nearly any lacustrine setting is available for rental, and Livingstone-type corers and drive rods may be purchased. LacCore staff can accompany field expeditions to operate these devices and curate samples, or provide training prior to device rental. The Facility maintains strong connections to experienced shipping agents and customs brokers, which vastly improves transport and importation of samples. In the lab, high-end instrumentation (e.g., multisensor loggers, high-resolution digital linescan cameras) provides a baseline of fundamental analyses before any sample material is consumed. LacCore staff provide support and training in lithological description, including smear-slide, XRD, and SEM analyses. The LRC botanical macrofossil reference collection is a valuable resource for both core description and detailed macrofossil analysis. Dedicated equipment and space for various subsample analyses streamlines these endeavors; subsamples for several analyses may be submitted for preparation or analysis by Facility technicians for a fee (e.g., carbon and sulfur coulometry, grain size, pollen sample preparation and analysis, charcoal, biogenic silica, LOI, freeze drying).
The National Lacustrine Core Repository now curates ~9km of sediment cores from expeditions around the world, and stores metadata and analytical data for all cores processed at the facility. Any researcher may submit sample requests for material in archived cores. Supplies for field (e.g., polycarbonate pipe, endcaps), lab (e.g., sample containers, pollen sample spike), and curation (e.g., D-tubes) are sold at cost. In collaboration with facility users, staff continually develop new equipment, supplies, and procedures as needed in order to provide the best and most comprehensive set of services to the research community.
A Rewritable, Random-Access DNA-Based Storage System.
Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-18
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile medium suitable for both ultrahigh density archival and rewritable storage applications.
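The random-access idea can be illustrated with a toy model: each DNA block carries a unique address sequence, so a selective read (in the wet lab, PCR with address-specific primers) recovers one block without decoding the rest of the archive. The 2-bit base mapping and the address strings below are illustrative simplifications; the paper's actual scheme uses constrained codes with additional structure.

```python
# Naive 2 bits -> 1 base mapping (illustrative, not the paper's code).
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
INV = {v: k for k, v in BASE.items()}

def encode_block(address: str, text: str) -> str:
    """Prefix an address sequence to a base-encoded ASCII payload."""
    bits = "".join(f"{b:08b}" for b in text.encode("ascii"))
    payload = "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))
    return address + payload

def read_block(strands, address):
    """Random access: select one strand by address, decode only it."""
    strand = next(s for s in strands if s.startswith(address))
    bits = "".join(INV[b] for b in strand[len(address):])
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

pool = [encode_block("ACCA", "UIUC"), encode_block("GTTG", "MIT")]
school = read_block(pool, "GTTG")   # decodes one block, not the pool
```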
A Diffusive-Particle Theory of Free Recall
Fumarola, Francesco
2017-01-01
Diffusive models of free recall have been recently introduced in the memory literature, but their potential remains largely unexplored. In this paper, a diffusive model of short-term verbal memory is considered, in which the psychological state of the subject is encoded as the instantaneous position of a particle diffusing over a semantic graph. The model is particularly suitable for studying the dependence of free-recall observables on the semantic properties of the words to be recalled. Besides predicting some well-known experimental features (forward asymmetry, semantic clustering, word-length effect), a novel prediction is obtained on the relationship between the contiguity effect and the syllabic length of words; shorter words, by way of their wider semantic range, are predicted to be characterized by stronger forward contiguity. A fresh analysis of archival free-recall data confirms this prediction. PMID:29085521
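The diffusive picture can be sketched as a random walk: the subject's state hops between neighbouring words on a semantic graph, and the order of first visits is the recall order. The four-word graph below is an invented toy, far smaller than the semantic graphs the model actually uses.

```python
import random

random.seed(3)  # fixed seed so the walk is reproducible

# Toy semantic graph: word -> semantically adjacent words.
graph = {
    "dog":  ["cat", "bone"],
    "cat":  ["dog", "milk"],
    "bone": ["dog"],
    "milk": ["cat"],
}

def recall_order(start, n_steps=50):
    """Diffuse from `start`; a word is 'recalled' on its first visit."""
    order, seen = [], set()
    node = start
    for _ in range(n_steps):
        if node not in seen:
            seen.add(node)
            order.append(node)
        node = random.choice(graph[node])
    return order

sequence = recall_order("dog")
```

In this picture, a word with many neighbours (a wider semantic range) is reached sooner after its predecessor, which is the mechanism behind the predicted link between word length and forward contiguity.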
Indexing method of digital audiovisual medical resources with semantic Web integration.
Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre
2005-03-01
Digitization of audiovisual resources and network capability offer many possibilities, which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Moving Picture Experts Group (MPEG) developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to provide a rich set of standardized tools enabling efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform that enables encoding and gives access to audiovisual resources in streaming mode.
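A Dublin Core description of a video resource, the kind of simple record the proposed indexing system rests on, can be sketched with the standard library. The element values below are invented examples; the dc namespace URI and the element names (title, creator, subject, type, format) are standard Dublin Core terms.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for term, value in [
    ("title", "Laparoscopic cholecystectomy, step 3"),
    ("creator", "University hospital, surgery dept."),
    ("subject", "Cholecystectomy, Laparoscopic"),   # a MeSH heading
    ("type", "MovingImage"),
    ("format", "video/mp4"),
]:
    ET.SubElement(record, f"{{{DC}}}{term}").text = value

xml_record = ET.tostring(record, encoding="unicode")
```

Using a MeSH heading as the dc:subject value is exactly where the conceptual navigation described in the paper plugs in: the controlled vocabulary, not free text, drives retrieval.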
Marsh, Herbert W; Martin, Andrew J; Jackson, Susan
2010-08-01
Based on the Physical Self Description Questionnaire (PSDQ) normative archive (n = 1,607 Australian adolescents), 40 of 70 items were selected to construct a new short form (PSDQ-S). The PSDQ-S was evaluated in a new cross-validation sample of 708 Australian adolescents and four additional samples: 349 Australian elite-athlete adolescents, 986 Spanish adolescents, 395 Israeli university students, and 760 Australian older adults. Across these six groups, the 11 PSDQ-S factors had consistently high reliabilities and invariant factor structures. Study 1, using a missing-by-design variation of multigroup invariance tests, showed invariance across 40 PSDQ-S items and 70 PSDQ items. Study 2 demonstrated factorial invariance over a 1-year interval (test-retest correlations .57-.90; Mdn = .77), and good convergent and discriminant validity in relation to time. Study 3 showed good and nearly identical support for convergent and discriminant validity of PSDQ and PSDQ-S responses in relation to two other physical self-concept instruments.
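Scale reliabilities like those reported here are commonly summarized with Cronbach's alpha; a self-contained sketch with made-up item scores (not PSDQ data):

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of scores per item, all answered by the same
    # respondents in the same order.
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three hypothetical items answered by five respondents.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))  # -> 0.89
```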
Climate Science Performance, Data and Productivity on Titan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L
2015-01-01
Climate Science models are flagship codes for the largest of high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application-level timers and examined through a sizeable run archive. Performance and variability of compute, queue time and ancillary services are examined. As Climate Science advances in the use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected usage of data in motion and at rest, are detailed. The combination of these two topics motivates a description of future systems requirements for DOE Climate Modeling efforts, focusing on the growth of data storage and the network and disk bandwidth required to handle data at an acceptable rate.
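The application-level timing mentioned above can be sketched with a simple context manager that accumulates wall-clock seconds per named code region (a stand-in for illustration, not ACME's actual timing library):

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def app_timer(name):
    # Each named region accumulates wall-clock time across calls, so a
    # run archive of per-region costs can be built up over many runs.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

with app_timer("atmosphere_step"):
    sum(range(100000))  # stand-in for model work

print(sorted(timings))
```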
NASA Technical Reports Server (NTRS)
Noll, C.; Lee, L.; Torrence, M.
2011-01-01
The International Laser Ranging Service (ILRS) website, http://ilrs.gsfc.nasa.gov, is the central source of information for all aspects of the service. The website provides information on the organization and operation of ILRS and descriptions of ILRS components, data, and products. Furthermore, the website provides an entry point to the archive of these data and products available through the data centers. Links are provided to extensive information on the ILRS network stations including performance assessments and data quality evaluations. Descriptions of supported satellite missions (current, future, and past) are provided to aid in station acquisition and data analysis. The current format for the ILRS website has been in use since the early years of the service. Starting in 2010, the ILRS Central Bureau began efforts to redesign the look and feel of the website. The update will allow for a review of the contents, ensuring information is current and useful. This poster will detail the proposed design including specific examples of key sections and webpages.
The Gene Expression Omnibus Database.
Clough, Emily; Barrett, Tanya
2016-01-01
The Gene Expression Omnibus (GEO) database is an international public repository that archives and freely distributes high-throughput gene expression and other functional genomics data sets. Created in 2000 as a worldwide resource for gene expression studies, GEO has evolved with rapidly changing technologies and now accepts high-throughput data for many other data applications, including those that examine genome methylation, chromatin structure, and genome-protein interactions. GEO supports community-derived reporting standards that specify provision of several critical study elements including raw data, processed data, and descriptive metadata. The database not only provides access to data for tens of thousands of studies, but also offers various Web-based tools and strategies that enable users to locate data relevant to their specific interests, as well as to visualize and analyze the data. This chapter includes detailed descriptions of methods to query and download GEO data and use the analysis and visualization tools. The GEO homepage is at http://www.ncbi.nlm.nih.gov/geo/.
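Programmatic queries of the kind this chapter describes typically go through NCBI E-utilities; the sketch below only constructs the search URL (GEO DataSets is `db=gds`; the `[DataSet Type]` field qualifier is an assumption to verify against the E-utilities documentation):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def geo_search_url(term, retmax=20):
    # Build an esearch request against the GEO DataSets database (db=gds).
    # Fetching this URL returns an XML list of matching GEO UIDs, which can
    # then be passed to esummary/efetch.
    params = {"db": "gds", "term": term, "retmax": retmax}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

url = geo_search_url('breast cancer AND "expression profiling by array"[DataSet Type]')
print(url)
```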
A Prototype Publishing Registry for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Williamson, R.; Plante, R.
2004-07-01
In the Virtual Observatory (VO), a registry helps users locate resources, such as data and services, in a distributed environment. A general framework for VO registries is now under development within the International Virtual Observatory Alliance (IVOA) Registry Working Group. We present a prototype of one component of this framework: the publishing registry. The publishing registry allows data providers to expose metadata descriptions of their resources to the VO environment. Searchable registries can harvest the metadata from many publishing registries and make them searchable by users. We have developed a prototype publishing registry that data providers can install at their sites to publish their resources. The descriptions are exposed using the Open Archives Initiative (OAI) Protocol for Metadata Harvesting. Automating the input of metadata into registries is critical when a provider wishes to describe many resources. We illustrate various strategies for such automation, both currently in use and planned for the future. We also describe how future versions of the registry can adapt automatically to evolving metadata schemas for describing resources.
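An OAI-PMH harvesting request is just a base URL plus query parameters; a sketch with a hypothetical registry endpoint (the verbs and the `metadataPrefix`/`resumptionToken` arguments follow the OAI-PMH specification):

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **kwargs):
    # Construct an OAI-PMH request URL; a harvester fetches it and parses
    # the XML response (verbs include Identify, ListRecords, GetRecord).
    params = {"verb": verb, **{k: v for k, v in kwargs.items() if v is not None}}
    return f"{base_url}?{urlencode(params)}"

base = "http://example.org/oai"  # hypothetical publishing-registry endpoint
first = oai_request(base, "ListRecords", metadataPrefix="oai_dc")
# Per the spec, a resumptionToken is an exclusive argument on follow-up pages.
next_page = oai_request(base, "ListRecords", resumptionToken="token123")
print(first)
print(next_page)
```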
NASA Astrophysics Data System (ADS)
Valenzuela, P.; Domínguez-Cuesta, M. J.; Jiménez-Sánchez, M.; Mora García, M. A.
2015-12-01
Due to its geological and climatic conditions, landslides are very common and widespread phenomena in the Principality of Asturias (NW Spain), causing economic losses and, sometimes, human victims. In this scenario, temporal prediction of instabilities becomes particularly important. Although previous knowledge indicates that rainfall is the main trigger, the lack of data hinders proper temporal forecasting of landslides in the region. To resolve this deficiency, a new landslide inventory is being developed: the BAPA (Base de datos de Argayos del Principado de Asturias, the Principality of Asturias Landslide Database). Data collection is mainly performed by gathering local newspaper archives, with special emphasis on the registration of spatial and temporal information. Moreover, a BAPA app and a BAPA website (http://geol.uniovi.es/BAPA) have been developed to easily obtain additional information from authorities and private individuals. At present, the dataset covers the period 1980-2015, registering more than 2000 individual landslide events. Fifty-two per cent of the records provide accurate dates, showing the usefulness of press archives as temporal records. The use of free cartographic servers, such as Google Maps, Google Street View and Iberpix (Government of Spain), combined with the spatial descriptions and photographs contained in the press releases, makes it possible to determine the exact location in fifty-eight per cent of the records. Field work performed to date has allowed validation of the methodology proposed to obtain spatial data. In addition, the BAPA database contains information about source, landslide typology, triggers, damage and costs.
The Paleoclimate Uncertainty Cascade: Tracking Proxy Errors Via Proxy System Models.
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Dee, S. G.; Evans, M. N.; Adkins, J. F.
2014-12-01
Paleoclimatic observations are, by nature, imperfect recorders of climate variables. Empirical approaches to their calibration are challenged by the presence of multiple sources of uncertainty, which may confound the interpretation of signals and the identifiability of the noise. In this talk, I will demonstrate the utility of proxy system models (PSMs, Evans et al, 2013, 10.1016/j.quascirev.2013.05.024) to quantify the impact of all known sources of uncertainty. PSMs explicitly encode the mechanistic knowledge of the physical, chemical, biological and geological processes from which paleoclimatic observations arise. PSMs may be divided into sensor, archive and observation components, all of which may conspire to obscure climate signals in actual paleo-observations. As an example, we couple a PSM for the δ18O of speleothem calcite to an isotope-enabled climate model (Dee et al, submitted) to analyze the potential of this measurement as a proxy for precipitation amount. A simple soil/karst model (Partin et al, 2013, 10.1130/G34718.1) is used as sensor model, while a hiatus-permitting chronological model (Haslett & Parnell, 2008, 10.1111/j.1467-9876.2008.00623.x) is used as part of the observation model. This subdivision allows us to explicitly model the transformation from precipitation amount to speleothem calcite δ18O as a multi-stage process via a physical and chemical sensor model, and a stochastic archive model. By illustrating the PSM's behavior within the context of the climate simulations, we show how estimates of climate variability may be affected by each submodel's transformation of the signal. By specifying idealized climate signals (periodic vs. episodic, slow vs. fast) to the PSM, we investigate how frequency and amplitude patterns are modulated by sensor and archive submodels.
To the extent that the PSM and the climate models are representative of real-world processes, the results may help us more accurately interpret existing paleodata, characterize their uncertainties, and design sampling strategies that exploit their strengths while mitigating their weaknesses.
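The sensor/archive/observation decomposition can be illustrated with a toy pseudoproxy pipeline (the slope, smoothing window, and noise level below are illustrative assumptions, not the calibrated models cited above):

```python
import math
import random

def sensor(precip):
    # Toy sensor model: calcite d18O decreases with precipitation amount
    # ("amount effect"); the slope is illustrative, not calibrated.
    return [-0.3 * p for p in precip]

def archive(signal, smooth=3):
    # Toy archive model: karst mixing smooths the signal over `smooth` years.
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - smooth + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def observe(signal, noise_sd=0.1, seed=0):
    # Toy observation model: analytical noise on the archived values.
    rng = random.Random(seed)
    return [x + rng.gauss(0, noise_sd) for x in signal]

years = range(100)
precip = [10 + 2 * math.sin(2 * math.pi * t / 20) for t in years]  # periodic forcing
pseudoproxy = observe(archive(sensor(precip)))
print(len(pseudoproxy))
```

Running idealized signals through each stage shows, as in the abstract, how amplitude is damped by the archive smoothing and obscured by observation noise.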
Metadata Design in the New PDS4 Standards - Something for Everybody
NASA Astrophysics Data System (ADS)
Raugh, Anne C.; Hughes, John S.
2015-11-01
The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework.
Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users, and data preparers.
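The namespace-delegation idea can be sketched as a registry that only lets a namespace's designated steward define attributes in it (the class, namespace names, and attributes below are hypothetical illustrations, not PDS4 software):

```python
# Toy sketch of namespace delegation: each namespace ("data dictionary")
# has a designated steward that controls its attribute definitions.
class Registry:
    def __init__(self):
        self.namespaces = {}

    def delegate(self, prefix, steward):
        self.namespaces[prefix] = {"steward": steward, "attributes": {}}

    def define(self, prefix, attribute, definition, steward):
        ns = self.namespaces[prefix]
        if steward != ns["steward"]:
            raise PermissionError(f"{steward} is not the steward of '{prefix}'")
        ns["attributes"][attribute] = definition

reg = Registry()
reg.delegate("pds", "PDS Engineering Node")    # core namespace
reg.delegate("cassini", "Cassini mission")     # hypothetical mission namespace
reg.define("cassini", "spacecraft_clock_count",
           "SCLK at observation start", "Cassini mission")
print(sorted(reg.namespaces))
```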
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard, M.A.; Sommer, S.C.
1995-04-01
AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.
Observations of Comets and Eclipses in the Andes
NASA Astrophysics Data System (ADS)
Ziółkowski, Mariusz
There is no doubt that the Incas possessed a system for observing and interpreting unusual astronomical phenomena, such as eclipses or comets. References to it, however, are scarce, often anecdotal in nature, and are not collected into any coherent "Inca observation catalog". The best documented of such events are the "Ataw Wallpa's comet", seen in Cajamarca in July 1533, and the solar eclipse that in 1543 prevented conquistador Lucas Martínez from discovering the rich silver mines in northern Chile. Archived descriptions of the Andean population's reaction to these phenomena indicate that they were treated as extremely important omens that should not, under any circumstances, be ignored.
An overview of the catalog manager
NASA Technical Reports Server (NTRS)
Irani, Frederick M.
1986-01-01
The Catalog Manager (CM) is being used at the Goddard Space Flight Center in conjunction with the Land Analysis System (LAS) running under the Transportable Applications Executive (TAE). CM maintains a catalog of file names for all users of the LAS system. The catalog provides a cross-reference between TAE user file names and fully qualified host-file names. It also maintains information about the content and status of each file. A brief history of CM development is given and a description of naming conventions, catalog structure and file attributes, and archive/retrieve capabilities is presented. General user operation and the LAS user scenario are also discussed.
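The cross-referencing function of CM can be sketched as a small catalog keyed by user file name (the paths and status values below are made up for illustration):

```python
# Toy catalog cross-referencing TAE user file names to fully qualified
# host file names, with per-file status attributes.
catalog = {}

def register(user_name, host_path, status="online"):
    catalog[user_name] = {"host_path": host_path, "status": status}

def resolve(user_name):
    return catalog[user_name]["host_path"]

def archive_file(user_name):
    # Mark a file as moved to offline archive storage.
    catalog[user_name]["status"] = "archived"

register("LANDSAT_SCENE1", "/las/data/u123/landsat_scene1.img")
archive_file("LANDSAT_SCENE1")
print(resolve("LANDSAT_SCENE1"), catalog["LANDSAT_SCENE1"]["status"])
```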
Poltronieri, Elisabetta; Truccolo, Ivana; Di Benedetto, Corrado; Castelli, Mauro; Mazzocut, Mauro; Cognetti, Gaetana
2010-12-20
The Open Archives Initiative (OAI) refers to a movement started around the 1990s to guarantee free access to scientific information by removing the barriers to research results, especially those related to ever-increasing journal subscription prices. This new paradigm has reshaped the scholarly communication system and is closely connected to the build-up of institutional repositories (IRs), conceived for the benefit of scientists and research bodies as a means to retain possession of their own literary production. IRs are high-value tools that permit authors to gain visibility by enabling rapid access to scientific material (not only publications), thus increasing impact (citation rate) and permitting a multidimensional assessment of research findings. A survey was conducted in March 2010 mainly to explore the systems for archiving research findings adopted by the Italian Scientific Institutes for Research, Hospitalization and Health Care (IRCCS) of the oncology area within the Italian National Health Service (Servizio Sanitario Nazionale, SSN). They were asked to respond to a questionnaire intended to collect data about institutional archives, metadata formats and the posting of full-text documents. The enquiry also concerned the perceived role of the institutional repository DSpace ISS, built by the Istituto Superiore di Sanità (ISS) and based on an XML scheme for encoding metadata. Such a repository aims to act as a unique reference point for the biomedical information produced by the Italian research institutions. An in-depth analysis was also performed on the collections of information material addressed to patients produced by the institutions surveyed. Six of the nine institutions responded to the survey. The results reveal the use of different practices and standards among the institutions concerning the type of documentation collected, the software adopted, the use and format of metadata, and the conditions of accessibility to the IRs.
The Italian research institutions in the field of oncology are taking their first steps towards the philosophy of OA. The main effort should be the implementation of common procedures, also in order to connect scientific publications to researchers' curricula. In this framework, an important effort is represented by the ISS project aimed at providing a common interface allowing migration of data from partner institutions to the OA-compliant repository DSpace ISS.
NASA Astrophysics Data System (ADS)
Koppers, A. A.; Staudigel, H.; Mills, H.; Keller, M.; Wallace, A.; Bachman, N.; Helly, J.; Helly, M.; Miller, S. P.; Massell Symons, C.
2004-12-01
To bridge the gap between Earth science teachers, librarians, scientists and data archive managers, we have started the ERESE project that will create, archive and make available "Enduring Resources in Earth Science Education" through information technology (IT) portals. In the first phase of this National Science Digital Library (NSDL) project, we are focusing on the development of these ERESE resources for middle and high school teachers to be used in lesson plans with "plate tectonics" and "magnetics" as their main theme. In this presentation, we will show how these new ERESE resources are being generated, how they can be uploaded via online web wizards, how they are archived, how we make them available via the EarthRef.org Digital Archive (ERDA) and Reference Database (ERR), and how they relate to the SIOExplorer database containing data objects for all seagoing cruises carried out by the Scripps Institution of Oceanography. The EarthRef.org web resource uses the vision of a "general description" of the Earth as a geological system to provide an IT infrastructure for the Earth sciences. This emphasizes the marriage of the "scientific process" (and its results) with an educational cyber-infrastructure for teaching Earth sciences, on any level, from middle school to college and graduate levels. Eight different databases reside under EarthRef.org, of which ERDA holds any digital object that has been uploaded for free by other scientists, teachers and students, while the ERR holds more than 80,000 publications. For more than 1,500 of these publications, this latter database makes available for downloading JPG/PDF images of the abstracts, data tables, methods and appendices, together with their digitized contents in Microsoft Word and Excel format. Both holdings are being used to store the ERESE objects that are being generated by a group of undergraduate students majoring in the Environmental Systems (ESYS) program at UCSD with an emphasis on the Earth Sciences.
These students perform library and internet research in order to design and generate these "Enduring Resources in Earth Science Education" that they test by closely interacting with the research faculty at the Scripps Institution of Oceanography. Typical ERESE resources can be diagrams, model cartoons, maps, data sets for analyses, and glossary items and essays to explain certain Earth Science concepts and are ready to be used in the classroom.
Data Management and Archiving - a Long Process
NASA Astrophysics Data System (ADS)
Gebauer, Petra; Bertelmann, Roland; Hasler, Tim; Kirchner, Ingo; Klump, Jens; Mettig, Nora; Peters-Kottig, Wolfgang; Rusch, Beate; Ulbricht, Damian
2014-05-01
Implementing policies for research data management, up to and including data archiving, at university institutions takes a long time. Even though, especially in the geosciences, most scientists know how to analyze different sorts of data, present statistical results and write publications, sometimes based on big data records, only some of them manage their data in a standardized manner. Much more often they have learned how to measure and generate large volumes of data than to document these measurements and preserve them for the future. Changing staff and limited funding make this work more difficult, but it is essential in a progressively developing digital and networked world. Results from the project EWIG (translates to: Developing workflow components for long-term archiving of research data in geosciences), funded by the Deutsche Forschungsgemeinschaft, will help with this theme. Together with the project partners Deutsches GeoForschungsZentrum Potsdam and Konrad-Zuse-Zentrum für Informationstechnik Berlin, a workflow was developed to transfer continuously recorded data from a meteorological city monitoring network into a long-term archive. This workflow includes quality assurance of the data as well as the description of metadata and the use of tools to prepare data packages for long-term archiving. It will serve as an exemplary model for other institutions working with similar data. The development of this workflow is closely intertwined with the educational curriculum at the Institut für Meteorologie. Designing modules to run quality checks on meteorological time series measured every minute and preparing metadata are tasks in current bachelor theses. Students will also test the usability of the generated working environment. Based on these experiences, a practical guideline for integrating research data management in curricula will be one of the results of this project, for postgraduates as well as for younger students.
Especially at the beginning of the scientific career it is necessary to become familiar with all issues concerning data management. The outcomes of EWIG are intended to be generic enough to be easily adopted by other institutions. University lectures in meteorology were started to teach future scientific generations right from the start how to deal with all sorts of different data in a transparent way. The progress of the project EWIG can be followed on the web via ewig.gfz-potsdam.de
Neural evidence for description dependent reward processing in the framing effect.
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, a phenomenon known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback-related negativity (FRN), which indexes the "worse than expected" negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to "better than expected" positive prediction error, the FRN did not differentiate the positive and negative frames in the loss domain. Our results provide neural evidence that the description invariance principle, which states that reward representation and decision making are not influenced by how options are presented, is violated in the framing effect.
Mutual information-based facial expression recognition
NASA Astrophysics Data System (ADS)
Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah
2013-12-01
This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on interesting studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contributions of this work lie in the proposition of a new approach which supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied the Local Binary Pattern (LBP) operator on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, while reducing the feature vector dimension.
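For reference, the basic 3x3 LBP operator (without the gradient-image preprocessing or the MI-based region selection used in the paper) can be written as:

```python
def lbp_image(img):
    # Basic 3x3 Local Binary Pattern: threshold each pixel's 8 neighbours
    # against the centre pixel and pack the results into an 8-bit code.
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes[y - 1][x - 1] = code
    return codes

img = [
    [5, 4, 3],
    [4, 4, 2],
    [3, 2, 1],
]
print(lbp_image(img))  # -> [[131]]
```

Histograms of these codes over selected face regions then form the feature vector fed to a classifier.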
A Multiobjective Approach Applied to the Protein Structure Prediction Problem
2002-03-07
like a low energy search landscape. 2.1.1 Symbolic/Formalized Problem Domain Description. Every computer representable problem can also be embodied...method [60]. 3.4 Energy Minimization Methods. The energy landscape algorithms are based on the idea that a protein's final resting conformation is...in our GA used to search the PSP problem energy landscape). 3.5.1 Simple GA. The main routine in a sGA, after encoding the problem, builds a
Flexible Manufacturing System Handbook. Volume II. Description of the Technology
1983-02-01
hubs, or wheels with considerable milling, drilling and/or tapping, are usually candidates for inclusion in a prismatic...0.06 inch) to transfer pallets to a machine or unload station. Wheel encoders can be used as less precise feedback for the drive system and its...must be used to control pallet transfer. The Cincinnati Milacron Variable Mission System uses this type of MHS, specifically the Eaton-Kenway Robo
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
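The encoder idea, one local random measurement per block in place of low-pass filtering plus down-sampling, can be sketched as follows (the block size, the ±1 kernel values, and the use of an independent kernel per block are assumptions for illustration, not the paper's exact construction):

```python
import random

def local_random_measurements(img, k=2, seed=7):
    # For each k x k block, convolve with a random +/-1 kernel and keep one
    # measurement, yielding a polyphase down-sampled image whose pixels are
    # local random measurements in the original spatial configuration.
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h - k + 1, k):
        row = []
        for bx in range(0, w - k + 1, k):
            kernel = [[rng.choice((1, -1)) for _ in range(k)] for _ in range(k)]
            m = sum(kernel[i][j] * img[by + i][bx + j]
                    for i in range(k) for j in range(k))
            row.append(m)
        out.append(row)
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
meas = local_random_measurements(img)
print(len(meas), len(meas[0]))  # a 2x2 measurement image from a 4x4 input
```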
Mating-Type Genes and MAT Switching in Saccharomyces cerevisiae
Haber, James E.
2012-01-01
Mating type in Saccharomyces cerevisiae is determined by two nonhomologous alleles, MATa and MATα. These sequences encode regulators of the two different haploid mating types and of the diploids formed by their conjugation. Analysis of the MATa1, MATα1, and MATα2 alleles provided one of the earliest models of cell-type specification by transcriptional activators and repressors. Remarkably, homothallic yeast cells can switch their mating type as often as every generation by a highly choreographed, site-specific homologous recombination event that replaces one MAT allele with different DNA sequences encoding the opposite MAT allele. This replacement process involves the participation of two intact but unexpressed copies of mating-type information at the heterochromatic loci, HMLα and HMRa, which are located at opposite ends of the same chromosome as MAT. The study of MAT switching has yielded important insights into the control of cell lineage, the silencing of gene expression, the formation of heterochromatin, and the regulation of accessibility of the donor sequences. Real-time analysis of MAT switching has provided the most detailed description of the molecular events that occur during the homologous recombinational repair of a programmed double-strand chromosome break. PMID:22555442
Encoding Gaussian curvature in glassy and elastomeric liquid crystal solids
Mostajeran, Cyrus; Ware, Taylor H.; White, Timothy J.
2016-01-01
We describe shape transitions of thin, solid nematic sheets with smooth, preprogrammed, in-plane director fields patterned across the surface causing spatially inhomogeneous local deformations. A metric description of the local deformations is used to study the intrinsic geometry of the resulting surfaces upon exposure to stimuli such as light and heat. We highlight specific patterns that encode constant Gaussian curvature of prescribed sign and magnitude. We present the first experimental results for such programmed solids, and they qualitatively support theory for both positive and negative Gaussian curvature morphing from flat sheets on stimulation by light or heat. We review logarithmic spiral patterns that generate cone/anti-cone surfaces, and introduce spiral director fields that encode non-localized positive and negative Gaussian curvature on punctured discs, including spherical caps and spherical spindles. Conditions are derived where these cap-like, photomechanically responsive regions can be anchored in inert substrates by designing solutions that ensure compatibility with the geometric constraints imposed by the surrounding media. This integration of such materials is a precondition for their exploitation in new devices. Finally, we consider the radial extension of such director fields to larger sheets using nematic textures defined on annular domains. PMID:27279777
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
McPeters, Richard D.; Bhartia, P. K.; Krueger, Arlin J.; Herman, Jay R.; Schlesinger, Barry M.; Wellemeyer, Charles G.; Seftor, Colin J.; Jaross, Glen; Taylor, Steven L.; Swissler, Tom;
1996-01-01
Two data products from the Total Ozone Mapping Spectrometer (TOMS) onboard Nimbus-7 have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format (HDF) files. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the instrument sensitivity are monitored by a spectral discrimination technique using measurements of the intrinsically stable wavelength dependence of derived surface reflectivity. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one-standard-deviation random error is 2 percent, and drift is less than 1.0 percent per decade. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone amount and reflectivity in a 1-degree latitude by 1.25-degree longitude grid. The Level-3 product also is available on CD-ROM. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
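The retrieval step described above compares measured radiance triplets against radiances precomputed for candidate ozone amounts. A toy sketch of such a table-lookup retrieval follows; the grid values and radiances are entirely illustrative, not actual TOMS tables:

```python
import numpy as np

# Hypothetical lookup table: one radiance triplet per candidate
# total-ozone value (numbers are illustrative, not TOMS data).
ozone_grid = np.array([250.0, 300.0, 350.0, 400.0])   # Dobson units
table = np.array([[0.90, 0.70, 0.40],
                  [0.85, 0.62, 0.33],
                  [0.80, 0.55, 0.27],
                  [0.76, 0.49, 0.22]])

def retrieve_ozone(measured):
    """Pick the candidate ozone whose precomputed radiances best match."""
    residuals = np.sum((table - measured) ** 2, axis=1)
    return ozone_grid[np.argmin(residuals)]

print(retrieve_ozone(np.array([0.81, 0.56, 0.28])))  # prints 350.0
```

A real retrieval interpolates over solar zenith angle and optical path as well; this sketch only shows the match-against-calculated-radiances idea.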
Driscoll, R S
2001-04-01
Medical evacuation helicopters are taken for granted in today's military. However, the first use of helicopters for this purpose, in the Korean War, was not intentional but grew out of the necessity of moving patients rapidly over difficult Korean terrain and of the early ebbing of the main battle line. The objective of this essay is to increase historical awareness of military medical evacuation helicopters in the Korean War during this 50th anniversary year. By describing the many challenges and experiences encountered in implementing helicopter evacuation, this essay shows how a technology developed for another use helped make possible the evacuation of nearly 22,000 patients and contributed to a mortality rate among the wounded of 2.4%. The preparation for this essay included archival research of historical reports, records, and oral histories from the archives of the U.S. Army Center for Military History. Additionally, a search of journal articles written during and after the Korean War was conducted. The result is a comprehensive description of the use of medical evacuation helicopters in the Korean War.
A spatio-temporal landslide inventory for the NW of Spain: BAPA database
NASA Astrophysics Data System (ADS)
Valenzuela, Pablo; Domínguez-Cuesta, María José; Mora García, Manuel Antonio; Jiménez-Sánchez, Montserrat
2017-09-01
A landslide database has been created for the Principality of Asturias, NW Spain: the BAPA (Base de datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database). Data collection is mainly performed through searching local newspaper archives. Moreover, a BAPA App and a BAPA website (http://geol.uniovi.es/BAPA) have been developed to obtain additional information from citizens and institutions. Presently, the dataset covers the period 1980-2015, recording 2063 individual landslides. The use of free cartographic servers, such as Google Maps, Google Street View and Iberpix (Government of Spain), combined with the spatial descriptions and pictures contained in the press news, makes it possible to assess different levels of spatial accuracy. In the database, 59% of the records show an exact spatial location, and 51% provide accurate dates, showing the usefulness of press archives as temporal records. Thus, 32% of the landslides show the highest spatial and temporal accuracy levels. The database also gathers information about the type and characteristics of the landslides, the triggering factors, and the damage and costs caused. Field work was conducted to validate the methodology used in assessing the spatial location, temporal occurrence and characteristics of the landslides.
Encoding the structure of many-body localization with matrix product operators
NASA Astrophysics Data System (ADS)
Pekker, David; Clark, Bryan K.
2017-01-01
Anderson insulators are noninteracting disordered systems which have localized single-particle eigenstates. The interacting analog of Anderson insulators are the many-body localized (MBL) phases. The spectrum of the many-body eigenstates of an Anderson insulator is efficiently represented as a set of product states over the single-particle modes. We show that product states over matrix product operators of small bond dimension is the corresponding efficient description of the spectrum of an MBL insulator. In this language all of the many-body eigenstates are encoded by matrix product states (i.e., density matrix renormalization group wave functions) consisting of only two sets of low bond dimension matrices per site: the Gi matrices corresponding to the local ground state on site i and the Ei matrices corresponding to the local excited state. All 2n eigenstates can be generated from all possible combinations of these sets of matrices.
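The claim above, that all 2^n many-body eigenstates follow from per-site choices between two small sets of matrices, can be illustrated with a toy tensor contraction. The random tensors below are placeholders for the actual Gi/Ei matrices the paper derives from the MBL structure; only the combinatorial construction is being illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)
n, chi, d = 3, 2, 2  # sites, bond dimension, local (spin) dimension

# One "ground" (G) and one "excited" (E) MPS tensor per site, shape
# (chi, chi, d); random entries stand in for the paper's matrices.
G = [rng.normal(size=(chi, chi, d)) for _ in range(n)]
E = [rng.normal(size=(chi, chi, d)) for _ in range(n)]

def eigenstate(bits):
    """Contract the chosen per-site tensors into a d**n state vector."""
    tensors = [E[i] if b else G[i] for i, b in enumerate(bits)]
    psi = tensors[0]                        # (chi, chi, d)
    for A in tensors[1:]:
        # contract the shared bond index, stacking physical indices
        psi = np.einsum('abs,bct->acst', psi.reshape(chi, chi, -1), A)
        psi = psi.reshape(chi, chi, -1)
    vec = np.trace(psi, axis1=0, axis2=1).ravel()
    return vec / np.linalg.norm(vec)

# all 2**n eigenstates come from the 2**n choices of G vs E per site
states = [eigenstate([(k >> i) & 1 for i in range(n)]) for k in range(2**n)]
```

The point of the construction is economy: 2n small tensors encode 2^n states, rather than 2^n independent vectors of length d^n.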
A tensorial description of particle perception in black-hole physics
NASA Astrophysics Data System (ADS)
Barbado, Luis C.; Barceló, Carlos; Garay, Luis J.; Jannes, G.
2016-09-01
In quantum field theory in curved backgrounds, one typically distinguishes between objective, tensorial quantities such as the renormalized stress-energy tensor (RSET) and subjective, nontensorial quantities such as Bogoliubov coefficients which encode perception effects associated with the specific trajectory of a detector. In this work, we propose a way to treat both objective and subjective notions on an equal tensorial footing. For that purpose, we define a new tensor which we will call the perception renormalized stress-energy tensor (PeRSET). The PeRSET is defined as the subtraction of the RSET corresponding to two different vacuum states. Based on this tensor, we can define perceived energy densities and fluxes. The PeRSET helps us to have a more organized and systematic understanding of various results in the literature regarding quantum field theory in black hole spacetimes. We illustrate the physics encoded in this tensor by working out various examples of special relevance.
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of quantum data compression in a lossless way, we present here an operational interpretation for the family of quantum Rényi entropies. In order to do this, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. Then, in the standard situation, where one aims to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that, by invoking an exponential average length related to an exponential penalization of long codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes with the source description, playing an analogous role to that of the von Neumann entropy.
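For intuition, the classical analogue of this result (Campbell's coding theorem) already shows Rényi entropies emerging as the bound on exponentially penalized code lengths. A minimal, purely classical sketch with an illustrative dyadic source:

```python
import numpy as np

def renyi(p, alpha):
    """Rényi entropy (base 2); alpha = 1 is handled as the Shannon limit."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
# Shannon entropy bounds the usual average code length
# (for this dyadic source the optimal lengths 1,2,3,3 meet it exactly) ...
h1 = renyi(p, 1.0)
# ... while Rényi entropies of order < 1 bound the exponentially
# penalized average length, which weighs long codewords more heavily.
h_half = renyi(p, 0.5)
```

Since the Rényi entropy is non-increasing in its order, the penalized bound (order 1/2 here) exceeds the Shannon bound, reflecting the extra cost of discouraging long codewords.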
WMT: The CSDMS Web Modeling Tool
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can:
- Design a model from a set of components
- Edit component parameters
- Save models to a web-accessible server
- Share saved models with the community
- Submit runs to an HPC system
- Download simulation results
The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API:
- wmt-db: database of component, model, and simulation metadata and output
- wmt-api: configure and connect components
- wmt-exe: launch simulations on remote execution servers
The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation.
Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
Fermilab History and Archives Project | Norman F. Ramsey
A novel gene network inference algorithm using predictive minimum description length approach.
Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang
2010-05-28
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and that eliminates the need for a fine-tuning parameter.
The evaluation results obtained from both synthetic and actual biological data sets show that the PMDL principle is effective in determining the MI threshold and the developed algorithm improves precision of gene regulatory network inference. Based on the sensitivity analysis of all tested cases, an optimal CMI threshold value has been identified. Finally it was observed that the performance of the algorithms saturates at a certain threshold of data size.
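The MI-thresholding step at the core of this family of algorithms can be sketched as follows. The equal-width binning and the hand-picked threshold below are stand-ins for the PMDL-derived threshold the paper actually proposes:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y, bins=3):
    """Plug-in MI (bits) between two expression profiles after binning."""
    xb = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    n = len(x)
    pxy = Counter(zip(xb, yb))
    px, py = Counter(xb), Counter(yb)
    return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

rng = np.random.default_rng(2)
g1 = rng.normal(size=50)
g2 = g1 + 0.1 * rng.normal(size=50)   # strongly coupled with g1
g3 = rng.normal(size=50)              # independent of g1

# Infer an edge wherever MI exceeds a threshold; in the paper the PMDL
# principle chooses this threshold, here it is simply hand-picked.
threshold = 0.3
edges = [(a, b)
         for a, b, x, y in [("g1", "g2", g1, g2), ("g1", "g3", g1, g3)]
         if mutual_information(x, y) > threshold]
```

The whole difficulty the abstract describes lives in choosing `threshold`: plug-in MI between independent profiles is biased above zero for finite samples, which is why a principled (MDL/PMDL) selection beats a user-specified constant.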
Marwede, Dirk; Schulz, Thomas; Kahn, Thomas
2008-12-01
To validate a preliminary version of a radiological lexicon (RadLex) against terms found in thoracic CT reports and to index report content in RadLex term categories. Terms from a random sample of 200 thoracic CT reports were extracted using a text processor and matched against RadLex. Report content was manually indexed by two radiologists in consensus in the term categories Anatomic Location, Finding, Modifier, Relationship, Image Quality, and Uncertainty. Descriptive statistics were used, and differences between age groups and report types were tested for significance using the Kruskal-Wallis and Mann-Whitney tests (significance level <0.05). Of 363 terms extracted, 304 (84%) were found and 59 (16%) were not found in RadLex. Report indexing showed a mean of 16.2 encoded items and 3.2 findings per report. The term categories most frequently encoded were Modifier (1,030 of 3,244, 31.8%), Anatomic Location (813, 25.1%), Relationship (702, 21.6%) and Finding (638, 19.7%). The frequency of indexed items per report was higher in older age groups, but no significant difference was found between first-study and follow-up study reports. The frequency of distinct findings per report increased with patient age (p < 0.05). RadLex already covers most terms present in thoracic CT reports, based on a small sample analysis from one institution. Applications for report encoding need to be developed to validate the lexicon against a larger sample of reports and to address the issue of automatic relationship encoding.
A knowledge representation view on biomedical structure and function.
Schulz, Stefan; Hahn, Udo
2002-01-01
In biomedical ontologies, structural and functional considerations are of outstanding importance, and concepts which belong to these two categories are highly interdependent. At the representational level both axes must be clearly kept separate in order to support disciplined ontology engineering. Furthermore, the biaxial organization of physical structure (both by a taxonomic and partonomic order) entails intricate patterns of inference. We here propose a layered encoding of taxonomic, partonomic and functional aspects of biomedical concepts using description logics. PMID:12463912
Application guide for universal source encoding for space
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Miller, Warner H.
1993-01-01
Lossless data compression was studied for many NASA missions. The Rice algorithm was demonstrated to provide better performance than other available techniques on most scientific data. A top-level description of the Rice algorithm is first given, along with some new capabilities implemented in both software and hardware forms. Systems issues important for onboard implementation, including sensor calibration, error propagation, and data packetization, are addressed. The latter part of the guide provides twelve case study examples drawn from a broad spectrum of science instruments.
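As a reminder of how the Rice algorithm works at its core, here is a minimal sketch of the power-of-two Golomb (Rice) code for non-negative integers. The full flight scheme described in the guide adds block-adaptive parameter selection and sensor preprocessing that are not shown here:

```python
def rice_encode(n, k):
    """Rice code of non-negative n with parameter k: the quotient
    n >> k in unary (q ones plus a terminating zero), then the
    remainder as k plain bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    q = bits.index("0")                      # unary quotient
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# e.g. k=2: 11 -> quotient 2, remainder 3 -> "110" + "11"
assert rice_encode(11, 2) == "11011"
assert rice_decode("11011", 2) == 11
```

Small values of k suit gently varying data; larger k suits noisier data, which is why the operational coder re-selects k block by block.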
Fact Retrieval for the 1980’s,
1981-07-01
System (SANSS) which is used to identify a chemical substance, given its name or its structure, and refer the user to all CIS files that contain data...Information Interchange Data Description File Format." In 1978, this draft became the substance for the ANSI X3L5 committee, which included several..."Encoding and Decoding of Facts" Communications in the 1980s will require protection from eavesdropping and abuse. It is against the law to tap a telephone
Discord as a quantum resource for bi-partite communication
NASA Astrophysics Data System (ADS)
Chrzanowski, Helen M.; Gu, Mile; Assad, Syed M.; Symul, Thomas; Modi, Kavan; Ralph, Timothy C.; Vedral, Vlatko; Lam, Ping Koy
2014-12-01
Coherent interactions that generate negligible entanglement can still exhibit unique quantum behaviour. This observation has motivated a search beyond entanglement for a complete description of all quantum correlations. Quantum discord is a promising candidate. Here, we experimentally demonstrate that under certain measurement constraints, discord between bipartite systems can be consumed to encode information that can only be accessed by coherent quantum interactions. The inability to access this information by any other means allows us to use discord to directly quantify this `quantum advantage'.
Diagnosis and prediction of neuroendocrine liver metastases: a protocol of six systematic reviews.
Arigoni, Stephan; Ignjatovic, Stefan; Sager, Patrizia; Betschart, Jonas; Buerge, Tobias; Wachtl, Josephine; Tschuor, Christoph; Limani, Perparim; Puhan, Milo A; Lesurtel, Mickael; Raptis, Dimitri A; Breitenstein, Stefan
2013-12-23
Patients with hepatic metastases from neuroendocrine tumors (NETs) benefit from an early diagnosis, which is crucial for the optimal therapy and management. Diagnostic procedures include morphological and functional imaging, identification of biomarkers, and biopsy. The aim of six systematic reviews discussed in this study is to assess the predictive value of Ki67 index and other biomarkers, to compare the diagnostic accuracy of morphological and functional imaging, and to define the role of biopsy in the diagnosis and prediction of neuroendocrine tumor liver metastases. An objective group of librarians will provide an electronic search strategy to examine the following databases: MEDLINE, EMBASE and The Cochrane Library (Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials (CENTRAL), Database of Abstracts of Reviews of Effects). There will be no restriction concerning language and publication date. The qualitative and quantitative synthesis of the systematic review will be conducted with randomized controlled trials (RCT), prospective and retrospective comparative cohort studies, and case-control studies. Case series will be collected in a separate database and only used for descriptive purposes. This study is ongoing and presents a protocol of six systematic reviews to elucidate the role of histopathological and biochemical markers, biopsies of the primary tumor and the metastases as well as morphological and functional imaging modalities for the diagnosis and prediction of neuroendocrine liver metastases. These systematic reviews will assess the value and accuracy of several diagnostic modalities in patients with NET liver metastases, and will provide a basis for the development of clinical practice guidelines. 
The systematic reviews have been prospectively registered with the International Prospective Register of Systematic Reviews (PROSPERO): CRD42012002644; http://www.metaxis.com/prospero/full_doc.asp?RecordID=2644 (Archived by WebCite at http://www.webcitation.org/6LzCLd5sF), CRD42012002647; http://www.metaxis.com/prospero/full_doc.asp?RecordID=2647 (Archived by WebCite at http://www.webcitation.org/6LzCRnZnO), CRD42012002648; http://www.metaxis.com/prospero/full_doc.asp?RecordID=2648 (Archived by WebCite at http://www.webcitation.org/6LzCVeuVR), CRD42012002649; http://www.metaxis.com/prospero/full_doc.asp?RecordID=2649 (Archived by WebCite at http://www.webcitation.org/6LzCZzZWU), CRD42012002650; http://www.metaxis.com/prospero/full_doc.asp?RecordID=2650 (Archived by WebCite at http://www.webcitation.org/6LzDPhGb8), CRD42012002651; http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42012002651#.UrMglPRDuVo (Archived by WebCite at http://www.webcitation.org/6LzClCNff).
Mubemba, B; Thompson, P N; Odendaal, L; Coetzee, P; Venter, E H
2017-05-01
Rift Valley fever (RVF), caused by an arthropod-borne Phlebovirus in the family Bunyaviridae, is a haemorrhagic disease that affects ruminants and humans. Due to the zoonotic nature of the virus, a biosafety level 3 laboratory is required for isolation of the virus. Fresh and frozen samples are the preferred sample type for isolation and acquisition of sequence data. However, these samples are scarce, in addition to posing a health risk to laboratory personnel. Archived formalin-fixed, paraffin-embedded (FFPE) tissue samples are safe and readily available; however, FFPE-derived RNA is in most cases degraded and cross-linked in peptide bonds, and it is unknown whether the sample type would be suitable as reference material for retrospective phylogenetic studies. A RT-PCR assay targeting a 490 nt portion of the structural Gn glycoprotein encoding gene of the RVFV M-segment was applied to total RNA extracted from archived RVFV-positive FFPE samples. Several attempts to obtain target amplicons were unsuccessful. FFPE samples were then analysed using next generation sequencing (NGS), i.e. TruSeq® (Illumina), and sequenced on the MiSeq® genome analyser (Illumina). Using reference mapping, gapped virus sequence data of varying degrees of shallow depth was aligned to a reference sequence. However, the NGS did not yield contigs long enough to consistently cover the same genome regions in all samples to allow phylogenetic analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Tulik, M
2001-08-01
Studies were carried out on wood samples collected in October 1997 at breast height from Scots pine trees (Pinus sylvestris) at a site located 5 km south of the Chernobyl nuclear power plant. The radioactive contamination at the site was 3.7 × 10^5 kBq m^-2. These samples of secondary wood were used as an archive of information about the dynamics of the meristematic tissue (cambium) affected by ionising radiation from the Chernobyl reactor accident. The results show that the frequency of cambial cell events such as anticlinal divisions, intrusive growth and cell elimination was, after the Chernobyl accident, about three times higher than in preceding years. The most interesting finding was that after irradiation the length of tracheids increased. This increase is interpreted as an effect of intracambial competition among cells in the initial layer.
Indexing method of digital audiovisual medical resources with semantic Web integration.
Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre
2003-01-01
Digitization of audio-visual resources, combined with the performance of modern networks, offers many possibilities that are the subject of intensive work in the scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has been developing MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable fast, efficient retrieval from digital archives or filtering of audiovisual broadcasts on the internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding of audio-visual resources and access to them in streaming mode.
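The Dublin Core element set named above is small enough to illustrate directly. A sketch of one record follows; the sample values and the placement of a MeSH descriptor in dc:subject are hypothetical illustrations of the paper's approach, not taken from it:

```python
# A minimal Dublin Core-style record for a medical teaching video.
# Element names (dc:title, dc:creator, ...) are the standard 15-element
# set; all values below are invented for illustration.
record = {
    "dc:title": "Laparoscopic cholecystectomy, teaching sequence",
    "dc:creator": "University hospital media service",
    "dc:subject": ["Cholecystectomy, Laparoscopic"],   # MeSH descriptor
    "dc:description": "Edited teaching video, 12 min",
    "dc:format": "video/mp4",
    "dc:identifier": "http://example.org/video/42",
}
```

Keying dc:subject to controlled MeSH descriptors (rather than free text) is what makes the conceptual navigation via UMLS possible: each descriptor maps to a concept that can be expanded or related through the Metathesaurus.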
ontologyX: a suite of R packages for working with ontological data.
Greene, Daniel; Richardson, Sylvia; Turro, Ernest
2017-04-01
Ontologies are widely used constructs for encoding and analyzing biomedical data, but the absence of simple and consistent tools has made exploratory and systematic analysis of such data unnecessarily difficult. Here we present three packages which aim to simplify such procedures. The ontologyIndex package enables arbitrary ontologies to be read into R, supports representation of ontological objects by native R types, and provides a parsimonious set of performant functions for querying ontologies. ontologySimilarity and ontologyPlot extend ontologyIndex with functionality for straightforward visualization and semantic similarity calculations, including statistical routines. ontologyIndex, ontologyPlot and ontologySimilarity are all available on the Comprehensive R Archive Network website under https://cran.r-project.org/web/packages/ . Daniel Greene dg333@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Neural evidence for description dependent reward processing in the framing effect
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, a phenomenon known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback-related negativity (FRN), which indexes the “worse than expected” negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to “better than expected” positive prediction error, the FRN did not differentiate the positive and negative frames in the loss domain. Our results provide neural evidence that the description invariance principle, which states that reward representation and decision making are not influenced by how options are presented, is violated in the framing effect. PMID:24733998
Observations of red-giant variable stars by Aboriginal Australians
NASA Astrophysics Data System (ADS)
Hamacher, Duane W.
2018-04-01
Aboriginal Australians carefully observe the properties and positions of stars, including both overt and subtle changes in their brightness, for subsistence and social application. These observations are encoded in oral tradition. I examine two Aboriginal oral traditions from South Australia that describe the periodic changing brightness in three pulsating, red-giant variable stars: Betelgeuse (Alpha Orionis), Aldebaran (Alpha Tauri), and Antares (Alpha Scorpii). The Australian Aboriginal accounts stand as the only known descriptions of pulsating variable stars in any Indigenous oral tradition in the world. Researchers examining these oral traditions over the last century, including anthropologists and astronomers, missed the description of these stars as being variable in nature as the ethnographic record contained several misidentifications of stars and celestial objects. Arguably, ethnographers working on Indigenous Knowledge Systems should have academic training in both the natural and social sciences.
Description of the TCERT Vetting Reports for Data Release 25
NASA Technical Reports Server (NTRS)
Van Cleve, Jeffrey E.; Caldwell, Douglas A.
2016-01-01
This document, the Kepler Instrument Handbook (KIH), is for Kepler and K2 observers, which includes the Kepler Science Team, Guest Observers (GOs), and astronomers doing archival research on Kepler and K2 data in NASA's Astrophysics Data Analysis Program (ADAP). The KIH provides information about the design, performance, and operational constraints of the Kepler flight hardware and software, and an overview of the pixel data sets available. The KIH is meant to be read with these companion documents:
1. Kepler Data Processing Handbook (KSCI-19081), or KDPH (Jenkins et al., 2016). The KDPH describes how pixels downlinked from the spacecraft are converted by the Kepler Data Processing Pipeline (henceforth just the pipeline) into the data products delivered to the MAST archive.
2. Kepler Archive Manual (KDMC-10008), or KAM (Thompson et al., 2016). The KAM describes the format and content of the data products, and how to search for them.
3. Kepler Data Characteristics Handbook (KSCI-19040), or KDCH (Christiansen et al., 2016). The KDCH describes recurring non-astrophysical features of the Kepler data due to instrument signatures, spacecraft events, or solar activity, and explains how these characteristics are handled by the pipeline.
4. Kepler Data Release Notes 25 (KSCI-19065), or DRN 25 (Thompson et al., 2015). DRN 25 describes signatures and events peculiar to individual quarters, and the pipeline software changes between a data release and the one preceding it.
Together, these documents supply the information necessary for obtaining and understanding Kepler results, given the real properties of the hardware and the data analysis methods used, and for an independent evaluation of the methods used if so desired.
NASA Astrophysics Data System (ADS)
Dolak, Lukas; Brazdil, Rudolf; Chroma, Katerina; Valasek, Hubert; Belinova, Monika; Reznickova, Ladislava
2016-04-01
Different documentary evidence (taxation records, chronicles, insurance reports, etc.) is used for the reconstruction of hydrometeorological extremes (HMEs) in the Jihlava region (central part of the recent Czech Republic) in the 17th-19th centuries. The aims of the study are to describe the system of tax alleviation in Moravia, to present the use of early fire and hail damage insurance claims, and to apply new methodological approaches to the analysis of HME impacts. During the period studied, more than 400 HMEs were analysed for the 16 estates (past basic economic units). Late frost on 16 May 1662 on the Nove Mesto na Morave estate, which destroyed entire cereal crops and caused damage in the forests, is the first recorded extreme event. Downpours causing flash floods and hailstorms are the most frequently recorded natural disasters. Moreover, floods, droughts, windstorms, blizzards, late frosts and lightning strikes starting fires caused enormous damage as well. The impacts of HMEs are classified into three categories: impacts on agricultural production, impacts on material property, and socio-economic impacts. Natural disasters caused losses of human lives, property, supplies and farming equipment. HMEs damaged fields and meadows, depleted livestock, and triggered secondary consequences such as a lack of seed and finance, high prices, indebtedness, poverty and deterioration in field fertility. The results are discussed with respect to the uncertainties associated with documentary evidence and its spatiotemporal distribution. Archival records preserved in the Moravian Land Archives in Brno and other district archives constitute a unique source of data contributing to a better understanding of extreme events and their impacts.
Climate data system supports FIRE
NASA Technical Reports Server (NTRS)
Olsen, Lola M.; Iascone, Dominick; Reph, Mary G.
1990-01-01
The NASA Climate Data System (NCDS) at Goddard Space Flight Center is serving as the FIRE Central Archive, providing a centralized data holding and data cataloging service for the FIRE project. NCDS members are carrying out their responsibilities by holding all reduced observations and data analysis products submitted by individual principal investigators in the agreed-upon format, by holding all satellite data sets required for FIRE, by providing copies of any of these data sets to FIRE investigators, and by producing and updating a catalog with information about the FIRE holdings. FIRE researchers were requested to provide their reduced data sets in the Standard Data Format (SDF) to the FIRE Central Archive. This standard format is proving to be of value. An improved SDF document is now available. The document provides an example from an actual FIRE SDF data set and clearly states the guidelines for formatting data in SDF. NCDS has received SDF tapes from a number of investigators. These tapes were analyzed and comments provided to the producers. One product which is now available is William J. Syrett's sodar data product from the Stratocumulus Intensive Field Observation. Sample plots from all SDF tapes submitted to the archive will be available to FSET members. Related cloud products are also available through NCDS. Entries describing the FIRE data sets are being provided for the NCDS on-line catalog. Detailed information for the Extended Time Observations is available in the general FIRE catalog entry. Separate catalog entries are being written for the Cirrus Intensive Field Observation (IFO) and for the Marine Stratocumulus IFO. Short descriptions of each FIRE data set will be installed into the NCDS Summary Catalog.
Jeon, Jae-Hyung; Chechkin, Aleksei V; Metzler, Ralf
2014-08-14
Anomalous diffusion is frequently described by scaled Brownian motion (SBM), a Gaussian process with a power-law time dependent diffusion coefficient. Its mean squared displacement is ⟨x²(t)⟩ ≃ 2K(t)t with K(t) ≃ t^(α−1) for 0 < α < 2. SBM may provide a seemingly adequate description in the case of unbounded diffusion, for which its probability density function coincides with that of fractional Brownian motion. Here we show that free SBM is weakly non-ergodic but does not exhibit a significant amplitude scatter of the time averaged mean squared displacement. More severely, we demonstrate that under confinement, the dynamics encoded by SBM is fundamentally different from both fractional Brownian motion and continuous time random walks. SBM is highly non-stationary and cannot provide a physical description for particles in a thermalised stationary system. Our findings have direct impact on the modelling of single particle tracking experiments, in particular, under confinement inside cellular compartments or when optical tweezers tracking methods are used.
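The scaling law above, ⟨x²(t)⟩ ≃ 2K(t)t with K(t) ≃ t^(α−1), i.e. ⟨x²(t)⟩ ∝ t^α, is easy to check numerically. The following is a minimal sketch (not from the paper; parameter values are illustrative) that simulates SBM as independent Gaussian increments with a power-law time-dependent variance and recovers the exponent α from the ensemble-averaged MSD:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sbm(alpha, k0=0.5, n_steps=1000, dt=0.01, n_traj=2000):
    """Scaled Brownian motion: independent Gaussian increments whose
    variance follows a power-law diffusivity K(t) ~ t^(alpha-1)."""
    t_edges = np.arange(n_steps + 1) * dt
    # The ensemble MSD is <x^2(t)> = 2*k0*t^alpha, so the variance gained
    # over the interval [t_i, t_{i+1}] is 2*k0*(t_{i+1}^alpha - t_i^alpha).
    var = 2.0 * k0 * (t_edges[1:] ** alpha - t_edges[:-1] ** alpha)
    steps = rng.normal(0.0, 1.0, size=(n_traj, n_steps)) * np.sqrt(var)
    return t_edges[1:], np.cumsum(steps, axis=1)

alpha = 0.5
t, x = simulate_sbm(alpha)
msd = (x ** 2).mean(axis=0)                      # ensemble-averaged MSD
# The log-log slope of the MSD recovers the anomalous exponent alpha.
slope = np.polyfit(np.log(t), np.log(msd), 1)[0]
```

For 0 < α < 1 this reproduces the subdiffusive ensemble scaling; the paper's point is precisely that this ensemble behaviour hides the weakly non-ergodic, non-stationary character that shows up in time-averaged quantities and under confinement.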
Fermilab History and Archives Project
Fermi National Accelerator Laboratory
Dialog detection in narrative video by shot and face analysis
NASA Astrophysics Data System (ADS)
Kroon, B.; Nesvadba, J.; Hanjalic, A.
2007-01-01
The proliferation of captured personal and broadcast content in personal consumer archives necessitates comfortable access to stored audiovisual content. Intuitive retrieval and navigation solutions require, however, a semantic level that cannot be reached by generic multimedia content analysis alone. A fusion with film grammar rules can help to boost the reliability significantly. The current paper describes the fusion of low-level content analysis cues, including face parameters and inter-shot similarities, to segment commercial content into film-grammar-rule-based entities and subsequently classify those sequences into so-called shot reverse shots, i.e. dialog sequences. Moreover, shot-reverse-shot-specific mid-level cues are analyzed, augmenting the shot reverse shot information with dialog-specific descriptions.
Solar Astronomy Data Base: Packaged Information on Diskette
NASA Technical Reports Server (NTRS)
Mckinnon, John A.
1990-01-01
In its role as a library, the National Geophysical Data Center has transferred to diskette a collection of small, digital files of routinely measured solar indices for use on an IBM-compatible desktop computer. Recording these observations on diskette allows the distribution of specialized information to researchers with a wide range of expertise in computer science and solar astronomy. Every data set was made self-contained by including formats, extraction utilities, and plain-language descriptive text. Moreover, for several archives, two versions of the observations are provided - one suitable for display, the other for analysis with popular software packages. Since the files contain no control characters, each one can be modified with any text editor.
NASA Technical Reports Server (NTRS)
Henderson, F. B. (Editor); Rock, B. N. (Editor)
1983-01-01
Consideration is given to: the applications of near-infrared spectroscopy to geological reconnaissance and exploration from space; imaging systems for identifying the spectral properties of geological materials in the visible and near-infrared; and Thematic Mapper (TM) data analysis. Consideration is also given to descriptions of individual geological remote sensing systems, including: GEO-SPAS; SPOT; the Thermal Infrared Multispectral Scanner (TIMS); and the Shuttle Imaging Radars A and B (SIR-A and SIR-B). Additional topics include: the importance of geobotany in geological remote sensing; achromatic holographic stereograms from Landsat MSS data; and the availability and applications of NOAA's non-Landsat satellite data archive.
36 CFR 1280.66 - May I use the National Archives Library?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...
Getting Personal: Personal Archives in Archival Programs and Curricula
ERIC Educational Resources Information Center
Douglas, Jennifer
2017-01-01
In 2001, Catherine Hobbs referred to silences around personal archives, suggesting that these types of archives were not given as much attention as organizational archives in the development of archival theory and methodology. The aims of this article are twofold: 1) to investigate the extent to which such silences exist in archival education…
NASA Astrophysics Data System (ADS)
Pascoe, C. L.
2017-12-01
The Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modelling teams since 1995. This has led to a better understanding of past, present, and future climate. The 2017 sixth phase of the CMIP process (CMIP6) consists of a suite of common experiments, and 21 separate CMIP-Endorsed Model Intercomparison Projects (MIPs) making a total of 244 separate experiments. Precise descriptions of the suite of CMIP6 experiments have been captured in a Common Information Model (CIM) database by the Earth System Documentation Project (ES-DOC). The database contains descriptions of forcings, model configuration requirements, ensemble information and citation links, as well as text descriptions and information about the rationale for each experiment. The database was built from statements about the experiments found in the academic literature, the MIP submissions to the World Climate Research Programme (WCRP), WCRP summary tables and correspondence with the principal investigators for each MIP. The database was collated using spreadsheets which are archived in the ES-DOC Github repository and then rendered on the ES-DOC website. A diagrammatic view of the workflow of building the database of experiment metadata for CMIP6 is shown in the attached figure. The CIM provides the formalism to collect detailed information from diverse sources in a standard way across all the CMIP6 MIPs. The ES-DOC documentation acts as a unified reference for CMIP6 information to be used both by data producers and consumers. This is especially important given the federated nature of the CMIP6 project. Because the CIM allows forcing constraints and other experiment attributes to be referred to by more than one experiment, we can streamline the process of collecting information from modelling groups about how they set up their models for each experiment.
End users of the climate model archive will be able to ask questions enabled by the interconnectedness of the metadata such as "Which MIPs make use of experiment A?" and "Which experiments use forcing constraint B?".
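Such interconnected queries follow naturally once experiments and their shared attributes are stored as cross-referenced records. A toy in-memory sketch (all names here are hypothetical placeholders, not actual CMIP6 identifiers; the real ES-DOC database uses the CIM, not Python dictionaries):

```python
# Hypothetical, simplified experiment-metadata records. Because a forcing
# constraint can be referenced by more than one experiment, both query
# directions ("which MIP?" and "which experiments share forcing X?") work.
experiments = {
    "expA": {"mip": "mipAlpha", "forcings": ["forcingB", "forcingC"]},
    "expB": {"mip": "mipBeta",  "forcings": ["forcingB"]},
    "expC": {"mip": "mipAlpha", "forcings": ["forcingD"]},
}

def mip_of(experiment):
    """Which MIP does a given experiment belong to?"""
    return experiments[experiment]["mip"]

def experiments_using_forcing(forcing):
    """Which experiments use a given forcing constraint?"""
    return sorted(e for e, rec in experiments.items()
                  if forcing in rec["forcings"])

shared = experiments_using_forcing("forcingB")
```

The design choice mirrored here is the one the abstract describes: attributes are stored once and referred to by id, so consistency is maintained and the archive can be queried from either direction.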
McAleavey, Stephen A
2014-05-01
Shear wave induced phase encoding (SWIPE) imaging generates ultrasound backscatter images of tissue-like elastic materials by using traveling shear waves to encode the lateral position of the scatterers in the phase of the received echo. In contrast to conventional ultrasound B-scan imaging, SWIPE offers the potential advantages of image formation without beam focusing or steering from a single transducer element, lateral resolution independent of aperture size, and the potential to achieve relatively high lateral resolution with low frequency ultrasound. Here a Fourier series description of the phase modulated echo signal is developed, demonstrating that echo harmonics at multiples of the shear wave frequency reveal target k-space data at identical multiples of the shear wavenumber. Modulation transfer functions of SWIPE imaging systems are calculated for maximum shear wave acceleration and maximum shear constraints, and compared with a conventionally focused aperture. The relative signal-to-noise ratio of the SWIPE method versus a conventionally focused aperture is found through these calculations. Reconstructions of wire targets in a gelatin phantom using 1 and 3.5 MHz ultrasound and a cylindrical shear wave source are presented, generated from the fundamental and second harmonic of the shear wave modulation frequency, demonstrating weak dependence of lateral resolution on ultrasound frequency.
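The Fourier-series structure of a phase-modulated echo rests on the Jacobi-Anger expansion: a pure sinusoidal phase modulation exp(iβ sin ωt) has Fourier coefficients J_k(β), the Bessel functions of the first kind, so the k-th echo harmonic carries target information at the k-th multiple of the shear wavenumber. A minimal numerical sketch of that identity (illustrative parameters, not the paper's imaging model; numpy only, with J_k evaluated from its integral representation):

```python
import numpy as np

def bessel_j(k, beta, m=20000):
    """J_k(beta) via the integral representation
    J_k(b) = (1/pi) * integral_0^pi cos(k*theta - b*sin(theta)) d(theta),
    evaluated with the midpoint rule."""
    dtheta = np.pi / m
    theta = (np.arange(m) + 0.5) * dtheta
    return np.cos(k * theta - beta * np.sin(theta)).sum() * dtheta / np.pi

# A pure phase-modulated echo; beta is a hypothetical modulation depth
# standing in for the lateral-position-dependent phase.
beta = 1.2
n = 4096
t = np.arange(n) / n                          # exactly one shear-wave period
echo = np.exp(1j * beta * np.sin(2 * np.pi * t))

# Jacobi-Anger: exp(i*b*sin(w*t)) = sum_k J_k(b) * exp(i*k*w*t),
# so the DFT bin at harmonic k recovers J_k(beta).
coeffs = np.fft.fft(echo) / n
```

Each `coeffs[k]` should match `bessel_j(k, beta)` to numerical precision, which is the sense in which harmonic k of the echo isolates one multiple of the modulation wavenumber.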
The Primate Life History Database: A unique shared ecological data resource
Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.
2011-01-01
The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066
Moulden, Heather M; Firestone, Philip; Wexler, Audrey F
2007-08-01
The aim of this investigation was to undertake an exploratory analysis of child care providers who sexually offend against children and adolescents and the circumstances related to these offences. Archival Violent Crime Linkage Analysis System (ViCLAS) reports were obtained from the Royal Canadian Mounted Police (RCMP), and demographic and criminal characteristics for the offender, as well as information about the victim and offence, were selected for analyses. A descriptive approach was used to analyze the qualitative reports for a group of 305 Canadian sexual offenders between 1995 and 2002. Adult male (N = 163) and female (N = 14), along with juvenile male (N = 100) and female (N = 28) child care providers who were involved in a sexual offence against a child or adolescent are described. This article provides unique information about the crimes committed by child care providers in that it is focused on crime characteristics, rather than on personality or treatment variables. Furthermore, it represents a comprehensive examination of this type of offender by including understudied groups, namely juvenile and female offenders.
Enhanced Management of and Access to Hurricane Sandy Ocean and Coastal Mapping Data
NASA Astrophysics Data System (ADS)
Eakins, B.; Neufeld, D.; Varner, J. D.; McLean, S. J.
2014-12-01
NOAA's National Geophysical Data Center (NGDC) has significantly improved the discovery and delivery of its geophysical data holdings, initially targeting ocean and coastal mapping (OCM) data in the U.S. coastal region impacted by Hurricane Sandy in 2012. We have developed a browser-based, interactive interface that permits users to refine their initial map-driven data-type choices prior to bulk download (e.g., by selecting individual surveys), including the ability to choose ancillary files, such as reports or derived products. Initial OCM data types now available in a U.S. East Coast map viewer, as well as underlying web services, include NOS hydrographic soundings and multibeam sonar bathymetry. Future releases will include trackline geophysics, airborne topographic and bathymetric-topographic lidar, bottom sample descriptions, and digital elevation models. This effort also includes working collaboratively with other NOAA offices and partners to develop automated methods to receive and verify data, stage data for archive, and notify data providers when ingest and archive are completed. We have also developed improved metadata tools to parse XML and auto-populate OCM data catalogs, support the web-based creation and editing of ISO-compliant metadata records, and register metadata in appropriate data portals. This effort supports a variety of NOAA mission requirements, from safe navigation to coastal flood forecasting and habitat characterization.
Sackett, Penelope C.; McConnell, Vicki S.; Roach, Angela L.; Priest, Susan S.; Sass, John H.
1999-01-01
Phase III of the Long Valley Exploratory Well, the Long Valley Coring Project, obtained continuous core between the depths of 7,180 and 9,831 ft (2,188 to 2,996 meters) during the summer of 1998. This report contains a compendium of information designed to facilitate post-drilling research focused on the study of the core. Included are a preliminary stratigraphic column compiled primarily from field observations and a general description of well lithology for the Phase III drilling interval. Also included are high-resolution digital photographs of every core box (10 feet per box) as well as scanned images of pieces of recovered core. The user can easily move from the stratigraphic column to corresponding core box photographs for any depth. From there, compressed, "unrolled" images of the individual core pieces (core scans) can be accessed. Those interested in higher-resolution core scans can go to archive CD-ROMs stored at a number of locations specified herein. All core is stored at the USGS Core Research Center in Denver, Colorado where it is available to researchers following the protocol described in this report. Preliminary examination of core provided by this report and the archive CD-ROMs should assist researchers in narrowing their choices when requesting core splits.
Framework for the quality assurance of 'omics technologies considering GLP requirements.
Kauffmann, Hans-Martin; Kamp, Hennicke; Fuchs, Regine; Chorley, Brian N; Deferme, Lize; Ebbels, Timothy; Hackermüller, Jörg; Perdichizzi, Stefania; Poole, Alan; Sauer, Ursula G; Tollefsen, Knut E; Tralau, Tewes; Yauk, Carole; van Ravenzwaay, Ben
2017-12-01
'Omics technologies are gaining importance to support regulatory toxicity studies. Prerequisites for performing 'omics studies considering GLP principles were discussed at the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) workshop "Applying 'omics technologies in Chemical Risk Assessment". A GLP environment comprises a standard operating procedure system, proper pre-planning and documentation, and inspections of independent quality assurance staff. To prevent uncontrolled data changes, the raw data obtained in the respective 'omics data recording systems have to be specifically defined. Further requirements include transparent and reproducible data processing steps, and safe data storage and archiving procedures. The software for data recording and processing should be validated, and data changes should be traceable or disabled. GLP-compliant quality assurance of 'omics technologies appears feasible for many GLP requirements. However, challenges include (i) defining, storing, and archiving the raw data; (ii) transparent descriptions of data processing steps; (iii) software validation; and (iv) ensuring complete reproducibility of final results with respect to raw data. Nevertheless, 'omics studies can be supported by quality measures (e.g., GLP principles) to ensure quality control, reproducibility and traceability of experiments. This enables regulators to use 'omics data in a fit-for-purpose context, which enhances their applicability for risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
GenderMedDB: an interactive database of sex and gender-specific medical literature.
Oertelt-Prigione, Sabine; Gohlke, Björn-Oliver; Dunkel, Mathias; Preissner, Robert; Regitz-Zagrosek, Vera
2014-01-01
Searches for sex and gender-specific publications are complicated by the absence of a specific algorithm within search engines and by the lack of adequate archives to collect the retrieved results. We previously addressed this issue by initiating the first systematic archive of medical literature containing sex and/or gender-specific analyses. This initial collection has now been greatly enlarged and re-organized as a free user-friendly database with multiple functions: GenderMedDB (http://gendermeddb.charite.de). GenderMedDB retrieves the included publications from the PubMed database. Manuscripts containing sex and/or gender-specific analysis are continuously screened and the relevant findings organized systematically into disciplines and diseases. Publications are furthermore classified by research type, subject and participant numbers. More than 11,000 abstracts are currently included in the database, after screening more than 40,000 publications. The main functions of the database include searches by publication data or content analysis based on pre-defined classifications. In addition, registrants are enabled to upload relevant publications, access descriptive publication statistics and interact in an open user forum. Overall, GenderMedDB offers the advantages of a discipline-specific search engine as well as the functions of a participative tool for the gender medicine community.
Robertson, Carrie A.; Knight, Raymond A.
2014-01-01
Sexual sadism and psychopathy have been theoretically, clinically, and empirically linked to violence. Although both constructs are linked to predatory violence, few studies have sought to explore the covariation of the two constructs, and even fewer have sought to conceptualize the similarities of violence prediction in each. The current study considered all four Psychopathy Checklist-Revised (PCL-R) facets and employed well-defined, validated measures of sadism to elucidate the relation between sadism and psychopathy, as well as to determine the role of each in the prediction of non-sexual violence and sexual crime behaviors. Study 1 assessed 314 adult, male sex offenders using archival ratings, as well as the self-report Multidimensional Inventory of Development, Sex, and Aggression (the MIDSA). Study 2 used archival ratings to assess 599 adult, male sex offenders. Exploratory and confirmatory factor analyses of crime scene descriptions yielded four sexual crime behavior factors: Violence, Physical Control, Sexual Behavior, and Paraphilic. Sadism and psychopathy covaried, but were not coextensive; sadism correlated with Total PCL-R, Facet 1, and Facet 4 scores. The constructs predicted all non-sexual violence measures, but predicted different sexual crime behavior factors. The PCL-R facets collectively predicted the Violence and Paraphilic factors, whereas sadism only predicted the Violence factor. PMID:24019144
Kepler Data Validation Time Series File: Description of File Format and Content
NASA Technical Reports Server (NTRS)
Mullally, Susan E.
2016-01-01
The Kepler space mission searches its time series data for periodic, transit-like signatures. The ephemerides of these events, called Threshold Crossing Events (TCEs), are reported in the TCE tables at the NASA Exoplanet Archive (NExScI). Those TCEs are then further evaluated to create planet candidates and populate the Kepler Objects of Interest (KOI) table, also hosted at the Exoplanet Archive. The search, evaluation and export of TCEs is performed by two pipeline modules, TPS (Transit Planet Search) and DV (Data Validation). TPS searches for the strongest, believable signal and then sends that information to DV to fit a transit model, compute various statistics, and remove the transit events so that the light curve can be searched for other TCEs. More on how this search is done and on the creation of the TCE table can be found in Tenenbaum et al. (2012), Seader et al. (2015), Jenkins (2002). For each star with at least one TCE, the pipeline exports a file that contains the light curves used by TPS and DV to find and evaluate the TCE(s). This document describes the content of these DV time series files, and this introduction provides a bit of context for how the data in these files are used by the pipeline.
Tanley, Simon W M; Schreurs, Antoine M M; Helliwell, John R; Kroon-Batenburg, Loes M J
2013-02-01
The International Union of Crystallography has for many years been advocating archiving of raw data to accompany structural papers. Recently, it initiated the formation of the Diffraction Data Deposition Working Group with the aim of developing standards for the representation of these data. A means of studying this issue is to submit exemplar publications with associated raw data and metadata. A recent study on the effects of dimethyl sulfoxide on the binding of cisplatin and carboplatin to histidine in 11 different lysozyme crystals from two diffractometers led to an investigation of the possible effects of the equipment and X-ray diffraction data processing software on the calculated occupancies and B factors of the bound Pt compounds. 35.3 GB of data were transferred from Manchester to Utrecht to be processed with EVAL. A systematic comparison shows that the largest differences in the occupancies and B factors of the bound Pt compounds are due to the software, but the equipment also has a noticeable effect. A detailed description of and discussion on the availability of metadata is given. By making these raw diffraction data sets available via a local depository, it is possible for the diffraction community to make their own evaluation as they may wish.
Forde, Arnell S.; Dadisman, Shawn V.; Wiese, Dana S.; Phelps, Daniel C.
2013-01-01
In July (19 - 26) and November (17 - 18) of 1999, the USGS, in cooperation with the Florida Geological Survey (FGS), conducted two geophysical surveys in: (1) the Atlantic Ocean offshore of Florida's east coast from Orchid to Jupiter, FL, and (2) the Gulf of Mexico offshore of Venice, FL. This report serves as an archive of unprocessed digital boomer subbottom data, trackline maps, navigation files, GIS files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Filtered and gained (showing a relative increase in signal amplitude) digital images of the subbottom profiles are also provided. The USGS St. Petersburg Coastal and Marine Science Center (SPCMSC) assigns a unique identifier to each cruise or field activity. For example, identifiers 99FGS01 and 99FGS02 refer to field data collected in 1999 for cooperative work with the FGS. The numbers 01 and 02 indicate the data were collected during the first and second field activities for that project in that calendar year. Refer to http://walrus.wr.usgs.gov/infobank/programs/html/definition/activity.html for a detailed description of the method used to assign the field activity identification (ID).
The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.
Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank T
2015-06-01
Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.
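The principle described above, that glyphs represent model entities rather than arbitrary drawings, shows up in the encoding as id references from layout elements back to SBML species. A rough sketch of that idea using Python's ElementTree (element and attribute names follow the Layout package's conventions, but this is a simplified illustration, not a schema-complete SBML document):

```python
import xml.etree.ElementTree as ET

# Namespace URIs follow SBML Level 3 conventions; the document structure
# below is abbreviated for illustration.
SBML = "http://www.sbml.org/sbml/level3/version1/core"
LAYOUT = "http://www.sbml.org/sbml/level3/version1/layout/version1"
ET.register_namespace("", SBML)
ET.register_namespace("layout", LAYOUT)

sbml = ET.Element(f"{{{SBML}}}sbml", {"level": "3", "version": "1"})
model = ET.SubElement(sbml, f"{{{SBML}}}model", {"id": "m1"})
species_list = ET.SubElement(model, f"{{{SBML}}}listOfSpecies")
ET.SubElement(species_list, f"{{{SBML}}}species", {"id": "glucose"})

# The glyph points at the model species by id instead of duplicating it,
# so the diagram stays a representation of the underlying SBML entity.
layouts = ET.SubElement(model, f"{{{LAYOUT}}}listOfLayouts")
layout = ET.SubElement(layouts, f"{{{LAYOUT}}}layout", {"id": "layout1"})
glyphs = ET.SubElement(layout, f"{{{LAYOUT}}}listOfSpeciesGlyphs")
glyph = ET.SubElement(glyphs, f"{{{LAYOUT}}}speciesGlyph",
                      {"id": "glyph_glucose", "species": "glucose"})
bbox = ET.SubElement(glyph, f"{{{LAYOUT}}}boundingBox")
ET.SubElement(bbox, f"{{{LAYOUT}}}position", {"x": "10", "y": "20"})
ET.SubElement(bbox, f"{{{LAYOUT}}}dimensions", {"width": "40", "height": "20"})

doc = ET.tostring(sbml, encoding="unicode")
```

This is also why SVG or GraphML cannot substitute for the Layout package: a generic drawing format has no way to carry the `species="glucose"` back-reference into the model.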
Zúñiga, Miguel Á; Mejía, Rosa E; Sánchez, Ana L; Sosa-Ochoa, Wilfredo H; Fontecha, Gustavo A
2015-08-07
The frequency of deficient variants of glucose-6-phosphate dehydrogenase (G6PDd) is particularly high in areas where malaria is endemic. The administration of antirelapse drugs, such as primaquine, has the potential to trigger an oxidative event in G6PD-deficient individuals. According to Honduras' national scheme, malaria treatment requires the administration of chloroquine and primaquine for both Plasmodium vivax and Plasmodium falciparum infections. The present study aimed at investigating for the first time in Honduras the frequency of the two most common G6PDd variants. This was a descriptive study utilizing 398 archival DNA samples of patients who had been diagnosed with malaria due to P. vivax, P. falciparum, or both. The two most common allelic variants of G6PD, A+ (376G) and A- (376G/202A), were assessed by two molecular methods (PCR-RFLP and a commercial kit). The overall frequency of G6PD-deficient genotypes was 16.08%. The frequency of the "African" genotype A- (Class III) was 11.9% (4.1% A- hemizygous males; 1.5% homozygous A- females; and 6.3% heterozygous A- females). A high frequency of G6PDd alleles was observed in samples from malaria patients residing in endemic regions of Northern Honduras. One case of the Santamaria mutation (376G/542T) was detected. Compared to other studies in the Americas, as well as to data from predictive models, the present study identified a higher-than-expected frequency of genotype A- in Honduras. Considering that the national standard of malaria treatment in the country includes primaquine, further research is necessary to ascertain the risk of primaquine-triggered haemolytic reactions in sectors of the population more likely to carry G6PD mutations. Additionally, consideration should be given to utilizing point-of-care technologies to detect this genetic disorder prior to administration of 8-aminoquinoline drugs, whether primaquine or any new drug that becomes available.
The globular cluster system of NGC 1316. II. The extraordinary object SH2
NASA Astrophysics Data System (ADS)
Richtler, T.; Kumar, B.; Bassino, L. P.; Dirsch, B.; Romanowsky, A. J.
2012-07-01
Context. SH2 has been described as an isolated HII-region, located about 6.5' south of the nucleus of NGC 1316 (Fornax A), a merger remnant in the outskirts of the Fornax cluster of galaxies. Aims: We give a first, preliminary description of the stellar content and environment of this remarkable object. Methods: We used photometric data in the Washington system and HST photometry from the Hubble Legacy Archive for a morphological description and preliminary aperture photometry. Low-resolution spectroscopy provides radial velocities of the brightest star cluster in SH2 and a nearby intermediate-age cluster. Results: SH2 is not a normal HII-region, ionized by very young stars. It contains a multitude of star clusters with ages of approximately 10^8 yr. A ring-like morphology is striking. SH2 seems to be connected to an intermediate-age massive globular cluster with a similar radial velocity, which itself is the main object of a group of fainter clusters. Metallicity estimates from emission lines remain ambiguous. Conclusions: The present data do not yet allow firm conclusions about the nature or origin of SH2. It might be a dwarf galaxy that has experienced a burst of extremely clustered star formation. We may witness how globular clusters are donated to a parent galaxy. Based on observations taken at the European Southern Observatory, Cerro Paranal, Chile, under the programmes 082.B-0680, and on observations taken at the Interamerican Observatory, Cerro Tololo, Chile. Furthermore based on observations made with the NASA/ESA Hubble Space Telescope (HST, PI: A. Sandage, Prop.ID: 7504), and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
VizieR Online Data Catalog: Gaia DR1 (Gaia Collaboration, 2016)
NASA Astrophysics Data System (ADS)
Gaia Collaboration
2016-06-01
Gaia DR1 is based on observations collected between 25 July 2014 and 16 September 2015. Gaia DR1 contains positions (RA,DE) and G magnitudes for all sources with acceptable formal standard errors on positions. Positions and individual uncertainties are computed using a generic prior and Bayes' rule (detailed description in "Gaia astrometry for stars with too few observations. A Bayesian approach", Michalik et al., 2015A&A...583A..68M). The five-parameter astrometric solution - positions, parallaxes, and proper motions - for stars in common between the Tycho-2 Catalogue and Gaia is contained in Gaia DR1. This part of Gaia DR1 is based on the Tycho-Gaia Astrometric Solution (paper with detailed description (Michalik et al., 2015A&A...574A.115M); paper describing theory and background (Michalik et al., 2014A&A...571A..85M); paper describing quasar extension (Michalik & Lindegren, 2016A&A...586A..26M)). At the beginning of the routine phase, for a period of 4 weeks, a special scanning mode repeatedly covering the ecliptic poles on every spin was executed for calibration purposes. Photometric data of selected RR Lyrae and Cepheid variable stars based on these high-cadence measurements are contained in Gaia DR1. Positions (RA,DE) and G magnitudes for 2152 ICRF quasars are also included (F. Mignard et al., 2016, A&A, in press.). The Gaia Archive DR1 data are available at archives.esac.esa.int/gaia. Tgas and Gaia Sources can be downloaded as VOTables, FITS or CSV at http://cdn.gea.esac.esa.int/Gaia/. If you use public Gaia DR1 data in your paper, please take note of our guide on how to acknowledge and cite Gaia DR1: http://gaia.esac.esa.int/documentation/GDR1/Miscellaneous/seccreditandcitationinstructions.html (9 data files).
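The archive described above is queryable programmatically. As a sketch only, the snippet below assembles (but does not send) a synchronous TAP request URL for a small ADQL query; the endpoint path and column names are assumptions drawn from the public Gaia archive documentation, not from this record:

```python
from urllib.parse import urlencode

# Build a TAP sync-query URL against the Gaia archive (assumed endpoint;
# no network request is made here).
adql = ("SELECT TOP 10 source_id, ra, dec, phot_g_mean_mag "
        "FROM gaiadr1.gaia_source ORDER BY phot_g_mean_mag")
params = {"REQUEST": "doQuery", "LANG": "ADQL", "FORMAT": "csv", "QUERY": adql}
url = "https://gea.esac.esa.int/tap-server/tap/sync?" + urlencode(params)
print(url)
```

Fetching the URL with any HTTP client would return the ten brightest DR1 sources as CSV, assuming the endpoint above is current.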
Scientific Applications of two U.S. Antarctic Program Projects at NSIDC
NASA Astrophysics Data System (ADS)
Scharfen, G. R.; Bauer, R. J.
2001-12-01
The National Snow and Ice Data Center maintains two Antarctic science data management programs supporting both the efforts of Principal Investigators (PIs), and the science that is funded by the NSF Office of Polar Programs. These programs directly relate to the OPP "Guidelines and Award Conditions for Scientific Data", which identify the conditions for awards and responsibilities of PIs regarding the archival of data, and submission of metadata, resulting from their NSF OPP grants. The U.S. Antarctic Data Coordination Center (USADCC) is funded by NSF to assist PIs as they meet these requirements, and to provide a U.S. focal point for the Antarctic Master Directory, a web-based searchable directory of Antarctic scientific data. The USADCC offers access to free, easy-to-use online tools that PIs can use to create the data descriptions that the NSF data policy requires. We provide advice to PIs on how to meet the data policy requirements, and can answer specific questions on related issues. Scientists can access data set descriptions submitted to the Antarctic Master Directory, by thousands of scientists around the world, from the USADCC web pages. The USADCC website is at http://nsidc.org/NSF/USADCC/. The Antarctic Glaciological Data Center (AGDC) is funded by NSF to archive and distribute data collected by the NSF Antarctic Glaciology Program and related cryospheric investigations. The AGDC contains data sets collected by individual investigators on specific grants, and compiled products assembled from many different PI data sets, published literature, and other sources. Data sets are available electronically and include access to the data, plus useful documentation, citation information about the PI(s), locator maps, derived images and references. The AGDC website is at http://nsidc.org/NSF/AGDC/.
The utility of both of these projects for scientists is illustrated by a typical user-driven case study to research, obtain and use Antarctic data for a science application.
NASA Astrophysics Data System (ADS)
Ingram, Sandra W.
This quantitative comparative descriptive study involved analyzing archival data from end-of-course (EOC) test scores in biology of English language learners (ELLs) taught or not taught using the sheltered instruction observation protocol (SIOP) model. The study includes descriptions and explanations of the benefits of the SIOP model to ELLs, especially in content area subjects such as biology. Researchers have shown that ELLs in high school lag behind their peers in academic achievement in content area subjects. Much of the research on the SIOP model took place in elementary and middle school, and more research was necessary at the high school level. This study involved analyzing student records from archival data to determine whether the SIOP model had an effect on the EOC test scores of ELLs taught or not taught using it. The sample consisted of 527 Hispanic students (283 females and 244 males) from Grades 9-12. An independent-samples t-test determined whether a significant difference existed in the mean EOC test scores of ELLs taught using the SIOP model as opposed to ELLs not taught using the SIOP model. The results indicated that a significant difference existed between EOC test scores of ELLs taught using the SIOP model and ELLs not taught using the SIOP model (p = .02). A regression analysis indicated a significant difference existed in the academic performance of ELLs taught using the SIOP model in high school science, controlling for free and reduced-price lunch (p = .001) in predicting passing scores on the EOC test in biology at the school level. The data analyzed for free and reduced-price lunch together with SIOP data indicated that both together were not significant (p = .175) for predicting passing scores on the EOC test in high school biology. Future researchers should repeat the study with student-level data as opposed to school-level data, and data should span at least three years.
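The comparison above rests on an independent two-sample t statistic. As a sketch only, with invented score samples (not the study's data), Welch's unequal-variance form can be computed in pure Python:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = variance(a), variance(b)          # sample variances (n-1)
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical EOC score samples, illustrative only.
siop    = [78, 82, 75, 88, 80, 79, 85]
no_siop = [70, 74, 68, 77, 72, 69, 75]
t = welch_t(siop, no_siop)
print(round(t, 2))
```

A t value this far from zero would, for these sample sizes, correspond to a small p value, mirroring the kind of significant group difference the study reports.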
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ(x), whose probability distribution P[ρ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model-independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
A model for enhancing Internet medical document retrieval with "medical core metadata".
Malet, G; Munoz, F; Appleyard, R; Hersh, W
1999-01-01
Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
A Model for Enhancing Internet Medical Document Retrieval with “Medical Core Metadata”
Malet, Gary; Munoz, Felix; Appleyard, Richard; Hersh, William
1999-01-01
Objective: Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. Design: The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and Medline-type content descriptions. Results: The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. Conclusions: The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines. PMID:10094069
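The proposal above amounts to embedding Dublin Core elements, qualified with a MeSH scheme, into documents' HTML heads. A minimal sketch of that tagging convention (element names follow the common `DC.*` meta-tag style; the title, author, and heading values are invented):

```python
def dc_meta(name, content, scheme=None):
    """Render one Dublin Core element as an HTML <meta> tag."""
    attrs = 'name="%s"' % name
    if scheme:
        attrs += ' scheme="%s"' % scheme   # e.g. mark the vocabulary as MeSH
    return '<meta %s content="%s">' % (attrs, content)

tags = [
    dc_meta("DC.title", "Management of Type 2 Diabetes"),
    dc_meta("DC.creator", "Example Author"),
    dc_meta("DC.subject", "Diabetes Mellitus, Type 2", scheme="MeSH"),
]
print("\n".join(tags))
```

A search engine aware of the scheme attribute could then restrict a query to documents indexed under a controlled MeSH heading rather than free-text keywords, which is the disciplined access the authors are after.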
SAT Encoding of Unification in EL
NASA Astrophysics Data System (ADS)
Baader, Franz; Morawska, Barbara
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.
SED-ED, a workflow editor for computational biology experiments written in SED-ML.
Adams, Richard R
2012-04-15
The simulation experiment description markup language (SED-ML) is a new community data standard to encode computational biology experiments in a computer-readable XML format. Its widespread adoption will require the development of software support to work with SED-ML files. Here, we describe a software tool, SED-ED, to view, edit, validate and annotate SED-ML documents while shielding end-users from the underlying XML representation. SED-ED supports modellers who wish to create, understand and further develop a simulation description provided in SED-ML format. SED-ED is available as a standalone Java application, as an Eclipse plug-in and as an SBSI (www.sbsi.ed.ac.uk) plug-in, all under an MIT open-source license. Source code is at https://sed-ed-sedmleditor.googlecode.com/svn. The application itself is available from https://sourceforge.net/projects/jlibsedml/files/SED-ED/.
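The XML representation SED-ED shields its users from has a small core: a model reference, a simulation, and a task tying the two together. A rough sketch of that skeleton with Python's standard XML tools (element and attribute names follow SED-ML loosely; the ids and file name are invented):

```python
import xml.etree.ElementTree as ET

# Minimal SED-ML-like skeleton: model + simulation + task linking them.
root = ET.Element("sedML", version="1")
models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="m1",
              language="urn:sedml:language:sbml", source="model.xml")
sims = ET.SubElement(root, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="s1", initialTime="0",
              outputStartTime="0", outputEndTime="100", numberOfPoints="1000")
tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="t1", modelReference="m1",
              simulationReference="s1")

doc = ET.tostring(root, encoding="unicode")
print(doc)
```

An editor like SED-ED presents exactly these cross-references (task to model, task to simulation) as a workflow graph instead of raw XML.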
Moumene, M; Drissi, F; Croce, O; Djebbari, B; Robert, C; Angelakis, E; Benouareth, D E; Raoult, D; Merhej, V
2016-03-01
Using a polyphasic approach that combines proteomic analysis by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF), genomic data, and phenotypic characterization, we describe the features of Lactococcus garvieae strain M14, newly isolated from the fermented milk (known as raib) of an Algerian cow. The 2,188,835 bp genome sequence displays a metabolic capacity for acid fermentation that is very useful for industrial applications and encodes two bacteriocins responsible for its putative bioprotective properties.
2017-12-01
exhibited enhanced activation of the PI3K/AKT pathway compared to the same lines over-expressing the CA-enriched long (-L) variant PIK3CD-L (retains...demonstrate that FGFR3-S: i) encodes a more aggressive oncogenic signaling protein compared to CA-enriched FGFR3-L (retains exon 14) as defined by in vitro...into PCa cell lines for in vitro and in vivo investigations completed in Year 1 (see description below).
Refining image segmentation by polygon skeletonization
NASA Technical Reports Server (NTRS)
Clarke, Keith C.
1987-01-01
A skeletonization algorithm was encoded and applied to a test data set of land-use polygons taken from a USGS digital land use dataset at 1:250,000. The distance transform produced by this method was instrumental in the description of the shape, size, and level of generalization of the outlines of the polygons. A comparison of the topology of skeletons for forested wetlands and lakes indicated that some distinction based solely upon the shape properties of the areas is possible, and may be of use in an intelligent automated land cover classification system.
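The distance transform at the heart of the method above assigns every interior cell its distance to the nearest background cell; a skeleton can then be read off the ridge of local maxima. A toy sketch on a character grid, using a multi-source breadth-first search (the polygon and connectivity choice are illustrative, not the USGS algorithm itself):

```python
from collections import deque

def distance_transform(grid):
    """4-connected distance of each interior cell ('#') from the nearest
    background cell ('.'), computed by multi-source BFS."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '.':
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

poly = ["......",
        ".####.",
        ".####.",
        ".####.",
        "......"]
d = distance_transform(poly)
print(d[2][2])  # a deepest interior cell
```

Cells where the distance is locally maximal form the medial axis, whose branching topology is the shape descriptor the abstract uses to distinguish, say, lakes from forested wetlands.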
Molecular Cloning and Function of FAS/APO1 Associated Protein in Breast Cancer.
1996-06-01
Ariyama T, Abe T, Druck T, Ohta M, Huebner K, Yanagisawa J, Reed JC, Sato T: PTPN13, a Fas-associated protein tyrosine phosphatase, is located on...20. Yang, Q., and Tonks, N. K. (1991). Isolation of a cDNA clone encoding a human protein-tyrosine phosphatase with homology 7. Huebner, K., Druck, T...Acad. Sci. U.S.A. 91, 7477 (1994). Res. 53, 1945 (1993). (Fig. 3D). In contrast to Jurkat cells which 13. The original description of PTP-BAS (12
McCrea, Simon M
2007-06-18
Naming and localization of individual body part words to a high-resolution line drawing of a full human figure was tested in a mixed-sex sample of nine right-handed subjects. Activation within the superior medial left parietal cortex and bilateral dorsolateral cortex was consistent with involvement of the body schema, which is a dynamic postural self-representation coding and combining sensory afference and motor efference inputs/outputs that is automatic and nonconscious. Additional activation of the left rostral occipitotemporal cortex was consistent with involvement of the neural correlates of the verbalizable body structural description that encodes semantic and categorical representations of animate objects such as full human figures. The results point to a highly distributed cortical representation for the encoding and manipulation of body part information and highlight the need for the incorporation of more ecologically valid measures of body schema coding in future functional neuroimaging studies.
Metadata management and semantics in microarray repositories.
Kocabaş, F; Can, T; Baykal, N
2011-12-01
The number of microarray and other high-throughput experiments on primary repositories keeps increasing, as do the size and complexity of the results in response to biomedical investigations. Initiatives have been started on standardization of content, object model, exchange format and ontology. However, there are backlogs and an inability to exchange data between microarray repositories, which indicate that there is a great need for a standard format and data management. We have introduced a metadata framework that includes a metadata card and semantic nets that make experimental results visible, understandable and usable. These are encoded in syntax encoding schemes and represented in RDF (Resource Description Framework), can be integrated with other metadata cards and semantic nets, and can be exchanged, shared and queried. We demonstrated the performance and potential benefits through a case study on a selected microarray repository. We concluded that the backlogs can be reduced and that exchange of information and asking of knowledge discovery questions can become possible with the use of this metadata framework.
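An RDF representation ultimately flattens a metadata card to subject-predicate-object triples that can be merged and queried across repositories. A minimal sketch of that shape (the experiment identifier and property names are invented, and a real system would use rdflib or a triple store rather than plain lists):

```python
# A metadata card flattened to RDF-style triples.
triples = [
    ("expt:GSE0001", "dc:title", "Hypoxia response microarray"),
    ("expt:GSE0001", "dc:creator", "Example Lab"),
    ("expt:GSE0001", "mged:organism", "Homo sapiens"),
]

def objects(triples, subject, predicate):
    """All objects matching a (subject, predicate) pattern -- the simplest
    possible triple-pattern query."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, "expt:GSE0001", "mged:organism"))
```

Because two repositories' cards reduce to the same triple form, merging them is list concatenation, and the same pattern query spans both; that is the exchangeability the framework aims for.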
Measurement-based quantum communication with resource states generated by entanglement purification
NASA Astrophysics Data System (ADS)
Wallnöfer, J.; Dür, W.
2017-01-01
We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.
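The bit-flip code mentioned above has a simple classical core: one logical bit spread over three physical bits, with independent local flips corrected by majority vote. A sketch of that error model and decoder (the noise rate and trial count are illustrative; the quantum measurement-based implementation is of course richer than this classical caricature):

```python
import random

def encode(bit):
    """Repetition encoding: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def noisy(bits, p, rng):
    """Local error model: each bit flips independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one bit flipped."""
    return int(sum(bits) >= 2)

rng = random.Random(1)
p, trials = 0.1, 10_000
errors = sum(decode(noisy(encode(0), p, rng)) != 0 for _ in range(trials))
rate = errors / trials
print(rate)  # close to 3p^2(1-p) + p^3 = 0.028, well below the raw rate p = 0.1
```

The suppression from p to order p^2 is exactly why a local (independent) error model on the resource state, as the abstract argues for GHZ states, is the right setting for threshold calculations.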
Permutation coding technique for image recognition systems.
Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel
2006-11-01
A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which allows it to take into account not only the detected features but also the position of each feature in the image, and makes the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.
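The key trick is encoding a feature's position only coarsely, so that nearby positions map to the same code. A toy sketch of one way to realize that: each feature has a random base code, cyclically permuted by an amount set by its coarse grid cell, so a small displacement within a cell leaves the code unchanged (the real permutation coding scheme is considerably richer; the cell size, code length, and shift rule here are invented):

```python
import random

CODE_LEN, CELL = 16, 4

def base_code(feature):
    """Random binary code, deterministic per feature label."""
    rng = random.Random(feature)
    return tuple(rng.randint(0, 1) for _ in range(CODE_LEN))

def encode(feature, x, y):
    """Permute the base code by the feature's coarse grid cell index."""
    shift = (x // CELL + 3 * (y // CELL)) % CODE_LEN
    code = base_code(feature)
    return code[shift:] + code[:shift]  # cyclic permutation

# A small displacement inside the same cell yields an identical code:
print(encode("edge", 5, 9) == encode("edge", 6, 10))
```

Summing or concatenating such codes over all detected features yields an image description that tolerates small shifts, which is then fed to the classifier.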
Data archiving for animal cognition research: report of an NIMH workshop.
Kurtzman, Howard S; Church, Russell M; Crystal, Jonathon D
2002-11-01
In July 2001, the National Institute of Mental Health sponsored a workshop titled "Data Archiving for Animal Cognition Research." Participants included scientists as well as experts in archiving, publishing, policy, and law. As is described in this report, the workshop resulted in a set of conclusions and recommendations concerning (A) the impact of data archiving on research, (B) how to incorporate data archiving into research practice, (C) contents of data archives, (D) technical and archival standards, and (E) organizational, financing, and policy issues. The animal cognition research community is encouraged to begin now to establish archives, deposit data and related materials, and make use of archived materials in new scientific projects.
Scheme for Quantum Computing Immune to Decoherence
NASA Technical Reports Server (NTRS)
Williams, Colin; Vatan, Farrokh
2008-01-01
A constructive scheme has been devised to enable mapping of any quantum computation into a spintronic circuit in which the computation is encoded in a basis that is, in principle, immune to quantum decoherence. The scheme is implemented by an algorithm that utilizes multiple physical spins to encode each logical bit in such a way that collective errors affecting all the physical spins do not disturb the logical bit. The scheme is expected to be of use to experimenters working on spintronic implementations of quantum logic. Spintronic computing devices use quantum-mechanical spins (typically, electron spins) to encode logical bits. Bits thus encoded (denoted qubits) are potentially susceptible to errors caused by noise and decoherence. The traditional model of quantum computation is based partly on the assumption that each qubit is implemented by use of a single two-state quantum system, such as an electron or other spin-1/2 particle. It can be surprisingly difficult to achieve certain gate operations (most notably, those of arbitrary 1-qubit gates) in spintronic hardware according to this model. However, ironically, certain 2-qubit interactions (in particular, spin-spin exchange interactions) can be achieved relatively easily in spintronic hardware. Therefore, it would be fortunate if it were possible to implement any 1-qubit gate by use of a spin-spin exchange interaction. While such a direct representation is not possible, it is possible to achieve an arbitrary 1-qubit gate indirectly by means of a sequence of four spin-spin exchange interactions, which could be implemented by use of four exchange gates. Accordingly, the present scheme provides for mapping any 1-qubit gate in the logical basis into an equivalent sequence of at most four spin-spin exchange interactions in the physical (encoded) basis.
The complexity of the mathematical derivation of the scheme from basic quantum principles precludes a description within this article; it must suffice to report that the derivation provides explicit constructions for finding the exchange couplings in the physical basis needed to implement any arbitrary 1-qubit gate. These constructions lead to spintronic encodings of quantum logic that are more efficient than those of a previously published scheme that utilizes a universal but fixed set of gates.
Evolving the Living With a Star Data System Definition
NASA Astrophysics Data System (ADS)
Otranto, J. F.; Dijoseph, M.
2003-12-01
NASA's Living With a Star (LWS) Program is a space weather-focused and applications-driven research program. The LWS Program is soliciting input from the solar, space physics, space weather, and climate science communities to develop a system that enables access to science data associated with these disciplines, and advances the development of discipline and interdisciplinary findings. The LWS Program will implement a data system that builds upon the existing and planned data capture, processing, and storage components put in place by individual spacecraft missions and also inter-project data management systems, including active and deep archives, and multi-mission data repositories. It is technically feasible for the LWS Program to integrate data from a broad set of resources, assuming they are either publicly accessible or allow access by permission. The LWS Program data system will work in coordination with spacecraft mission data systems and science data repositories, integrating their holdings using a common metadata representation. This common representation relies on a robust metadata definition that provides journalistic and technical data descriptions, plus linkages to supporting data products and tools. The LWS Program intends to become an enabling resource to PIs, interdisciplinary scientists, researchers, and students facilitating both access to a broad collection of science data, as well as the necessary supporting components to understand and make productive use of these data. For the LWS Program to represent science data that are physically distributed across various ground system elements, information will be collected about these distributed data products through a series of LWS Program-created agents. These agents will be customized to interface or interact with each one of these data systems, collect information, and forward any new metadata records to a LWS Program-developed metadata library. 
A populated LWS metadata library will function as a single point-of-contact that serves the entire science community as a first stop for data availability, whether or not science data are physically stored in an LWS-operated repository. Further, this metadata library will provide the user access to information for understanding these data including descriptions of the associated spacecraft and instrument, data format, calibration and operations issues, links to ancillary and correlative data products, links to processing tools and models associated with these data, and any corresponding findings produced using these data. The LWS may also support an active archive for solar, space physics, space weather, and climate data when these data would otherwise be discarded or archived off-line. This archive could potentially serve also as a data storage backup facility for LWS missions. The plan for the LWS Program metadata library is developed based upon input received from the solar and geospace science communities; the library's architecture is based on existing systems developed for serving science metadata. The LWS Program continues to seek constructive input from the science community, examples of both successes and failures in dealing with science data systems, and insights regarding the obstacles between the current state-of-the-practice and this vision for the LWS Program metadata library.
Cognitive retraining for organizational impairment in obsessive-compulsive disorder.
Buhlmann, Ulrike; Deckersbach, Thilo; Engelhard, Iris; Cook, Laura M; Rauch, Scott L; Kathmann, Norbert; Wilhelm, Sabine; Savage, Cary R
2006-11-15
Individuals with obsessive-compulsive disorder (OCD) have difficulties in organizing information during encoding associated with subsequent memory impairments. This study was designed to investigate whether impairments in organization in individuals with OCD can be alleviated with cognitive training. Thirty-five OCD subjects and 36 controls copied and recalled the Rey-Osterrieth Complex Figure Test (RCFT) [Osterrieth, P.A., 1944. Le test de copie d'une figure complexe: Contribution a l'étude de la perception et de la memoire (The test of copying a complex figure: A contribution to the study of perception and memory). Archive de Psychologie 30, 286-350.] before being randomly assigned to a training or non-training condition. The training condition was designed to improve the ability to organize complex visuospatial information in a meaningful way. The intervention phase was followed by another copy and recall trial of the RCFT. Both OCD and control subjects who underwent training improved more in organization and memory than subjects who did not receive organizational training, providing evidence that the training procedure was effective. OCD subjects improved more in organization during encoding than control subjects, irrespective of whether or not they had received training. This suggests that organization impairment in OCD affects primarily the ability to spontaneously utilize strategies when faced with complex, ambiguous information but that the ability to implement such strategies when provided with additional trials is preserved. These findings support a distinction in OCD between failure to utilize a strategy and incapacity to implement a strategy.
Extending the XNAT archive tool for image and analysis management in ophthalmology research
NASA Astrophysics Data System (ADS)
Wahle, Andreas; Lee, Kyungmoo; Harding, Adam T.; Garvin, Mona K.; Niemeijer, Meindert; Sonka, Milan; Abràmoff, Michael D.
2013-03-01
In ophthalmology, various modalities and tests are utilized to obtain vital information on the eye's structure and function. For example, optical coherence tomography (OCT) is utilized to diagnose, screen, and aid treatment of eye diseases like macular degeneration or glaucoma. Such data are complemented by photographic retinal fundus images and functional tests on the visual field. DICOM isn't widely used yet, though, and frequently images are encoded in proprietary formats. The eXtensible Neuroimaging Archive Tool (XNAT) is an open-source NIH-funded framework for research PACS and is in use at the University of Iowa for neurological research applications. Its use for ophthalmology was hence desirable but posed new challenges due to data types thus far not considered and the lack of standardized formats. We developed custom tools for data types not natively recognized by XNAT itself using XNAT's low-level REST API. Vendor-provided tools can be included as necessary to convert proprietary data sets into valid DICOM. Clients can access the data in a standardized format while still retaining the original format if needed by specific analysis tools. With respective project-specific permissions, results like segmentations or quantitative evaluations can be stored as additional resources to previously uploaded datasets. Applications can use our abstract-level Python or C/C++ API to communicate with the XNAT instance. This paper describes concepts and details of the designed upload script templates, which can be customized to the needs of specific projects, and the novel client-side communication API which allows integration into new or existing research applications.
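The custom upload tools described above ultimately address resources by their place in XNAT's project/subject/experiment hierarchy. A sketch of assembling such a REST resource path for attaching a segmentation result to an existing session; the host, project, and resource names are hypothetical, and the path layout follows XNAT's REST API only informally:

```python
from urllib.parse import quote

def resource_url(base, project, subject, session, resource, filename):
    """Build an XNAT-style REST path for a file attached to a session."""
    parts = ["data", "projects", project, "subjects", subject,
             "experiments", session, "resources", resource, "files", filename]
    return base.rstrip("/") + "/" + "/".join(quote(p) for p in parts)

url = resource_url("https://xnat.example.edu", "OCT_STUDY", "SUBJ01",
                   "SESS01", "SEGMENTATION", "layers.xml")
print(url)
```

An upload script would PUT the file body to this URL with the user's project-level credentials; the same path read back with GET retrieves the stored analysis alongside the original dataset.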
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C. S.; Gaensler, B. M.; Feain, I. J., E-mail: craiga@physics.usyd.edu.au
We present a broadband polarization analysis of 36 discrete polarized radio sources over a very broad, densely sampled frequency band. Our sample was selected on the basis of polarization behavior apparent in narrowband archival data at 1.4 GHz: half the sample shows complicated frequency-dependent polarization behavior (i.e., Faraday complexity) at these frequencies, while half shows comparatively simple behavior (i.e., they appear Faraday simple). We re-observed the sample using the Australia Telescope Compact Array in full polarization, with 6 GHz of densely sampled frequency coverage spanning 1.3–10 GHz. We have devised a general polarization modeling technique that allows us to identify multiple polarized emission components in a source and to characterize their properties. We detect Faraday complex behavior in almost every source in our sample. Several sources exhibit particularly remarkable polarization behavior. By comparing our new and archival data, we have identified temporal variability in the broadband integrated polarization spectra of some sources. In a number of cases, the characteristics of the polarized emission components, including the range of Faraday depths over which they emit, their temporal variability, spectral index, and the linear extent of the source, allow us to argue that the spectropolarimetric data encode information about the magneto-ionic environment of the active galactic nuclei themselves. Furthermore, the data place direct constraints on the geometry and magneto-ionic structure of this material. We discuss the consequences of restricted frequency bands on the detection and interpretation of polarization structures, and the implications for upcoming spectropolarimetric surveys.
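The distinction between Faraday-simple and Faraday-complex behavior can be illustrated with the standard Faraday-thin model, p(λ²) = p₀ exp[2i(χ₀ + RM λ²)], in which a single rotating component has a constant fractional polarization while a sum of components does not. The sketch below is a minimal illustration of that idea, not the authors' actual modeling technique.

```python
import numpy as np

def faraday_simple(lam2, p0, chi0, rm):
    """Complex polarization of a single Faraday-thin component:
    p(lambda^2) = p0 * exp(2i * (chi0 + RM * lambda^2))."""
    return p0 * np.exp(2j * (chi0 + rm * lam2))

def faraday_complex(lam2, components):
    """Superposition of Faraday-thin components; the summed fractional
    polarization varies with frequency (Faraday complexity)."""
    return sum(faraday_simple(lam2, *c) for c in components)

# Wavelength-squared coverage roughly matching 1.3-10 GHz
freq = np.linspace(1.3e9, 10e9, 256)
lam2 = (3e8 / freq) ** 2

simple = faraday_simple(lam2, 0.05, 0.0, 50.0)            # |p| is constant
complexed = faraday_complex(lam2, [(0.05, 0.0, 50.0),
                                   (0.03, 0.5, -120.0)])  # |p| oscillates
```

Fitting such multi-component models to densely sampled broadband spectra is what allows individual emission components and their Faraday depths to be separated.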
Earth observation archive activities at DRA Farnborough
NASA Technical Reports Server (NTRS)
Palmer, M. D.; Williams, J. M.
1993-01-01
The Space Sector of the Defence Research Agency (DRA), Farnborough, has been actively involved in the acquisition and processing of Earth Observation data for over 15 years. During that time, an archive of over 20,000 items has been built up. This paper describes the major archive activities, including: operation and maintenance of the main DRA Archive, the development of a prototype Optical Disc Archive System (ODAS), the catalog systems in use at DRA, the UK Processing and Archive Facility for ERS-1 data, and future plans for archiving activities.
The Physics of the B Factories
NASA Astrophysics Data System (ADS)
Bevan, A. J.; Golob, B.; Mannel, Th.; Prell, S.; Yabsley, B. D.; Aihara, H.; Anulli, F.; Arnaud, N.; Aushev, T.; Beneke, M.; Beringer, J.; Bianchi, F.; Bigi, I. I.; Bona, M.; Brambilla, N.; Brodzicka, J.; Chang, P.; Charles, M. J.; Cheng, C. H.; Cheng, H.-Y.; Chistov, R.; Colangelo, P.; Coleman, J. P.; Drutskoy, A.; Druzhinin, V. P.; Eidelman, S.; Eigen, G.; Eisner, A. M.; Faccini, R.; Flood, K. T.; Gambino, P.; Gaz, A.; Gradl, W.; Hayashii, H.; Higuchi, T.; Hulsbergen, W. D.; Hurth, T.; Iijima, T.; Itoh, R.; Jackson, P. D.; Kass, R.; Kolomensky, Yu. G.; Kou, E.; Križan, P.; Kronfeld, A.; Kumano, S.; Kwon, Y. J.; Latham, T. E.; Leith, D. W. G. S.; Lüth, V.; Martinez-Vidal, F.; Meadows, B. T.; Mussa, R.; Nakao, M.; Nishida, S.; Ocariz, J.; Olsen, S. L.; Pakhlov, P.; Pakhlova, G.; Palano, A.; Pich, A.; Playfer, S.; Poluektov, A.; Porter, F. C.; Robertson, S. H.; Roney, J. M.; Roodman, A.; Sakai, Y.; Schwanda, C.; Schwartz, A. J.; Seidl, R.; Sekula, S. J.; Steinhauser, M.; Sumisawa, K.; Swanson, E. S.; Tackmann, F.; Trabelsi, K.; Uehara, S.; Uno, S.; van de Water, R.; Vasseur, G.; Verkerke, W.; Waldi, R.; Wang, M. Z.; Wilson, F. F.; Zupan, J.; Zupanc, A.; Adachi, I.; Albert, J.; Banerjee, Sw.; Bellis, M.; Ben-Haim, E.; Biassoni, P.; Cahn, R. N.; Cartaro, C.; Chauveau, J.; Chen, C.; Chiang, C. C.; Cowan, R.; Dalseno, J.; Davier, M.; Davies, C.; Dingfelder, J. C.; Echenard, B.; Epifanov, D.; Fulsom, B. G.; Gabareen, A. M.; Gary, J. W.; Godang, R.; Graham, M. T.; Hafner, A.; Hamilton, B.; Hartmann, T.; Hayasaka, K.; Hearty, C.; Iwasaki, Y.; Khodjamirian, A.; Kusaka, A.; Kuzmin, A.; Lafferty, G. D.; Lazzaro, A.; Li, J.; Lindemann, D.; Long, O.; Lusiani, A.; Marchiori, G.; Martinelli, M.; Miyabayashi, K.; Mizuk, R.; Mohanty, G. B.; Muller, D. R.; Nakazawa, H.; Ongmongkolkul, P.; Pacetti, S.; Palombo, F.; Pedlar, T. K.; Piilonen, L. E.; Pilloni, A.; Poireau, V.; Prothmann, K.; Pulliam, T.; Rama, M.; Ratcliff, B. N.; Roudeau, P.; Schrenk, S.; Schroeder, T.; Schubert, K. 
R.; Shen, C. P.; Shwartz, B.; Soffer, A.; Solodov, E. P.; Somov, A.; Starič, M.; Stracka, S.; Telnov, A. V.; Todyshev, K. Yu.; Tsuboyama, T.; Uglov, T.; Vinokurova, A.; Walsh, J. J.; Watanabe, Y.; Won, E.; Wormser, G.; Wright, D. H.; Ye, S.; Zhang, C. C.; Abachi, S.; Abashian, A.; Abe, K.; Abe, N.; Abe, R.; Abe, T.; Abrams, G. S.; Adam, I.; Adamczyk, K.; Adametz, A.; Adye, T.; Agarwal, A.; Ahmed, H.; Ahmed, M.; Ahmed, S.; Ahn, B. S.; Ahn, H. S.; Aitchison, I. J. R.; Akai, K.; Akar, S.; Akatsu, M.; Akemoto, M.; Akhmetshin, R.; Akre, R.; Alam, M. S.; Albert, J. N.; Aleksan, R.; Alexander, J. P.; Alimonti, G.; Allen, M. T.; Allison, J.; Allmendinger, T.; Alsmiller, J. R. G.; Altenburg, D.; Alwyn, K. E.; An, Q.; Anderson, J.; Andreassen, R.; Andreotti, D.; Andreotti, M.; Andress, J. C.; Angelini, C.; Anipko, D.; Anjomshoaa, A.; Anthony, P. L.; Antillon, E. A.; Antonioli, E.; Aoki, K.; Arguin, J. F.; Arinstein, K.; Arisaka, K.; Asai, K.; Asai, M.; Asano, Y.; Asgeirsson, D. J.; Asner, D. M.; Aso, T.; Aspinwall, M. L.; Aston, D.; Atmacan, H.; Aubert, B.; Aulchenko, V.; Ayad, R.; Azemoon, T.; Aziz, T.; Azzolini, V.; Azzopardi, D. E.; Baak, M. A.; Back, J. J.; Bagnasco, S.; Bahinipati, S.; Bailey, D. S.; Bailey, S.; Bailly, P.; van Bakel, N.; Bakich, A. M.; Bala, A.; Balagura, V.; Baldini-Ferroli, R.; Ban, Y.; Banas, E.; Band, H. R.; Banerjee, S.; Baracchini, E.; Barate, R.; Barberio, E.; Barbero, M.; Bard, D. J.; Barillari, T.; Barlow, N. R.; Barlow, R. J.; Barrett, M.; Bartel, W.; Bartelt, J.; Bartoldus, R.; Batignani, G.; Battaglia, M.; Bauer, J. M.; Bay, A.; Beaulieu, M.; Bechtle, P.; Beck, T. W.; Becker, J.; Becla, J.; Bedny, I.; Behari, S.; Behera, P. K.; Behn, E.; Behr, L.; Beigbeder, C.; Beiline, D.; Bell, R.; Bellini, F.; Bellodi, G.; Belous, K.; Benayoun, M.; Benelli, G.; Benitez, J. F.; Benkebil, M.; Berger, N.; Bernabeu, J.; Bernard, D.; Bernet, R.; Bernlochner, F. U.; Berryhill, J. W.; Bertsche, K.; Besson, P.; Best, D. 
S.; Bettarini, S.; Bettoni, D.; Bhardwaj, V.; Bhimji, W.; Bhuyan, B.; Biagini, M. E.; Biasini, M.; van Bibber, K.; Biesiada, J.; Bingham, I.; Bionta, R. M.; Bischofberger, M.; Bitenc, U.; Bizjak, I.; Blanc, F.; Blaylock, G.; Blinov, V. E.; Bloom, E.; Bloom, P. C.; Blount, N. L.; Blouw, J.; Bly, M.; Blyth, S.; Boeheim, C. T.; Bomben, M.; Bondar, A.; Bondioli, M.; Bonneaud, G. R.; Bonvicini, G.; Booke, M.; Booth, J.; Borean, C.; Borgland, A. W.; Borsato, E.; Bosi, F.; Bosisio, L.; Botov, A. A.; Bougher, J.; Bouldin, K.; Bourgeois, P.; Boutigny, D.; Bowerman, D. A.; Boyarski, A. M.; Boyce, R. F.; Boyd, J. T.; Bozek, A.; Bozzi, C.; Bračko, M.; Brandenburg, G.; Brandt, T.; Brau, B.; Brau, J.; Breon, A. B.; Breton, D.; Brew, C.; Briand, H.; Bright-Thomas, P. G.; Brigljević, V.; Britton, D. I.; Brochard, F.; Broomer, B.; Brose, J.; Browder, T. E.; Brown, C. L.; Brown, C. M.; Brown, D. N.; Browne, M.; Bruinsma, M.; Brunet, S.; Bucci, F.; Buchanan, C.; Buchmueller, O. L.; Bünger, C.; Bugg, W.; Bukin, A. D.; Bula, R.; Bulten, H.; Burchat, P. R.; Burgess, W.; Burke, J. P.; Button-Shafer, J.; Buzykaev, A. R.; Buzzo, A.; Cai, Y.; Calabrese, R.; Calcaterra, A.; Calderini, G.; Camanzi, B.; Campagna, E.; Campagnari, C.; Capra, R.; Carassiti, V.; Carpinelli, M.; Carroll, M.; Casarosa, G.; Casey, B. C. K.; Cason, N. M.; Castelli, G.; Cavallo, N.; Cavoto, G.; Cecchi, A.; Cenci, R.; Cerizza, G.; Cervelli, A.; Ceseracciu, A.; Chai, X.; Chaisanguanthum, K. S.; Chang, M. C.; Chang, Y. H.; Chang, Y. W.; Chao, D. S.; Chao, M.; Chao, Y.; Charles, E.; Chavez, C. A.; Cheaib, R.; Chekelian, V.; Chen, A.; Chen, E.; Chen, G. P.; Chen, H. F.; Chen, J.-H.; Chen, J. C.; Chen, K. F.; Chen, P.; Chen, S.; Chen, W. T.; Chen, X.; Chen, X. R.; Chen, Y. Q.; Cheng, B.; Cheon, B. G.; Chevalier, N.; Chia, Y. M.; Chidzik, S.; Chilikin, K.; Chistiakova, M. V.; Cizeron, R.; Cho, I. S.; Cho, K.; Chobanova, V.; Choi, H. H. F.; Choi, K. S.; Choi, S. K.; Choi, Y.; Choi, Y. K.; Christ, S.; Chu, P. 
H.; Chun, S.; Chuvikov, A.; Cibinetto, G.; Cinabro, D.; Clark, A. R.; Clark, P. J.; Clarke, C. K.; Claus, R.; Claxton, B.; Clifton, Z. C.; Cochran, J.; Cohen-Tanugi, J.; Cohn, H.; Colberg, T.; Cole, S.; Colecchia, F.; Condurache, C.; Contri, R.; Convert, P.; Convery, M. R.; Cooke, P.; Copty, N.; Cormack, C. M.; Dal Corso, F.; Corwin, L. A.; Cossutti, F.; Cote, D.; Cotta Ramusino, A.; Cottingham, W. N.; Couderc, F.; Coupal, D. P.; Covarelli, R.; Cowan, G.; Craddock, W. W.; Crane, G.; Crawley, H. B.; Cremaldi, L.; Crescente, A.; Cristinziani, M.; Crnkovic, J.; Crosetti, G.; Cuhadar-Donszelmann, T.; Cunha, A.; Curry, S.; D'Orazio, A.; Dû, S.; Dahlinger, G.; Dahmes, B.; Dallapiccola, C.; Danielson, N.; Danilov, M.; Das, A.; Dash, M.; Dasu, S.; Datta, M.; Daudo, F.; Dauncey, P. D.; David, P.; Davis, C. L.; Day, C. T.; De Mori, F.; De Domenico, G.; De Groot, N.; De la Vaissière, C.; de la Vaissière, Ch.; de Lesquen, A.; De Nardo, G.; de Sangro, R.; De Silva, A.; DeBarger, S.; Decker, F. J.; del Amo Sanchez, P.; Del Buono, L.; Del Gamba, V.; del Re, D.; Della Ricca, G.; Denig, A. G.; Derkach, D.; Derrington, I. M.; DeStaebler, H.; Destree, J.; Devmal, S.; Dey, B.; Di Girolamo, B.; Marco, E. Di; Dickopp, M.; Dima, M. O.; Dittrich, S.; Dittongo, S.; Dixon, P.; Dneprovsky, L.; Dohou, F.; Doi, Y.; Doležal, Z.; Doll, D. A.; Donald, M.; Dong, L.; Dong, L. Y.; Dorfan, J.; Dorigo, A.; Dorsten, M. P.; Dowd, R.; Dowdell, J.; Drásal, Z.; Dragic, J.; Drummond, B. W.; Dubitzky, R. S.; Dubois-Felsmann, G. P.; Dubrovin, M. S.; Duh, Y. C.; Duh, Y. T.; Dujmic, D.; Dungel, W.; Dunwoodie, W.; Dutta, D.; Dvoretskii, A.; Dyce, N.; Ebert, M.; Eckhart, E. A.; Ecklund, S.; Eckmann, R.; Eckstein, P.; Edgar, C. L.; Edwards, A. J.; Egede, U.; Eichenbaum, A. M.; Elmer, P.; Emery, S.; Enari, Y.; Enomoto, R.; Erdos, E.; Erickson, R.; Ernst, J. A.; Erwin, R. J.; Escalier, M.; Eschenburg, V.; Eschrich, I.; Esen, S.; Esteve, L.; Evangelisti, F.; Everton, C. 
W.; Eyges, V.; Fabby, C.; Fabozzi, F.; Fahey, S.; Falbo, M.; Fan, S.; Fang, F.; Fanin, C.; Farbin, A.; Farhat, H.; Fast, J. E.; Feindt, M.; Fella, A.; Feltresi, E.; Ferber, T.; Fernholz, R. E.; Ferrag, S.; Ferrarotto, F.; Ferroni, F.; Field, R. C.; Filippi, A.; Finocchiaro, G.; Fioravanti, E.; Firmino da Costa, J.; Fischer, P.-A.; Fisher, A. S.; Fisher, P. H.; Flacco, C. J.; Flack, R. L.; Flaecher, H. U.; Flanagan, J.; Flanigan, J. M.; Ford, K. E.; Ford, W. T.; Forster, I. J.; Forti, A. C.; Forti, F.; Fortin, D.; Foster, B.; Foulkes, S. D.; Fouque, G.; Fox, J.; Franchini, P.; Franco Sevilla, M.; Franek, B.; Frank, E. D.; Fransham, K. B.; Fratina, S.; Fratini, K.; Frey, A.; Frey, R.; Friedl, M.; Fritsch, M.; Fry, J. R.; Fujii, H.; Fujikawa, M.; Fujita, Y.; Fujiyama, Y.; Fukunaga, C.; Fukushima, M.; Fullwood, J.; Funahashi, Y.; Funakoshi, Y.; Furano, F.; Furman, M.; Furukawa, K.; Futterschneider, H.; Gabathuler, E.; Gabriel, T. A.; Gabyshev, N.; Gaede, F.; Gagliardi, N.; Gaidot, A.; Gaillard, J.-M.; Gaillard, J. R.; Galagedera, S.; Galeazzi, F.; Gallo, F.; Gamba, D.; Gamet, R.; Gan, K. K.; Gandini, P.; Ganguly, S.; Ganzhur, S. F.; Gao, Y. Y.; Gaponenko, I.; Garmash, A.; Garra Tico, J.; Garzia, I.; Gaspero, M.; Gastaldi, F.; Gatto, C.; Gaur, V.; Geddes, N. I.; Geld, T. L.; Genat, J.-F.; George, K. A.; George, M.; George, S.; Georgette, Z.; Gershon, T. J.; Gill, M. S.; Gillard, R.; Gilman, J. D.; Giordano, F.; Giorgi, M. A.; Giraud, P.-F.; Gladney, L.; Glanzman, T.; Glattauer, R.; Go, A.; Goetzen, K.; Goh, Y. M.; Gokhroo, G.; Goldenzweig, P.; Golubev, V. B.; Gopal, G. P.; Gordon, A.; Gorišek, A.; Goriletsky, V. I.; Gorodeisky, R.; Gosset, L.; Gotow, K.; Gowdy, S. J.; Graffin, P.; Grancagnolo, S.; Grauges, E.; Graziani, G.; Green, M. G.; Greene, M. G.; Grenier, G. J.; Grenier, P.; Griessinger, K.; Grillo, A. A.; Grinyov, B. V.; Gritsan, A. V.; Grosdidier, G.; Grosse Perdekamp, M.; Grosso, P.; Grothe, M.; Groysman, Y.; Grünberg, O.; Guido, E.; Guler, H.; Gunawardane, N. 
J. W.; Guo, Q. H.; Guo, R. S.; Guo, Z. J.; Guttman, N.; Ha, H.; Ha, H. C.; Haas, T.; Haba, J.; Hachtel, J.; Hadavand, H. K.; Hadig, T.; Hagner, C.; Haire, M.; Haitani, F.; Haji, T.; Haller, G.; Halyo, V.; Hamano, K.; Hamasaki, H.; Hamel de Monchenault, G.; Hamilton, J.; Hamilton, R.; Hamon, O.; Han, B. Y.; Han, Y. L.; Hanada, H.; Hanagaki, K.; Handa, F.; Hanson, J. E.; Hanushevsky, A.; Hara, K.; Hara, T.; Harada, Y.; Harrison, P. F.; Harrison, T. J.; Harrop, B.; Hart, A. J.; Hart, P. A.; Hartfiel, B. L.; Harton, J. L.; Haruyama, T.; Hasan, A.; Hasegawa, Y.; Hast, C.; Hastings, N. C.; Hasuko, K.; Hauke, A.; Hawkes, C. M.; Hayashi, K.; Hazumi, M.; Hee, C.; Heenan, E. M.; Heffernan, D.; Held, T.; Henderson, R.; Henderson, S. W.; Hertzbach, S. S.; Hervé, S.; Heß, M.; Heusch, C. A.; Hicheur, A.; Higashi, Y.; Higasino, Y.; Higuchi, I.; Hikita, S.; Hill, E. J.; Himel, T.; Hinz, L.; Hirai, T.; Hirano, H.; Hirschauer, J. F.; Hitlin, D. G.; Hitomi, N.; Hodgkinson, M. C.; Höcker, A.; Hoi, C. T.; Hojo, T.; Hokuue, T.; Hollar, J. J.; Hong, T. M.; Honscheid, K.; Hooberman, B.; Hopkins, D. A.; Horii, Y.; Hoshi, Y.; Hoshina, K.; Hou, S.; Hou, W. S.; Hryn'ova, T.; Hsiung, Y. B.; Hsu, C. L.; Hsu, S. C.; Hu, H.; Hu, T.; Huang, H. C.; Huang, T. J.; Huang, Y. C.; Huard, Z.; Huffer, M. E.; Hufnagel, D.; Hung, T.; Hutchcroft, D. E.; Hyun, H. J.; Ichizawa, S.; Igaki, T.; Igarashi, A.; Igarashi, S.; Igarashi, Y.; Igonkina, O.; Ikado, K.; Ikeda, H.; Ikeda, H.; Ikeda, K.; Ilic, J.; Inami, K.; Innes, W. R.; Inoue, Y.; Ishikawa, A.; Ishino, H.; Itagaki, K.; Itami, S.; Itoh, K.; Ivanchenko, V. N.; Iverson, R.; Iwabuchi, M.; Iwai, G.; Iwai, M.; Iwaida, S.; Iwamoto, M.; Iwasaki, H.; Iwasaki, M.; Iwashita, T.; Izen, J. M.; Jackson, D. J.; Jackson, F.; Jackson, G.; Jackson, P. S.; Jacobsen, R. G.; Jacoby, C.; Jaegle, I.; Jain, V.; Jalocha, P.; Jang, H. K.; Jasper, H.; Jawahery, A.; Jayatilleke, S.; Jen, C. M.; Jensen, F.; Jessop, C. P.; Ji, X. B.; John, M. J. J.; Johnson, D. R.; Johnson, J. 
R.; Jolly, S.; Jones, M.; Joo, K. K.; Joshi, N.; Joshi, N. J.; Judd, D.; Julius, T.; Kadel, R. W.; Kadyk, J. A.; Kagan, H.; Kagan, R.; Kah, D. H.; Kaiser, S.; Kaji, H.; Kajiwara, S.; Kakuno, H.; Kameshima, T.; Kaminski, J.; Kamitani, T.; Kaneko, J.; Kang, J. H.; Kang, J. S.; Kani, T.; Kapusta, P.; Karbach, T. M.; Karolak, M.; Karyotakis, Y.; Kasami, K.; Katano, G.; Kataoka, S. U.; Katayama, N.; Kato, E.; Kato, Y.; Kawai, H.; Kawai, M.; Kawamura, N.; Kawasaki, T.; Kay, J.; Kay, M.; Kelly, M. P.; Kelsey, M. H.; Kent, N.; Kerth, L. T.; Khan, A.; Khan, H. R.; Kharakh, D.; Kibayashi, A.; Kichimi, H.; Kiesling, C.; Kikuchi, M.; Kikutani, E.; Kim, B. H.; Kim, C. H.; Kim, D. W.; Kim, H.; Kim, H. J.; Kim, H. O.; Kim, H. W.; Kim, J. B.; Kim, J. H.; Kim, K. T.; Kim, M. J.; Kim, P.; Kim, S. K.; Kim, S. M.; Kim, T. H.; Kim, Y. I.; Kim, Y. J.; King, G. J.; Kinoshita, K.; Kirk, A.; Kirkby, D.; Kitayama, I.; Klemetti, M.; Klose, V.; Klucar, J.; Knecht, N. S.; Knoepfel, K. J.; Knowles, D. J.; Ko, B. R.; Kobayashi, N.; Kobayashi, S.; Kobayashi, T.; Kobel, M. J.; Koblitz, S.; Koch, H.; Kocian, M. L.; Kodyš, P.; Koeneke, K.; Kofler, R.; Koike, S.; Koishi, S.; Koiso, H.; Kolb, J. A.; Kolya, S. D.; Kondo, Y.; Konishi, H.; Koppenburg, P.; Koptchev, V. B.; Kordich, T. M. B.; Korol, A. A.; Korotushenko, K.; Korpar, S.; Kouzes, R. T.; Kovalskyi, D.; Kowalewski, R.; Kozakai, Y.; Kozanecki, W.; Kral, J. F.; Krasnykh, A.; Krause, R.; Kravchenko, E. A.; Krebs, J.; Kreisel, A.; Kreps, M.; Krishnamurthy, M.; Kroeger, R.; Kroeger, W.; Krokovny, P.; Kronenbitter, B.; Kroseberg, J.; Kubo, T.; Kuhr, T.; Kukartsev, G.; Kulasiri, R.; Kulikov, A.; Kumar, R.; Kumar, S.; Kumita, T.; Kuniya, T.; Kunze, M.; Kuo, C. C.; Kuo, T.-L.; Kurashiro, H.; Kurihara, E.; Kurita, N.; Kuroki, Y.; Kurup, A.; Kutter, P. E.; Kuznetsova, N.; Kvasnička, P.; Kyberd, P.; Kyeong, S. H.; Lacker, H. M.; Lae, C. K.; Lamanna, E.; Lamsa, J.; Lanceri, L.; Landi, L.; Lang, M. I.; Lange, D. J.; Lange, J. 
S.; Langenegger, U.; Langer, M.; Lankford, A. J.; Lanni, F.; Laplace, S.; Latour, E.; Lau, Y. P.; Lavin, D. R.; Layter, J.; Lebbolo, H.; LeClerc, C.; Leddig, T.; Leder, G.; Le Diberder, F.; Lee, C. L.; Lee, J.; Lee, J. S.; Lee, M. C.; Lee, M. H.; Lee, M. J.; Lee, S.-J.; Lee, S. E.; Lee, S. H.; Lee, Y. J.; Lees, J. P.; Legendre, M.; Leitgab, M.; Leitner, R.; Leonardi, E.; Leonidopoulos, C.; Lepeltier, V.; Leruste, Ph.; Lesiak, T.; Levi, M. E.; Levy, S. L.; Lewandowski, B.; Lewczuk, M. J.; Lewis, P.; Li, H.; Li, H. B.; Li, S.; Li, X.; Li, Y.; Gioi, L. Li; Libby, J.; Lidbury, J.; Lillard, V.; Lim, C. L.; Limosani, A.; Lin, C. S.; Lin, J. Y.; Lin, S. W.; Lin, Y. S.; Lindquist, B.; Lindsay, C.; Lista, L.; Liu, C.; Liu, F.; Liu, H.; Liu, H. M.; Liu, J.; Liu, R.; Liu, T.; Liu, Y.; Liu, Z. Q.; Liventsev, D.; Lo Vetere, M.; Locke, C. B.; Lockman, W. S.; Di Lodovico, F.; Lombardo, V.; London, G. W.; Lopes Pegna, D.; Lopez, L.; Lopez-March, N.; Lory, J.; LoSecco, J. M.; Lou, X. C.; Louvot, R.; Lu, A.; Lu, C.; Lu, M.; Lu, R. S.; Lueck, T.; Luitz, S.; Lukin, P.; Lund, P.; Luppi, E.; Lutz, A. M.; Lutz, O.; Lynch, G.; Lynch, H. L.; Lyon, A. J.; Lyubinsky, V. R.; MacFarlane, D. B.; Mackay, C.; MacNaughton, J.; Macri, M. M.; Madani, S.; Mader, W. F.; Majewski, S. A.; Majumder, G.; Makida, Y.; Malaescu, B.; Malaguti, R.; Malclés, J.; Mallik, U.; Maly, E.; Mamada, H.; Manabe, A.; Mancinelli, G.; Mandelkern, M.; Mandl, F.; Manfredi, P. F.; Mangeol, D. J. J.; Manoni, E.; Mao, Z. P.; Margoni, M.; Marker, C. E.; Markey, G.; Marks, J.; Marlow, D.; Marques, V.; Marsiske, H.; Martellotti, S.; Martin, E. C.; Martin, J. P.; Martin, L.; Martinez, A. J.; Marzolla, M.; Mass, A.; Masuzawa, M.; Mathieu, A.; Matricon, P.; Matsubara, T.; Matsuda, T.; Matsuda, T.; Matsumoto, H.; Matsumoto, S.; Matsumoto, T.; Matsuo, H.; Mattison, T. S.; Matvienko, D.; Matyja, A.; Mayer, B.; Mazur, M. A.; Mazzoni, M. A.; McCulloch, M.; McDonald, J.; McFall, J. D.; McGrath, P.; McKemey, A. K.; McKenna, J. 
A.; Mclachlin, S. E.; McMahon, S.; McMahon, T. R.; McOnie, S.; Medvedeva, T.; Melen, R.; Mellado, B.; Menges, W.; Menke, S.; Merchant, A. M.; Merkel, J.; Messner, R.; Metcalfe, S.; Metzler, S.; Meyer, N. T.; Meyer, T. I.; Meyer, W. T.; Michael, A. K.; Michelon, G.; Michizono, S.; Micout, P.; Miftakov, V.; Mihalyi, A.; Mikami, Y.; Milanes, D. A.; Milek, M.; Mimashi, T.; Minamora, J. S.; Mindas, C.; Minutoli, S.; Mir, L. M.; Mishra, K.; Mitaroff, W.; Miyake, H.; Miyashita, T. S.; Miyata, H.; Miyazaki, Y.; Moffitt, L. C.; Mohanty, G. B.; Mohapatra, A.; Mohapatra, A. K.; Mohapatra, D.; Moll, A.; Moloney, G. R.; Mols, J. P.; Mommsen, R. K.; Monge, M. R.; Monorchio, D.; Moore, T. B.; Moorhead, G. F.; Mora de Freitas, P.; Morandin, M.; Morgan, N.; Morgan, S. E.; Morganti, M.; Morganti, S.; Mori, S.; Mori, T.; Morii, M.; Morris, J. P.; Morsani, F.; Morton, G. W.; Moss, L. J.; Mouly, J. P.; Mount, R.; Mueller, J.; Müller-Pfefferkorn, R.; Mugge, M.; Muheim, F.; Muir, A.; Mullin, E.; Munerato, M.; Murakami, A.; Murakami, T.; Muramatsu, N.; Musico, P.; Nagai, I.; Nagamine, T.; Nagasaka, Y.; Nagashima, Y.; Nagayama, S.; Nagel, M.; Naisbit, M. T.; Nakadaira, T.; Nakahama, Y.; Nakajima, M.; Nakajima, T.; Nakamura, I.; Nakamura, T.; Nakamura, T. T.; Nakano, E.; Nakayama, H.; Nam, J. W.; Narita, S.; Narsky, I.; Nash, J. A.; Natkaniec, Z.; Nauenberg, U.; Nayak, M.; Neal, H.; Nedelkovska, E.; Negrini, M.; Neichi, K.; Nelson, D.; Nelson, S.; Neri, N.; Nesom, G.; Neubauer, S.; Newman-Coburn, D.; Ng, C.; Nguyen, X.; Nicholson, H.; Niebuhr, C.; Nief, J. Y.; Niiyama, M.; Nikolich, M. B.; Nisar, N. K.; Nishimura, K.; Nishio, Y.; Nitoh, O.; Nogowski, R.; Noguchi, S.; Nomura, T.; Nordby, M.; Nosochkov, Y.; Novokhatski, A.; Nozaki, S.; Nozaki, T.; Nugent, I. M.; O'Grady, C. P.; O'Neale, S. W.; O'Neill, F. G.; Oberhof, B.; Oddone, P. 
J.; Ofte, I.; Ogawa, A.; Ogawa, K.; Ogawa, S.; Ogawa, Y.; Ohkubo, R.; Ohmi, K.; Ohnishi, Y.; Ohno, F.; Ohshima, T.; Ohshima, Y.; Ohuchi, N.; Oide, K.; Oishi, N.; Okabe, T.; Okazaki, N.; Okazaki, T.; Okuno, S.; Olaiya, E. O.; Olivas, A.; Olley, P.; Olsen, J.; Ono, S.; Onorato, G.; Onuchin, A. P.; Onuki, Y.; Ooba, T.; Orimoto, T. J.; Oshima, T.; Osipenkov, I. L.; Ostrowicz, W.; Oswald, C.; Otto, S.; Oyang, J.; Oyanguren, A.; Ozaki, H.; Ozcan, V. E.; Paar, H. P.; Padoan, C.; Paick, K.; Palka, H.; Pan, B.; Pan, Y.; Panduro Vazquez, W.; Panetta, J.; Panova, A. I.; Panvini, R. S.; Panzenböck, E.; Paoloni, E.; Paolucci, P.; Pappagallo, M.; Paramesvaran, S.; Park, C. S.; Park, C. W.; Park, H.; Park, H. K.; Park, K. S.; Park, W.; Parry, R. J.; Parslow, N.; Passaggio, S.; Pastore, F. C.; Patel, P. M.; Patrignani, C.; Patteri, P.; Pavel, T.; Pavlovich, J.; Payne, D. J.; Peak, L. S.; Peimer, D. R.; Pelizaeus, M.; Pellegrini, R.; Pelliccioni, M.; Peng, C. C.; Peng, J. C.; Peng, K. C.; Peng, T.; Penichot, Y.; Pennazzi, S.; Pennington, M. R.; Penny, R. C.; Penzkofer, A.; Perazzo, A.; Perez, A.; Perl, M.; Pernicka, M.; Perroud, J.-P.; Peruzzi, I. M.; Pestotnik, R.; Peters, K.; Peters, M.; Petersen, B. A.; Petersen, T. C.; Petigura, E.; Petrak, S.; Petrella, A.; Petrič, M.; Petzold, A.; Pia, M. G.; Piatenko, T.; Piccolo, D.; Piccolo, M.; Piemontese, L.; Piemontese, M.; Pierini, M.; Pierson, S.; Pioppi, M.; Piredda, G.; Pivk, M.; Plaszczynski, S.; Polci, F.; Pompili, A.; Poropat, P.; Posocco, M.; Potter, C. T.; Potter, R. J. L.; Prasad, V.; Prebys, E.; Prencipe, E.; Prendki, J.; Prepost, R.; Prest, M.; Prim, M.; Pripstein, M.; Prudent, X.; Pruvot, S.; Puccio, E. M. T.; Purohit, M. V.; Qi, N. D.; Quinn, H.; Raaf, J.; Rabberman, R.; Raffaelli, F.; Ragghianti, G.; Rahatlou, S.; Rahimi, A. M.; Rahmat, R.; Rakitin, A. Y.; Randle-Conde, A.; Rankin, P.; Rashevskaya, I.; Ratkovsky, S.; Raven, G.; Re, V.; Reep, M.; Regensburger, J. 
J.; Reidy, J.; Reif, R.; Reisert, B.; Renard, C.; Renga, F.; Ricciardi, S.; Richman, J. D.; Ritchie, J. L.; Ritter, M.; Rivetta, C.; Rizzo, G.; Roat, C.; Robbe, P.; Roberts, D. A.; Robertson, A. I.; Robutti, E.; Rodier, S.; Rodriguez, D. M.; Rodriguez, J. L.; Rodriguez, R.; Roe, N. A.; Röhrken, M.; Roethel, W.; Rolquin, J.; Romanov, L.; Romosan, A.; Ronan, M. T.; Rong, G.; Ronga, F. J.; Roos, L.; Root, N.; Rosen, M.; Rosenberg, E. I.; Rossi, A.; Rostomyan, A.; Rotondo, M.; Roussot, E.; Roy, J.; Rozanska, M.; Rozen, Y.; Rozen, Y.; Rubin, A. E.; Ruddick, W. O.; Ruland, A. M.; Rybicki, K.; Ryd, A.; Ryu, S.; Ryuko, J.; Sabik, S.; Sacco, R.; Saeed, M. A.; Safai Tehrani, F.; Sagawa, H.; Sahoo, H.; Sahu, S.; Saigo, M.; Saito, T.; Saitoh, S.; Sakai, K.; Sakamoto, H.; Sakaue, H.; Saleem, M.; Salnikov, A. A.; Salvati, E.; Salvatore, F.; Samuel, A.; Sanders, D. A.; Sanders, P.; Sandilya, S.; Sandrelli, F.; Sands, W.; Sands, W. R.; Sanpei, M.; Santel, D.; Santelj, L.; Santoro, V.; Santroni, A.; Sanuki, T.; Sarangi, T. R.; Saremi, S.; Sarti, A.; Sasaki, T.; Sasao, N.; Satapathy, M.; Sato, Nobuhiko; Sato, Noriaki; Sato, Y.; Satoyama, N.; Satpathy, A.; Savinov, V.; Savvas, N.; Saxton, O. H.; Sayeed, K.; Schaffner, S. F.; Schalk, T.; Schenk, S.; Schieck, J. R.; Schietinger, T.; Schilling, C. J.; Schindler, R. H.; Schmid, S.; Schmitz, R. E.; Schmuecker, H.; Schneider, O.; Schnell, G.; Schönmeier, P.; Schofield, K. C.; Schott, G.; Schröder, H.; Schram, M.; Schubert, J.; Schümann, J.; Schultz, J.; Schumm, B. A.; Schune, M. H.; Schwanke, U.; Schwarz, H.; Schwiening, J.; Schwierz, R.; Schwitters, R. F.; Sciacca, C.; Sciolla, G.; Scott, I. J.; Seeman, J.; Seiden, A.; Seitz, R.; Seki, T.; Sekiya, A. I.; Semenov, S.; Semmler, D.; Sen, S.; Senyo, K.; Seon, O.; Serbo, V. V.; Serednyakov, S. I.; Serfass, B.; Serra, M.; Serrano, J.; Settai, Y.; Seuster, R.; Sevior, M. E.; Shakhova, K. V.; Shang, L.; Shapkin, M.; Sharma, V.; Shebalin, V.; Shelkov, V. G.; Shen, B. C.; Shen, D. Z.; Shen, Y. 
T.; Sherwood, D. J.; Shibata, T.; Shibata, T. A.; Shibuya, H.; Shidara, T.; Shimada, K.; Shimoyama, M.; Shinomiya, S.; Shiu, J. G.; Shorthouse, H. W.; Shpilinskaya, L. I.; Sibidanov, A.; Sicard, E.; Sidorov, A.; Sidorov, V.; Siegle, V.; Sigamani, M.; Simani, M. C.; Simard, M.; Simi, G.; Simon, F.; Simonetto, F.; Sinev, N. B.; Singh, H.; Singh, J. B.; Sinha, R.; Sitt, S.; Skovpen, Yu. I.; Sloane, R. J.; Smerkol, P.; Smith, A. J. S.; Smith, D.; Smith, D. S.; Smith, J. G.; Smol, A.; Snoek, H. L.; Snyder, A.; So, R. Y.; Sobie, R. J.; Soderstrom, E.; Soha, A.; Sohn, Y. S.; Sokoloff, M. D.; Sokolov, A.; Solagna, P.; Solovieva, E.; Soni, N.; Sonnek, P.; Sordini, V.; Spaan, B.; Spanier, S. M.; Spencer, E.; Speziali, V.; Spitznagel, M.; Spradlin, P.; Staengle, H.; Stamen, R.; Stanek, M.; Stanič, S.; Stark, J.; Steder, M.; Steininger, H.; Steinke, M.; Stelzer, J.; Stevanato, E.; Stocchi, A.; Stock, R.; Stoeck, H.; Stoker, D. P.; Stroili, R.; Strom, D.; Strother, P.; Strube, J.; Stugu, B.; Stypula, J.; Su, D.; Suda, R.; Sugahara, R.; Sugi, A.; Sugimura, T.; Sugiyama, A.; Suitoh, S.; Sullivan, M. K.; Sumihama, M.; Sumiyoshi, T.; Summers, D. J.; Sun, L.; Sun, S.; Sundermann, J. E.; Sung, H. F.; Susaki, Y.; Sutcliffe, P.; Suzuki, A.; Suzuki, J.; Suzuki, J. I.; Suzuki, K.; Suzuki, S.; Suzuki, S. Y.; Swain, J. E.; Swain, S. K.; T'Jampens, S.; Tabata, M.; Tackmann, K.; Tajima, H.; Tajima, O.; Takahashi, K.; Takahashi, S.; Takahashi, T.; Takasaki, F.; Takayama, T.; Takita, M.; Tamai, K.; Tamponi, U.; Tamura, N.; Tan, N.; Tan, P.; Tanabe, K.; Tanabe, T.; Tanaka, H. A.; Tanaka, J.; Tanaka, M.; Tanaka, S.; Tanaka, Y.; Tanida, K.; Taniguchi, N.; Taras, P.; Tasneem, N.; Tatishvili, G.; Tatomi, T.; Tawada, M.; Taylor, F.; Taylor, G. N.; Taylor, G. P.; Telnov, V. I.; Teodorescu, L.; Ter-Antonyan, R.; Teramoto, Y.; Teytelman, D.; Thérin, G.; Thiebaux, Ch.; Thiessen, D.; Thomas, E. W.; Thompson, J. M.; Thorne, F.; Tian, X. C.; Tibbetts, M.; Tikhomirov, I.; Tinslay, J. 
S.; Tiozzo, G.; Tisserand, V.; Tocut, V.; Toki, W. H.; Tomassini, E. W.; Tomoto, M.; Tomura, T.; Torassa, E.; Torrence, E.; Tosi, S.; Touramanis, C.; Toussaint, J. C.; Tovey, S. N.; Trapani, P. P.; Treadwell, E.; Triggiani, G.; Trincaz-Duvoid, S.; Trischuk, W.; Troost, D.; Trunov, A.; Tsai, K. L.; Tsai, Y. T.; Tsujita, Y.; Tsukada, K.; Tsukamoto, T.; Tuggle, J. M.; Tumanov, A.; Tung, Y. W.; Turnbull, L.; Turner, J.; Turri, M.; Uchida, K.; Uchida, M.; Uchida, Y.; Ueki, M.; Ueno, K.; Ujiie, N.; Ulmer, K. A.; Unno, Y.; Urquijo, P.; Ushiroda, Y.; Usov, Y.; Usseglio, M.; Usuki, Y.; Uwer, U.; Va'vra, J.; Vahsen, S. E.; Vaitsas, G.; Valassi, A.; Vallazza, E.; Vallereau, A.; Vanhoefer, P.; van Hoek, W. C.; Van Hulse, C.; van Winkle, D.; Varner, G.; Varnes, E. W.; Varvell, K. E.; Vasileiadis, G.; Velikzhanin, Y. S.; Verderi, M.; Versillé, S.; Vervink, K.; Viaud, B.; Vidal, P. B.; Villa, S.; Villanueva-Perez, P.; Vinograd, E. L.; Vitale, L.; Vitug, G. M.; Voß, C.; Voci, C.; Voena, C.; Volk, A.; von Wimmersperg-Toeller, J. H.; Vorobyev, V.; Vossen, A.; Vuagnin, G.; Vuosalo, C. O.; Wacker, K.; Wagner, A. P.; Wagner, D. L.; Wagner, G.; Wagner, M. N.; Wagner, S. R.; Wagoner, D. E.; Walker, D.; Walkowiak, W.; Wallom, D.; Wang, C. C.; Wang, C. H.; Wang, J.; Wang, J. G.; Wang, K.; Wang, L.; Wang, L. L.; Wang, P.; Wang, T. J.; Wang, W. F.; Wang, X. L.; Wang, Y. F.; Wappler, F. R.; Watanabe, M.; Watson, A. T.; Watson, J. E.; Watson, N. K.; Watt, M.; Weatherall, J. H.; Weaver, M.; Weber, T.; Wedd, R.; Wei, J. T.; Weidemann, A. W.; Weinstein, A. J. R.; Wenzel, W. A.; West, C. A.; West, C. G.; West, T. J.; White, E.; White, R. M.; Wicht, J.; Widhalm, L.; Wiechczynski, J.; Wienands, U.; Wilden, L.; Wilder, M.; Williams, D. C.; Williams, G.; Williams, J. C.; Williams, K. M.; Williams, M. I.; Willocq, S. Y.; Wilson, J. R.; Wilson, M. G.; Wilson, R. J.; Winklmeier, F.; Winstrom, L. O.; Winter, M. A.; Wisniewski, W. 
J.; Wittgen, M.; Wittlin, J.; Wittmer, W.; Wixted, R.; Woch, A.; Wogsland, B. J.; Won, E.; Wong, Q. K.; Wray, B. C.; Wren, A. C.; Wright, D. M.; Wu, C. H.; Wu, J.; Wu, S. L.; Wulsin, H. W.; Xella, S. M.; Xie, Q. L.; Xie, Y.; Xu, Z. Z.; Yéche, Ch.; Yamada, Y.; Yamaga, M.; Yamaguchi, A.; Yamaguchi, H.; Yamaki, T.; Yamamoto, H.; Yamamoto, N.; Yamamoto, R. K.; Yamamoto, S.; Yamanaka, T.; Yamaoka, H.; Yamaoka, J.; Yamaoka, Y.; Yamashita, Y.; Yamauchi, M.; Yan, D. S.; Yan, Y.; Yanai, H.; Yanaka, S.; Yang, H.; Yang, R.; Yang, S.; Yarritu, A. K.; Yashchenko, S.; Yashima, J.; Yasin, Z.; Yasu, Y.; Ye, S. W.; Yeh, P.; Yi, J. I.; Yi, K.; Yi, M.; Yin, Z. W.; Ying, J.; Yocky, G.; Yokoyama, K.; Yokoyama, M.; Yokoyama, T.; Yoshida, K.; Yoshida, M.; Yoshimura, Y.; Young, C. C.; Yu, C. X.; Yu, Z.; Yuan, C. Z.; Yuan, Y.; Yumiceva, F. X.; Yusa, Y.; Yushkov, A. N.; Yuta, H.; Zacek, V.; Zain, S. B.; Zallo, A.; Zambito, S.; Zander, D.; Zang, S. L.; Zanin, D.; Zaslavsky, B. G.; Zeng, Q. L.; Zghiche, A.; Zhang, B.; Zhang, J.; Zhang, J.; Zhang, L.; Zhang, L. M.; Zhang, S. Q.; Zhang, Z. P.; Zhao, H. W.; Zhao, M.; Zhao, Z. G.; Zheng, Y.; Zheng, Y. H.; Zheng, Z. P.; Zhilich, V.; Zhou, P.; Zhu, R. Y.; Zhu, Y. S.; Zhu, Z. M.; Zhulanov, V.; Ziegler, T.; Ziegler, V.; Zioulas, G.; Zisman, M.; Zito, M.; Zürcher, D.; Zwahlen, N.; Zyukova, O.; Živko, T.; Žontar, D.
2014-11-01
This work is on the Physics of the B Factories. Part A of this book contains a brief description of the SLAC and KEK B Factories as well as their detectors, BaBar and Belle, and data taking related issues. Part B discusses tools and methods used by the experiments in order to obtain results. The results themselves can be found in Part C. Please note that version 3 on the archive is the auxiliary version of the Physics of the B Factories book. This uses the notation alpha, beta, gamma for the angles of the Unitarity Triangle. The nominal version uses the notation phi_1, phi_2 and phi_3. Please cite this work as Eur. Phys. J. C74 (2014) 3026.
Lindsey, David A.; Tysdal, Russell G.; Taggart, Joseph E.
2002-01-01
The principal purpose of this report is to provide a reference archive for results of a statistical analysis of geochemical data for metasedimentary rocks of Mesoproterozoic age of the Salmon River Mountains and Lemhi Range, central Idaho. Descriptions of geochemical data sets, statistical methods, rationale for interpretations, and references to the literature are provided. Three methods of analysis are used: R-mode factor analysis of major oxide and trace element data for identifying petrochemical processes, analysis of variance for effects of rock type and stratigraphic position on chemical composition, and major-oxide ratio plots for comparison with the chemical composition of common clastic sedimentary rocks.
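The R-mode factor analysis mentioned above is commonly computed from an eigen-decomposition of the inter-element correlation matrix, with the leading factors interpreted as petrochemical processes. The sketch below runs that first step on synthetic data; it is not the report's actual procedure, and the usual varimax rotation is omitted.

```python
import numpy as np

def r_mode_loadings(data, n_factors=2):
    """Unrotated R-mode factor loadings via eigen-decomposition of the
    correlation matrix of the variables (columns)."""
    z = (data - data.mean(0)) / data.std(0)      # standardize each variable
    corr = np.corrcoef(z, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]   # largest eigenvalues first
    return vecs[:, order] * np.sqrt(vals[order])

rng = np.random.default_rng(0)
# Synthetic "major oxide" table: two underlying processes plus noise
f = rng.normal(size=(200, 2))
data = f @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))
loadings = r_mode_loadings(data)                 # one row per oxide, one column per factor
```

Variables that load strongly on the same factor co-vary across samples, which is the basis for reading factors as shared geochemical processes.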
Boano, Rosa; Grilletto, Renato; Rabino Massa, Emma
2013-01-01
The creation of large scientific collections has been an important development for anthropological and paleopathological research. Indeed, biological collections are irreplaceable reference systems for the biological reconstruction of past populations. They also assume the important role of anthropological archives and, in the global description of man, permit the integration of historical data with those from bio-anthropological research. Given the role of mummies and bones as scientific resources, best practices for the preservation of ancient specimens should be a high priority for institutions and researchers. By way of example, the authors describe their experience with ancient human remains preserved in the Museum of Anthropology and Ethnography at the University of Turin.
Pliocene Model Intercomparison (PlioMIP) Phase 2: Scientific Objectives and Experimental Design
NASA Technical Reports Server (NTRS)
Haywood, A. M.; Dowsett, H. J.; Dolan, A. M.; Rowley, D.; Abe-Ouchi, A.; Otto-Bliesner, B.; Chandler, M. A.; Hunter, S. J.; Lunt, D. J.; Pound, M.;
2015-01-01
The Pliocene Model Intercomparison Project (PlioMIP) is a co-ordinated international climate modelling initiative to study and understand climate and environments of the Late Pliocene, and their potential relevance in the context of future climate change. PlioMIP operates under the umbrella of the Palaeoclimate Modelling Intercomparison Project (PMIP), which examines multiple intervals in Earth history, the consistency of model predictions in simulating these intervals, and their ability to reproduce climate signals preserved in geological climate archives. This paper provides a thorough description of the model intercomparison project and documents in detail the experimental design and boundary conditions that will be utilized for the experiments in Phase 2 of PlioMIP.
Indirect Solar Wind Measurements Using Archival Cometary Tail Observations
NASA Astrophysics Data System (ADS)
Zolotova, Nadezhda; Sizonenko, Yuriy; Vokhmyanin, Mikhail; Veselovsky, Igor
2018-05-01
This paper addresses the problem of the solar wind behaviour during the Maunder minimum. Records of the plasma tails of comets can shed light on the physical parameters of the solar wind in the past. We analyse descriptions and drawings of comets between the eleventh and eighteenth centuries. To distinguish between dust and plasma tails, we address their colour, shape, and orientation. Based on the calculations made by F.A. Bredikhin, we found that cometary tails deviate from the antisolar direction on average by more than 10°, which is typical for dust tails. We also examined the catalogues of Hevelius and Lubieniecki. The first indication of a plasma tail was revealed only for the great comet C/1769 P1.
Design and implementation of scalable tape archiver
NASA Technical Reports Server (NTRS)
Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio
1996-01-01
In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high-performance RISC microprocessor-based workstations, which are in turn being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large-scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored large archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration.
Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs while the robotics are idle. Both migration algorithms are based on the access frequency and space utilization of each element archiver. By normalizing these parameters according to the number of drives in each element archiver, it is possible to maintain high performance even if some tape drives fail. We found that foreground migration is efficient at reducing access response time. Besides foreground migration, background migration makes it possible to track transitions in spatial access locality quickly.
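The load-balancing idea behind background migration can be sketched as follows. The load metric (accesses normalized by live drives) and the choice of which cassette to move are illustrative assumptions for this sketch, not the exact formulas used in the paper.

```python
# Sketch of hot-cassette balancing across element archivers.
# Normalizing access counts by live drives also makes an archiver
# with failed drives look overloaded, so its hot tapes migrate away,
# mirroring the failure-handling behaviour described in the abstract.

def archiver_load(archiver):
    """Access frequency normalized by the number of live drives."""
    drives = archiver["live_drives"]
    return archiver["accesses"] / drives if drives else float("inf")

def pick_background_migration(archivers):
    """While the robotics are idle, move one hot cassette from the
    most loaded element archiver to the least loaded one."""
    busiest = max(archivers, key=archiver_load)
    idlest = min(archivers, key=archiver_load)
    if busiest is idlest or not busiest["hot_tapes"]:
        return None
    return (busiest["name"], idlest["name"], busiest["hot_tapes"][0])

archivers = [
    {"name": "A", "accesses": 90, "live_drives": 2, "hot_tapes": ["t1", "t2"]},
    {"name": "B", "accesses": 10, "live_drives": 2, "hot_tapes": []},
]
print(pick_background_migration(archivers))  # ('A', 'B', 't1')
```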
Lessons learned from planetary science archiving
NASA Astrophysics Data System (ADS)
Zender, J.; Grayzeck, E.
2006-01-01
The need for scientific archiving of past, current, and future planetary scientific missions, laboratory data, and modeling efforts is indisputable. To quote the words of George Santayana carved over the entrance of the US National Archives in Washington, DC: “Those who cannot remember the past are condemned to repeat it.” The design, implementation, maintenance, and validation of planetary science archives are however disputed by the involved parties. The inclusion of the archives into the scientific heritage is problematic. For example, there is the imbalance between space agency requirements and institutional and national interests. The disparity between long-term archive requirements and immediate data analysis requests is significant. The discrepancy between a space mission's archive budget and the effort required to design and build the data archive is large. An imbalance exists between new instrument development and existing, well-proven archive standards. The authors present their view on the problems and risk areas in the archiving concepts based on their experience acquired within NASA's Planetary Data System (PDS) and ESA's Planetary Science Archive (PSA). Individual risks and potential problem areas are discussed based on a model derived from a system analysis done up front. The major risk for a planetary mission science archive is seen in the combination of minimal involvement by Mission Scientists and inadequate funding. The authors outline how the risks can be reduced. The paper ends with the authors' view on future planetary archive implementations, including the archive interoperability aspect.
(Per)Forming Archival Research Methodologies
ERIC Educational Resources Information Center
Gaillet, Lynee Lewis
2012-01-01
This article raises multiple issues associated with archival research methodologies and methods. Based on a survey of recent scholarship and interviews with experienced archival researchers, this overview of the current status of archival research both complicates traditional conceptions of archival investigation and encourages scholars to adopt…
76 FR 15349 - Advisory Committee on the Electronic Records Archives (ACERA); Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-21
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA); Meeting AGENCY: National Archives and Records Administration. ACTION: Notice of Meeting. SUMMARY... Archives and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...
Web Archiving for the Rest of Us: How to Collect and Manage Websites Using Free and Easy Software
ERIC Educational Resources Information Center
Dunn, Katharine; Szydlowski, Nick
2009-01-01
Large-scale projects such as the Internet Archive (www.archive.org) send out crawlers to gather snapshots of much of the web. This massive collection of archived websites may include content of interest to one's patrons. But if librarians want to control exactly when and what is archived, relying on someone else to do the archiving is not ideal.…
Spatial Language and the Embedded Listener Model in Parents’ Input to Children
Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara
2015-01-01
Language is a collaborative act: in order to communicate successfully, speakers must generate utterances that are not only semantically valid, but also sensitive to the knowledge state of the listener. Such sensitivity could reflect use of an “embedded listener model,” where speakers choose utterances on the basis of an internal model of the listeners’ conceptual and linguistic knowledge. In this paper, we ask whether parents’ spatial descriptions incorporate an embedded listener model that reflects their children’s understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3–4 year-old children encoded relationships in ways that were well-matched to their children’s level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). By contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children’s level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language, and indicate conditions under which the ability to model listener knowledge may be more challenging. PMID:26717804
Spatial Language and the Embedded Listener Model in Parents' Input to Children.
Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara
2016-11-01
Language is a collaborative act: To communicate successfully, speakers must generate utterances that are not only semantically valid but also sensitive to the knowledge state of the listener. Such sensitivity could reflect the use of an "embedded listener model," where speakers choose utterances on the basis of an internal model of the listener's conceptual and linguistic knowledge. In this study, we ask whether parents' spatial descriptions incorporate an embedded listener model that reflects their children's understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3-4-year-old children encoded relationships in ways that were well-matched to their children's level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). In contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children's level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language and indicate conditions under which the ability to model listener knowledge may be more challenging. Copyright © 2015 Cognitive Science Society, Inc.
76 FR 19147 - Advisory Committee on the Electronic Records Archives (ACERA)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... Electronic Records Archives (ACERA). The meeting has been consolidated into one day. This meeting will be... number of individuals planning to attend must be submitted to the Electronic Records Archives Program at...
Evaluation of Deep Learning Representations of Spatial Storm Data
NASA Astrophysics Data System (ADS)
Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.
2017-12-01
The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. 
Further evaluations are being performed to determine how the choice of input variables affects the results.
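The motivation stated in the abstract above, that area statistics fail to capture structural information, can be illustrated with a toy example: two patches with identical area means but different spatial structure, which is exactly what structure-aware encoders (PCA components, CNN feature maps, GAN encodings) are meant to distinguish. The patch values are invented for illustration.

```python
# Two toy "storm patches" with identical area statistics but
# different spatial structure.

patch_a = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]          # single intense core
patch_b = [[1, 1, 1],
           [1, 1, 1],
           [1, 1, 1]]          # uniform weak field

def area_mean(patch):
    cells = [v for row in patch for v in row]
    return sum(cells) / len(cells)

# Area means cannot tell the patches apart...
print(area_mean(patch_a), area_mean(patch_b))  # 1.0 1.0
# ...but the full spatial fields (the input a PCA/CNN/GAN encoder
# would see) clearly differ.
print(patch_a != patch_b)  # True
```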
Ethics and Truth in Archival Research
ERIC Educational Resources Information Center
Tesar, Marek
2015-01-01
The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…
A Vision of Archival Education at the Millennium.
ERIC Educational Resources Information Center
Tibbo, Helen R.
1997-01-01
Issues critical to the development of an archival education degree program are discussed including number of credit hours and courses. Archival educators continue to revise the Society of American Archivists (SAA) Master's of Archival Studies (M.A.S.) guidelines as higher education and the world changes. Archival educators must cooperate with…
Examining Activism in Practice: A Qualitative Study of Archival Activism
ERIC Educational Resources Information Center
Novak, Joy Rainbow
2013-01-01
While archival literature has increasingly discussed activism in the context of archives, there has been little examination of the extent to which archivists in the field have accepted or incorporated archival activism into practice. Scholarship that has explored the practical application of archival activism has predominately focused on case…
Enhancement of real-time EPICS IOC PV management for the data archiving system
NASA Astrophysics Data System (ADS)
Kim, Jae-Ha
2015-10-01
For the operation of a 100-MeV linear proton accelerator, the major driving values and experimental data need to be archived. Depending on the experimental conditions, different data are required, so functions that can add new data and delete data in real time need to be implemented. In an Experimental Physics and Industrial Control System (EPICS) input/output controller (IOC), the values of process variables (PVs) are matched with the driving values and data. The PV values are archived in text-file format by using the Channel Archiver; there is no need to create a database (DB) server, just a large hard disk. Through the web, the archived data can be loaded, and new PV values can be archived without stopping the archive engine. The details of the implementation of a data archiving system with the Channel Archiver are presented, and some preliminary results are reported.
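A text-file PV archive like the one described above can be consumed with a few lines of code. The "timestamp pv_name value" line layout and the PV names below are assumptions made for this sketch; the real Channel Archiver has its own export formats.

```python
# Minimal sketch of reading PV samples from a text-file archive.
# The line format and PV names are illustrative assumptions.

from io import StringIO

archive = StringIO(
    "2015-10-01T12:00:00 beam:current 20.1\n"
    "2015-10-01T12:00:01 beam:current 20.3\n"
    "2015-10-01T12:00:01 rf:power 310.0\n"
)

def samples_for(pv, stream):
    """Collect (timestamp, value) pairs for one process variable."""
    out = []
    for line in stream:
        ts, name, value = line.split()
        if name == pv:
            out.append((ts, float(value)))
    return out

print(samples_for("beam:current", archive))
```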
Coronary angiogram video compression for remote browsing and archiving applications.
Ouled Zaid, Azza; Fradj, Bilel Ben
2010-12-01
In this paper, we propose an H.264/AVC-based compression technique adapted to coronary angiograms. The H.264/AVC coder has proven to use the most advanced and accurate motion compensation process, but at the cost of high computational complexity. On the other hand, analysis of coronary X-ray images reveals large areas containing no diagnostically important information. Our contribution is to exploit the energy characteristics of equal-size slice regions to determine the regions with relevant information content, which are encoded using the H.264 coding paradigm. The other regions are compressed using fixed-block motion compensation and conventional hard-decision quantization. Experiments have shown that, at the same bitrate, this procedure reduces the H.264 coder's computing time by about 25% while attaining the same visual quality. A subjective assessment based on the consensus approach leads to a compression ratio of 30:1, which ensures both diagnostic adequacy and sufficient compression with regard to storage and transmission requirements. Copyright © 2010 Elsevier Ltd. All rights reserved.
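The region-selection step described above can be sketched as follows. The energy measure (sum of squared intensities) and the threshold are illustrative assumptions standing in for the paper's actual criterion.

```python
# Sketch: split a frame into equal-size slices, compute an energy
# measure per slice, and flag high-energy slices for full
# H.264-style coding; low-energy slices get the cheaper fixed-block
# path. Energy measure and threshold are illustrative assumptions.

def slice_energy(region):
    return sum(p * p for row in region for p in row)

def classify_slices(frame, rows_per_slice, threshold):
    modes = []
    for i in range(0, len(frame), rows_per_slice):
        region = frame[i:i + rows_per_slice]
        modes.append("h264" if slice_energy(region) > threshold else "fixed")
    return modes

frame = [[0, 0], [0, 0],      # low-energy background
         [8, 9], [7, 8]]      # diagnostically relevant area
print(classify_slices(frame, 2, threshold=50))  # ['fixed', 'h264']
```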
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
Reconstructing genome evolution in historic samples of the Irish potato famine pathogen
Martin, Michael D.; Cappellini, Enrico; Samaniego, Jose A.; Zepeda, M. Lisandra; Campos, Paula F.; Seguin-Orlando, Andaine; Wales, Nathan; Orlando, Ludovic; Ho, Simon Y. W.; Dietrich, Fred S.; Mieczkowski, Piotr A.; Heitman, Joseph; Willerslev, Eske; Krogh, Anders; Ristaino, Jean B.; Gilbert, M. Thomas P.
2013-01-01
Responsible for the Irish potato famine of 1845–49, the oomycete pathogen Phytophthora infestans caused persistent, devastating outbreaks of potato late blight across Europe in the 19th century. Despite continued interest in the history and spread of the pathogen, the genome of the famine-era strain remains entirely unknown. Here we characterize temporal genomic changes in introduced P. infestans. We shotgun sequence five 19th-century European strains from archival herbarium samples—including the oldest known European specimen, collected in 1845 from the first reported source of introduction. We then compare their genomes to those of extant isolates. We report multiple distinct genotypes in historical Europe and a suite of infection-related genes different from modern strains. At virulence-related loci, several now-ubiquitous genotypes were absent from the historical gene pool. At least one of these genotypes encodes a virulent phenotype in modern strains, which helps explain the 20th century’s episodic replacements of European P. infestans lineages. PMID:23863894
VLBA Archive & Distribution Architecture
NASA Astrophysics Data System (ADS)
Wells, D. C.
1994-01-01
Signals from the 10 antennas of NRAO's VLBA [Very Long Baseline Array] are processed by a Correlator. The complex fringe visibilities produced by the Correlator are archived on magnetic cartridges using a low-cost architecture which is capable of scaling and evolving. Archive files are copied to magnetic media to be distributed to users in FITS format, using the BINTABLE extension. Archive files are labelled using SQL INSERT statements, in order to bind the DBMS-based archive catalog to the archive media.
NASA Astrophysics Data System (ADS)
van Leeuwen, F.; de Bruijne, J. H. J.; Arenou, F.; Bakker, J.; Blomme, R.; Busso, G.; Cacciari, C.; Castañeda, J.; Cellino, A.; Clotet, M.; Comoretto, G.; Eyer, L.; González-Núñez, J.; Guy, L.; Hambly, N.; Hobbs, D.; van Leeuwen, M.; Luri, X.; Manteiga, M.; Pourbaix, D.; Roegiers, T.; Salgado, J.; Sartoretti, P.; Tanga, P.; Ulla, A.; Utrilla Molina, E.; Abreu, A.; Altmann, M.; Andrae, R.; Antoja, T.; Audard, M.; Babusiaux, C.; Bailer-Jones, C. A. L.; Barache, C.; Bastian, U.; Beck, M.; Berthier, J.; Bianchi, L.; Biermann, M.; Bombrun, A.; Bossini, D.; Breddels, M.; Brown, A. G. A.; Busonero, D.; Butkevich, A.; Cantat-Gaudin, T.; Carrasco, J. M.; Cheek, N.; Clementini, G.; Creevey, O.; Crowley, C.; David, M.; Davidson, M.; De Angeli, F.; De Ridder, J.; Delbò, M.; Dell'Oro, A.; Diakité, S.; Distefano, E.; Drimmel, R.; Durán, J.; Evans, D. W.; Fabricius, C.; Fabrizio, M.; Fernández-Hernández, J.; Findeisen, K.; Fleitas, J.; Fouesneau, M.; Galluccio, L.; Gracia-Abril, G.; Guerra, R.; Gutiérrez-Sánchez, R.; Helmi, A.; Hernandez, J.; Holl, B.; Hutton, A.; Jean-Antoine-Piccolo, A.; Jevardat de Fombelle, G.; Joliet, E.; Jordi, C.; Juhász, Á.; Klioner, S.; Löffler, W.; Lammers, U.; Lanzafame, A.; Lebzelter, T.; Leclerc, N.; Lecoeur-Taïbi, I.; Lindegren, L.; Marinoni, S.; Marrese, P. M.; Mary, N.; Massari, D.; Messineo, R.; Michalik, D.; Mignard, F.; Molinaro, R.; Molnár, L.; Montegriffo, P.; Mora, A.; Mowlavi, N.; Muinonen, K.; Muraveva, T.; Nienartowicz, K.; Ordenovic, C.; Pancino, E.; Panem, C.; Pauwels, T.; Petit, J.; Plachy, E.; Portell, J.; Racero, E.; Regibo, S.; Reylé, C.; Rimoldini, L.; Ripepi, V.; Riva, A.; Robichon, N.; Robin, A.; Roelens, M.; Romero-Gómez, M.; Sarro, L.; Seabroke, G.; Segovia, J. C.; Siddiqui, H.; Smart, R.; Smith, K.; Sordo, R.; Soria, S.; Spoto, F.; Stephenson, C.; Turon, C.; Vallenari, A.; Veljanoski, J.; Voutsinas, S.
2018-04-01
The second Gaia data release, Gaia DR2, encompasses astrometry, photometry, radial velocities, astrophysical parameters (stellar effective temperature, extinction, reddening, radius, and luminosity), and variability information plus astrometry and photometry for a sample of pre-selected bodies in the solar system. The data collected during the first 22 months of the nominal, five-year mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC), resulting in this second data release. A summary of the release properties is provided in Gaia Collaboration et al. (2018b). The overall scientific validation of the data is described in Arenou et al. (2018). Background information on the mission and the spacecraft can be found in Gaia Collaboration et al. (2016), with a more detailed presentation of the Radial Velocity Spectrometer (RVS) in Cropper et al. (2018). In addition, Gaia DR2 is accompanied by various dedicated papers that describe the processing and validation of the various data products. Four more Gaia Collaboration papers present a glimpse of the scientific richness of the data. In addition to this set of refereed publications, this documentation provides a detailed, complete overview of the processing and validation of the Gaia DR2 data. Gaia data, from both Gaia DR1 and Gaia DR2, can be retrieved from the Gaia archive, which is accessible from https://archives.esac.esa.int/gaia. The archive also provides various tutorials on data access and data queries plus an integrated data model (i.e., a description of the various fields in the data tables). In addition, Luri et al. (2018) provide concrete advice on how to deal with Gaia astrometry, with recommendations on how best to estimate distances from parallaxes. The Gaia archive features an enhanced visualisation service which can be used for quick initial explorations of the entire Gaia DR2 data set.
Pre-computed cross matches between Gaia DR2 and a selected set of large surveys are provided. Gaia DR2 represents a major advance with respect to Gaia DR1 in terms of survey completeness, precision and accuracy, and the richness of the published data. Nevertheless, Gaia DR2 is still an early release based on a limited amount of input data, simplifications in the data processing, and imperfect calibrations. Many limitations hence exist which the user of Gaia DR2 should be aware of; they are described in Gaia Collaboration et al. (2018b).
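The parallax-to-distance question raised in the abstract above has a textbook form worth spelling out: for Gaia's units, distance in parsecs is 1000 divided by the parallax in milliarcseconds. Luri et al. (2018) caution that this naive inversion is biased for low-precision or non-positive parallaxes; the sketch below shows only the naive relation.

```python
# Naive parallax-to-distance conversion in Gaia's units
# (milliarcseconds -> parsecs): d[pc] = 1000 / plx[mas].
# This simple inversion is biased for noisy parallaxes and
# undefined for non-positive ones; see Luri et al. (2018).

def naive_distance_pc(parallax_mas):
    if parallax_mas <= 0:
        raise ValueError("non-positive parallax: inversion undefined")
    return 1000.0 / parallax_mas

print(naive_distance_pc(10.0))  # 100.0 pc
```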
Obstacles to the Access, Use and Transfer of Information from Archives: A RAMP Study.
ERIC Educational Resources Information Center
Duchein, Michel
This publication reviews means of access to information contained in the public archives (current administrative documents and archival records) and private archives (manuscripts of personal or family origin) of many countries and makes recommendations for improving access to archival information. Sections describe: (1) the origin and development…
Current status of the international Halley Watch infrared net archive
NASA Technical Reports Server (NTRS)
Mcguinness, Brian B.
1988-01-01
The primary purposes of the Halley Watch have been to promote Halley observations, coordinate and standardize the observing where useful, and to archive the results in a database readily accessible to cometary scientists. The intention of IHW is to store the observations themselves, along with any information necessary to allow users to understand and use the data, but to exclude interpretations of these data. Each of the archives produced by the IHW will appear in two versions: a printed archive and a digital archive on CD-ROMs. The archive is expected to have a very long lifetime. The IHW has already produced an archive for P/Crommelin. This consists of one printed volume and two 1600 bpi tapes. The Halley archive will contain at least twenty gigabytes of information.
NASA Astrophysics Data System (ADS)
Kluiving, Sjoerd; De Ridder, Tim; van Dasselaar, Marcel; Prins, Maarten
2017-04-01
In Medieval times the city of Vlaardingen (the Netherlands) was strategically located on the confluence of three rivers, the Meuse, the Merwede, and the Vlaarding. A church of the early 8th century was already located here. In a short period of time Vlaardingen developed into an international trading place, the most important place in the former county of Holland. Starting from the 11th century the river Meuse threatened to flood the settlement. These floods were registered in the archives of the fluvisol and were recognised in a multidisciplinary sedimentary analysis of these archives. To secure the future of this vulnerable soil archive, an extensive interdisciplinary research effort (76 mechanical drill holes, grain-size analysis (GSA), thermo-gravimetric analysis (TGA), archaeological remains, soil analysis, dating methods, micromorphology, and microfauna) began in 2011 to gain knowledge of the sedimentological and pedological subsurface of the mound as well as of the well-preserved nature of the archaeological evidence. Pedogenic features are recorded with soil-descriptive, micromorphological, and geochemical (XRF) analyses. The soil sequence, 5 meters thick, exhibits a complex mix of 'natural' as well as 'anthropogenic' layering and initial soil formation that makes it possible to distinguish relatively stable periods from periods of active sedimentation. In this paper the results of this large-scale project are demonstrated in a number of cross-sections with interrelated geological, pedological, and archaeological stratification. The distinction between natural and anthropogenic layering is made on the occurrence of the chemical elements phosphorus and potassium. A series of four stratigraphic/sedimentary units records the period before and after the flooding disaster. Given the many archaeological remnants and features present in the lower units, we know that the medieval landscape was drowned while it was inhabited in the 12th century AD.
After a final drowning phase in the 13th century, and as a reaction to it, inhabitants started to raise the surface (Kluiving et al., 2016). In this presentation we discuss new coring and micromorphological data from the city center of Vlaardingen, and we aim to fine-tune the flooding history of the town in the Late Medieval period through two approaches: (1) combining micromorphological results with new coring data; (2) testing the archaeostratigraphical model of Vlaardingen Stadshart (Kluiving et al., 2016), focussing on Late Medieval fluvial systems 2, 3, and 3.1. Reference: Kluiving, S.J., Ridder, T. de, Dasselaar, M. van, Roozen, S., and Prins, M. 2016. Soil archives of a Fluvisol: subsurface analysis and soil history of the medieval city centre of Vlaardingen, the Netherlands - an integral approach. SOIL, 2, 271-285, 2016. doi:10.5194/soil-2-271-2016
Redescription of Alatina alata (Reynaud, 1830) (Cnidaria: Cubozoa) from Bonaire, Dutch Caribbean
LEWIS, CHERYL; BENTLAGE, BASTIAN; YANAGIHARA, ANGEL; GILLAN, WILLIAM; VAN BLERK, JOHAN; KEIL, DANIEL P.; BELY, ALEXANDRA E.; COLLINS, ALLEN G.
2016-01-01
Here we establish a neotype for Alatina alata (Reynaud, 1830) from the Dutch Caribbean island of Bonaire. The species was originally described one hundred and eighty-three years ago as Carybdea alata in La Centurie Zoologique, a monograph published by René Primevère Lesson during the age of worldwide scientific exploration. While monitoring monthly reproductive swarms of A. alata medusae in Bonaire, we documented the ecology and sexual reproduction of this cubozoan species. Examination of forty-six A. alata specimens and additional archived multimedia material in the collections of the National Museum of Natural History, Washington, DC, revealed that A. alata is found at depths ranging from surface waters to 675 m. Additional studies have reported it at depths of up to 1607 m in the tropical and subtropical Atlantic Ocean. Herein, we resolve the taxonomic confusion long associated with A. alata due to a lack of detail in the original description and conflicting statements in the scientific literature. A new cubozoan character, the velarial lappet, is described for this taxon. The complete description provided here serves to stabilize the taxonomy of the second-oldest box jellyfish species and provide a thorough redescription of the species. PMID:25112765
Operational Processing of Ground Validation Data for the Tropical Rainfall Measuring Mission
NASA Technical Reports Server (NTRS)
Kulie, Mark S.; Robinson, Mike; Marks, David A.; Ferrier, Brad S.; Rosenfeld, Danny; Wolff, David B.
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. A primary goal of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented for this mission. A key component of GV is the analysis and quality control of meteorological ground-based radar data from four primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, the Joint Center for Earth Systems Technology (JCET) at the University of Maryland, Baltimore County, has been tasked with developing and implementing an operational system to quality control (QC), archive, and provide data for subsequent rainfall product generation from the four primary GV sites. This paper provides an overview of the JCET operational environment. A description of the QC algorithm and its performance, in addition to the data flow procedure between JCET and the TRMM Science and Data Information System (TSDIS), is presented. The impact of quality-controlled data on higher-level rainfall and reflectivity products will also be addressed. Finally, a brief description of JCET's expanded role in producing reference rainfall products will be discussed.
Let’s have a coffee with the Standard Model of particle physics!
NASA Astrophysics Data System (ADS)
Woithe, Julia; Wiener, Gerfried J.; Van der Veken, Frederik F.
2017-05-01
The Standard Model of particle physics is one of the most successful theories in physics and describes the fundamental interactions between elementary particles. It is encoded in a compact description, the so-called ‘Lagrangian’, which even fits on t-shirts and coffee mugs. This mathematical formulation, however, is complex and only rarely makes it into the physics classroom. Therefore, to support high school teachers in their challenging endeavour of introducing particle physics in the classroom, we provide a qualitative explanation of the terms of the Lagrangian and discuss their interpretation based on associated Feynman diagrams.
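The "coffee mug" form of the Lagrangian that the abstract refers to is usually written in the following condensed notation (conventions and index placement vary between presentations; this is one common rendering, not necessarily the exact form used in the article):

```latex
\mathcal{L} = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu}
            + i \bar{\psi} \gamma^{\mu} D_{\mu} \psi
            + \bar{\psi}_i \, y_{ij} \, \psi_j \, \phi + \mathrm{h.c.}
            + \left| D_{\mu} \phi \right|^{2} - V(\phi)
```

Read term by term: the gauge-field kinetic term, the fermion kinetic and gauge-interaction term, the Yukawa couplings of fermions to the Higgs field \(\phi\) (plus Hermitian conjugate), and the Higgs kinetic term and potential.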
Ryder, Robert T.; Trippi, Michael H.; Ruppert, Leslie F.; Ryder, Robert T.
2014-01-01
The appendixes in chapters E.4.1 and E.4.2 include (1) Log ASCII Standard (LAS) files, which encode gamma-ray, neutron, density, and other logs in text files that can be used by most well-logging software programs; and (2) graphic well-log traces. In the appendix to chapter E.4.1, the well-log traces are accompanied by lithologic descriptions with formation tops.
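Because LAS files are plain text with `~`-named sections, the curve traces they encode can be read with a few lines of code. A minimal sketch, assuming a toy LAS 2.0 record invented for illustration (the section layout follows the LAS specification, but real USGS files carry many more header fields):

```python
# Minimal sketch of reading the curve definitions (~Curve) and data
# rows (~ASCII) of a Log ASCII Standard (LAS 2.0) file. The record
# below is a made-up example, not one of the chapter appendix files.

LAS_TEXT = """~Version
VERS.  2.0 : CWLS Log ASCII Standard
~Curve
DEPT.FT    : Depth
GR  .GAPI  : Gamma ray
NPHI.V/V   : Neutron porosity
~ASCII
1000.0  55.2  0.21
1000.5  60.8  0.19
1001.0  48.3  0.23
"""

def parse_las(text):
    """Return {curve_mnemonic: [values]} from a LAS 2.0 string."""
    curves, rows = [], []
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("~"):
            section = line[1].upper()   # first letter names the section
            continue
        if section == "C":              # ~Curve: mnemonic ends at the dot
            curves.append(line.split(".", 1)[0].strip())
        elif section == "A":            # ~ASCII: whitespace-delimited values
            rows.append([float(v) for v in line.split()])
    return {name: [row[i] for row in rows] for i, name in enumerate(curves)}

logs = parse_las(LAS_TEXT)
print(logs["GR"])   # gamma-ray trace as a list of floats
```

Dedicated well-logging packages do this (and handle null values, wrapped lines, and unit conversions); the sketch only shows why the format is convenient for software interchange.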
ERIC Educational Resources Information Center
Choudhury, G. Sayeed; DiLauro, Tim; Droettboom, Michael; Fujinaga, Ichiro; MacMillan, Karl; Nelson, Michael L.; Maly, Kurt; Thibodeau, Kenneth; Thaller, Manfred
2001-01-01
These articles describe the experiences of the Johns Hopkins University library in digitizing their collection of sheet music; motivation for buckets, Smart Object, Dumb Archive (SODA) and the Open Archives Initiative (OAI), and initial experiences using them in digital library (DL) testbeds; requirements for archival institutions, the National…
Learning patterns of life from intelligence analyst chat
NASA Astrophysics Data System (ADS)
Schneider, Michael K.; Alford, Mark; Babko-Malaya, Olga; Blasch, Erik; Chen, Lingji; Crespi, Valentino; HandUber, Jason; Haney, Phil; Nagy, Jim; Richman, Mike; Von Pless, Gregory; Zhu, Howie; Rhodes, Bradley J.
2016-05-01
Our Multi-INT Data Association Tool (MIDAT) learns patterns of life (POL) of a geographical area from video analyst observations called out in textual reporting. Typical approaches to learning POLs from video use computer vision algorithms to extract the locations in space and time of various activities, and are therefore subject to the detection and tracking performance of the video processing algorithms. Numerous examples exist of human analysts monitoring live video streams and annotating, or "calling out," relevant entities and activities: security analysis, crime-scene forensics, news reports, and sports commentary. These descriptions are typically captured as text, such as chat. Although the primary purpose of these text products is to describe events as they happen, organizations typically archive the reports for extended periods, and this archive provides a basis for building POLs. Such POLs are useful for diagnosis, assessing activities in an area against historical context, and for consumers of the products, who gain an understanding of historical patterns. MIDAT combines natural language processing, multi-hypothesis tracking, and Multi-INT Activity Pattern Learning and Exploitation (MAPLE) technologies in an end-to-end lab prototype that processes textual products produced by video analysts, infers POLs, and highlights anomalies relative to those POLs, with links to "tracks" of related activities performed by the same entity. MIDAT performs well, achieving, for example, a 90% F1-value on extracting activities from the textual reports.
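The F1-value quoted above is the harmonic mean of precision and recall on the extraction task. A minimal sketch of the metric; the counts below are illustrative only, chosen so the score comes out to the reported 0.90, and are not taken from the paper:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts of
    true-positive, false-positive, and false-negative extractions."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 correct activity extractions, 10 spurious,
# 10 missed -> precision = recall = F1 = 0.90.
print(f1_score(tp=90, fp=10, fn=10))
```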
NASA Astrophysics Data System (ADS)
Tsvetkov, M. K.; Stavrev, K. Y.; Tsvetkova, K. P.; Semkov, E. H.; Mutatov, A. S.
The Wide-Field Plate Database (WFPDB) and the possibilities for its application as a research tool in observational astronomy are presented. Currently the WFPDB comprises descriptive data for 400,000 archival wide-field photographic plates obtained with 77 instruments, from a total of 1,850,000 photographs stored in 269 astronomical archives all over the world since the end of the 19th century. The WFPDB is already accessible to the astronomical community, at present only in batch mode through user requests sent by e-mail. We are working on on-line interactive access to the data via the Internet from Sofia and, in parallel, from the Centre de Donnees Astronomiques de Strasbourg. (Initial information can be found on the World Wide Web homepage at URL http://www.wfpa.acad.bg.) The WFPDB may be useful in studies of a variety of astronomical objects and phenomena, and especially for long-term investigations of variable objects and for multi-wavelength research. We have analysed the data in the WFPDB in order to derive the overall characteristics of the totality of wide-field observations, such as the sky coverage and the distributions by observation time and date, by spectral band, and by object type. We have also examined the totality of wide-field observations from the point of view of their quality, availability, and digitisation. The usefulness of the WFPDB is demonstrated by the results of the identification and investigation of the photometric behaviour of optical counterparts of gamma-ray bursts.
A study of 1177 odontogenic lesions in a South Kerala population
Deepthi, PV; Beena, VT; Padmakumar, SK; Rajeev, R; Sivakumar, R
2016-01-01
Context: A study on odontogenic cysts and tumors. Aims: The aim of this study is to determine the frequency of odontogenic cysts and tumors and their distribution according to age, gender, site, and histopathologic type among cases reported over the period 1998–2012 at a tertiary health care center in South Kerala. Settings and Design: The archives of the Department of Oral Pathology and Microbiology were retrospectively analyzed. Subjects and Methods: Archival records were reviewed and all cases of odontogenic cysts and tumors from 1998 to 2012 were retrieved. Statistical Analysis Used: Descriptive statistical analysis was performed using IBM SPSS (Statistical Package for the Social Sciences) version 16. Results: Of 7117 oral biopsies, 4.29% were odontogenic tumors. Ameloblastoma was the most common odontogenic tumor, comprising 50.2% of cases, followed by keratocystic odontogenic tumor (24.3%). These tumors showed a male predilection (1.19:1) and occurred at a mean age of 33.7 ± 16.8 years. The mandible was the most commonly affected jaw (76.07%). Odontogenic cysts constituted 12.25% of all oral biopsies; radicular cysts comprised 75.11% of odontogenic cysts, followed by dentigerous cysts (17.2%). Conclusions: This study showed results both similar and contradictory to those of other studies, probably owing to geographical and ethnic variations yet to be corroborated. PMID:27601809