Sample records for mississippi soybeans metadata

  1. Effects of spring post-planting flooding on early soybean production systems in Mississippi

    USDA-ARS's Scientific Manuscript database

    April planting of early-maturing soybean to avoid late-summer drought and to allow early harvest has become a common management practice in Mississippi. However, most of the early-planted soybeans on Sharkey clay soils in Mississippi are often exposed to waterlogged conditions during the early sprin...

  2. Evaluating soybean cultivars for resistance to Phomopsis seed decay in Mississippi

    USDA-ARS's Scientific Manuscript database

    Phomopsis seed decay (PSD) of soybean reduces seed quality, germination and seedling vigor. PSD has been problematic in most soybean production areas including Mississippi (MS). Planting resistant cultivars is one of the most effective means to control PSD. However, very few soybean cultivars resis...

  3. Tree Species-Soil Relationships on Marginal Soybean Lands in the Mississippi Delta

    Treesearch

    John W. Groninger; W. Michael Aust; Masato Miwa; John A. Stanturf

    1999-01-01

    In the Mississippi Alluvial Plain, marginal soybean lands are those lands that are frequently flooded and have relatively low average soybean yields. These marginal farmlands might be regenerated to bottomland hardwood species if species-site relationships and silvicultural systems were better developed. Cost-effective establishment and management of these stands...

  4. First report of soybean pest, Euschistus quadrator (Hemiptera: Pentatomidae) in Mississippi

    USDA-ARS's Scientific Manuscript database

    Here we report on the first state and county record of Euschistus quadrator Ralston (Hemiptera: Pentatomidae) in Washington County, Mississippi. The species has been documented from Honduras to Virginia primarily on soybeans, cotton, various row crops, fruit, and non-crop hosts. The local impact...

  5. Distribution of the long-horned beetle, Dectes texanus, in soybeans of Missouri, Western Tennessee, Mississippi, and Arkansas.

    PubMed

    Tindall, Kelly V; Stewart, Scott; Musser, Fred; Lorenz, Gus; Bailey, Wayne; House, Jeff; Henry, Robert; Hastings, Don; Wallace, Milus; Fothergill, Kent

    2010-01-01

    The long-horned beetle, Dectes texanus LeConte (Coleoptera: Cerambycidae), is a stem-boring pest of soybeans, Glycine max (L.) Merrill (Fabales: Fabaceae). Soybean stems and stubble were collected from 131 counties in Arkansas, Mississippi, Missouri, and Tennessee and dissected to determine D. texanus infestation rates. All states sampled had D. texanus present in soybeans. Data from Tennessee and Arkansas showed sample infestations of D. texanus averaging nearly 40%. Samples from Missouri revealed higher infestation in the twelve southeastern counties compared to the rest of the state. Data from Mississippi suggested that D. texanus is not as problematic there as in Arkansas, Missouri, and Tennessee. Infestation rates from individual fields varied greatly (0-100%) within states. In Tennessee, second crop soybeans (i.e. soybeans planted following winter wheat) had lower infestations than full season soybeans. A map of pest distribution is presented that documents the extent of the problem, provides a baseline from which changes can be measured, contributes data for emergency registration of pesticides for specific geographic regions, and provides useful information for extension personnel, crop scouts, and growers.

  6. Distribution of the Long-Horned Beetle, Dectes texanus, in Soybeans of Missouri, Western Tennessee, Mississippi, and Arkansas

    PubMed Central

    Tindall, Kelly V.; Stewart, Scott; Musser, Fred; Lorenz, Gus; Bailey, Wayne; House, Jeff; Henry, Robert; Hastings, Don; Wallace, Milus; Fothergill, Kent

    2010-01-01

    The long-horned beetle, Dectes texanus LeConte (Coleoptera: Cerambycidae), is a stem-boring pest of soybeans, Glycine max (L.) Merrill (Fabales: Fabaceae). Soybean stems and stubble were collected from 131 counties in Arkansas, Mississippi, Missouri, and Tennessee and dissected to determine D. texanus infestation rates. All states sampled had D. texanus present in soybeans. Data from Tennessee and Arkansas showed sample infestations of D. texanus averaging nearly 40%. Samples from Missouri revealed higher infestation in the twelve southeastern counties compared to the rest of the state. Data from Mississippi suggested that D. texanus is not as problematic there as in Arkansas, Missouri, and Tennessee. Infestation rates from individual fields varied greatly (0–100%) within states. In Tennessee, second crop soybeans (i.e. soybeans planted following winter wheat) had lower infestations than full season soybeans. A map of pest distribution is presented that documents the extent of the problem, provides a baseline from which changes can be measured, contributes data for emergency registration of pesticides for specific geographic regions, and provides useful information for extension personnel, crop scouts, and growers. PMID:21062147

  7. Simulating soybean productivity under rainfed conditions for major soil types using APEX model in East Central Mississippi

    Treesearch

    Bangbang Zhang; Gary Feng; John J. Read; Xiangbin Kong; Ying Ouyang; Ardeshir Adeli; Johnie N. Jenkins

    2016-01-01

    Knowledge of soybean yield constraints under rainfed conditions on major soil types in East Central Mississippi would assist growers in the region to effectively utilize the benefits of water/irrigation management. The objectives of this study were to use the Agricultural Policy/Environmental eXtender (APEX) agro-ecosystem model to simulate rainfed soybean grain yield (...

  8. Growth Predictions for Tree Species Planted on Marginal Soybean Lands in the Lower Mississippi Valley

    Treesearch

    J.W. Groninger; W.M. Aust; M. Miwa; John A. Stanturf

    2000-01-01

    The establishment of bottomland hardwood forest stands and riparian buffers on frequently-flooded soybean (Glycine max) lands in the Lower Mississippi Valley represents a tremendous opportunity to provide both economic and environmental benefits to the region. Selecting appropriate sites for reestablishing tree cover, accurately predicting the productivity of planted...

  9. Effects of Boron foliar-fertilization on irrigated soybean (Glycine max L. Merr.) in the Mississippi River Valley Delta of the midsouth, USA

    USDA-ARS's Scientific Manuscript database

    Irrigated soybeans in the Mississippi Delta have been reported to respond with increased seed yields when fertilized with boron (B). Furrow-irrigated soybean cultivars were foliar fertilized with a B solution at growth stages R3 and/or R5. No consistent trends in yield or seed weight were noted. No phy...

  10. Growing season variability in carbon dioxide exchange of irrigated and rainfed soybean in the southern United States

    USDA-ARS's Scientific Manuscript database

    Measurement of carbon dynamics of soybean (Glycine max L.) ecosystems outside the Corn Belt of the United States (U.S.) is lacking. This study reports carbon dioxide (CO2) fluxes from a rainfed soybean field in El Reno, Oklahoma and an irrigated soybean field in Stoneville, Mississippi during the 2016 g...

  11. First report of Colletotrichum chlorophyti causing soybean anthracnose

    USDA-ARS's Scientific Manuscript database

    Anthracnose of soybean is caused by several Colletotrichum species. Petiole samples were collected from Alabama, Illinois, and Mississippi. Diseased tissues suspected of being caused by Colletotrichum species were cut into 1-2 cm in lengths, surface-disinfested, and placed on water agar. Pure cultur...

  12. Multiplex real-time PCR detection and differentiation of Colletotrichum species infecting soybean

    USDA-ARS's Scientific Manuscript database

    Colletotrichum species are fungal plant pathogens of worldwide significance. We isolated Colletotrichum species from soybean [Glycine max (L.) Merr.] with anthracnose symptoms in the U.S. states of Alabama, Arkansas, Illinois, Mississippi, and North Dakota from 2009 to 2013. Thirty-five strains from...

  13. Simplified Metadata Curation via the Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Pilone, D.

    2015-12-01

    The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance as to how to easily change their metadata in order to improve their quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant and high quality metadata in a short amount of time.
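
    As a rough illustration of the kind of collection-level record the MMT manages, the sketch below builds a minimal UMM-C-style record as JSON before any translation to ISO 19115. The field names (ShortName, Version, EntryTitle, Abstract) follow the public UMM-C model, but the values are invented and the record is far from complete.

    ```python
    # Minimal sketch of a UMM-C-style collection record expressed as JSON.
    # Field names follow the public UMM-C model (ShortName, Version, EntryTitle,
    # Abstract); the values are placeholders, not a complete, validated record.
    import json

    collection = {
        "ShortName": "SOYBEAN_YIELD_MS",   # hypothetical dataset identifier
        "Version": "1",
        "EntryTitle": "Mississippi soybean yield observations (example)",
        "Abstract": "Illustrative collection-level metadata record.",
    }

    print(json.dumps(collection, indent=2))
    ```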

  14. Predicting structured metadata from unstructured metadata.

    PubMed

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data-defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be the most accurately predicted using the TF-IDF approach followed by LDA both outperforming the majority vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. © The Author(s) 2016. Published by Oxford University Press.
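
    The two approaches described above map naturally onto scikit-learn. The sketch below is a minimal, illustrative version: a TF-IDF pipeline and an LDA-topic pipeline, each feeding a classifier that predicts one structured field. The toy documents and labels are invented, and the paper's actual preprocessing and models are not reproduced here.

    ```python
    # Sketch of the two approaches described above: TF-IDF features vs. LDA topics,
    # each feeding a classifier that predicts a structured metadata term.
    # The toy documents and labels are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["total RNA from mouse liver tissue", "human blood samples, RNA-seq",
            "mouse brain cortex expression profiling", "human liver biopsy microarray"]
    labels = ["mouse", "human", "mouse", "human"]   # e.g. the 'organism' field

    tfidf_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    tfidf_clf.fit(docs, labels)

    lda_clf = make_pipeline(CountVectorizer(),
                            LatentDirichletAllocation(n_components=2, random_state=0),
                            LogisticRegression())
    lda_clf.fit(docs, labels)

    print(tfidf_clf.predict(["RNA extracted from mouse liver"]))
    ```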

  15. Predicting structured metadata from unstructured metadata

    PubMed Central

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data—defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be the most accurately predicted using the TF-IDF approach followed by LDA both outperforming the majority vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/ PMID:28637268

  16. Irrigation initiation timing in soybean grown on sandy soils in Northeast Arkansas

    USDA-ARS's Scientific Manuscript database

    Irrigation initiation timing was evaluated in a furrow-irrigated soybean field with sandy soils in Mississippi County, AR. A major objective of this 2015 study was to validate and expand irrigation timing recommendations that pair plant growth measures with weather cues including use of local weather ...

  17. Nonpoint source contamination of the Mississippi river and its tributaries by herbicides

    USGS Publications Warehouse

    Pereira, W.E.; Hostettler, F.D.

    1993-01-01

    A study of the Mississippi River and its tributaries during July-August 1991, October-November 1991, and April-May 1992 has indicated that the entire navigable reach of the river is contaminated with a complex mixture of agrochemicals and their transformation products derived from nonpoint sources. Twenty-three compounds were identified, including triazine, chloroacetanilide, thiocarbamate, phenylurea, pyridazine, and organophosphorus pesticides. The upper and middle Mississippi River Basin farm lands are major sources of herbicides applied to corn, soybeans, and sorghum. Farm lands in the lower Mississippi River Basin are a major source of rice and cotton herbicides. Inputs of the five major herbicides atrazine, cyanazine, metolachlor, alachlor, and simazine to the Mississippi River are mainly from the Minnesota, Des Moines, Missouri, and Ohio Rivers. Ratios of desethylatrazine/atrazine potentially are useful indicators of groundwater and surface water interactions in the Mississippi River. These ratios suggested that during baseflow conditions, there is a significant groundwater contribution to the river. The Mississippi River thus serves as a drainage channel for pesticide-contaminated surface and groundwater from the midwestern United States. Conservative estimates of annual mass transport indicated that about 160 t of atrazine, 71 t of cyanazine, 56 t of metolachlor, and 18 t of alachlor were discharged into the Gulf of Mexico in 1991.

  18. Metadata Means Communication: The Challenges of Producing Useful Metadata

    NASA Astrophysics Data System (ADS)

    Edwards, P. N.; Batcheller, A. L.

    2010-12-01

    Metadata are increasingly perceived as an important component of data sharing systems. For instance, metadata accompanying atmospheric model output may indicate the grid size, grid type, and parameter settings used in the model configuration. We conducted a case study of a data portal in the atmospheric sciences using in-depth interviews, document review, and observation. Our analysis revealed a number of challenges in producing useful metadata. First, creating and managing metadata required considerable effort and expertise, yet responsibility for these tasks was ill-defined and diffused among many individuals, leading to errors, failure to capture metadata, and uncertainty about the quality of the primary data. Second, metadata ended up stored in many different forms and software tools, making it hard to manage versions and transfer between formats. Third, the exact meanings of metadata categories remained unsettled and misunderstood even among a small community of domain experts -- an effect we expect to be exacerbated when scientists from other disciplines wish to use these data. In practice, we found that metadata problems due to these obstacles are often overcome through informal, personal communication, such as conversations or email. We conclude that metadata serve to communicate the context of data production from the people who produce data to those who wish to use it. Thus while formal metadata systems are often public, critical elements of metadata (those embodied in informal communication) may never be recorded. Therefore, efforts to increase data sharing should include ways to facilitate inter-investigator communication. Instead of tackling metadata challenges only on the formal level, we can improve data usability for broader communities by better supporting metadata communication.

  19. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage for achieving highly available metadata service, as well as better performance improvement in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that brought by the metadata server, while it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render a better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients, when the metadata server has crashed or gone into nonoperational state exceptionally.
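
    A toy sketch of the core idea, in plain Python rather than a real parallel file system: each client keeps an in-memory backup of the metadata requests it has sent, the MDS applies them without journaling, and a crashed MDS is rebuilt by replaying the clients' backups. All names and structures below are illustrative.

    ```python
    # Toy illustration of the log-less idea: clients keep sent metadata requests in
    # memory, the metadata server (MDS) applies them without writing a log, and a
    # crashed MDS is rebuilt by replaying the clients' cached requests.
    class Client:
        def __init__(self):
            self.backup = []            # in-memory backup of sent requests

        def send(self, mds, op, path, value=None):
            self.backup.append((op, path, value))
            mds.handle(op, path, value)

    class MDS:
        def __init__(self):
            self.namespace = {}         # volatile metadata state, no journal

        def handle(self, op, path, value=None):
            if op == "create":
                self.namespace[path] = value
            elif op == "remove":
                self.namespace.pop(path, None)

        def recover(self, clients):
            # replay every client's backup log after a crash
            self.namespace.clear()
            for c in clients:
                for op, path, value in c.backup:
                    self.handle(op, path, value)

    c = Client(); mds = MDS()
    c.send(mds, "create", "/data/file1", {"size": 0})
    mds = MDS(); mds.recover([c])       # simulate crash and recovery
    print(mds.namespace)
    ```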

  20. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage for achieving highly available metadata service, as well as better performance improvement in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that brought by the metadata server, while it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render a better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients, when the metadata server has crashed or gone into nonoperational state exceptionally. PMID:24892093

  1. Rainwater deficit and irrigation demand for row crops in Mississippi Blackland Prairie

    Treesearch

    Gary Feng; Ying Ouyang; Ardeshir Adeli; John Read; Johnie Jenkins

    2018-01-01

    Irrigation research in the mid-south United States has not kept pace with a steady increase in irrigated area in recent years. This study used rainfall records from 1895 to 2016 to determine rainwater deficit and irrigation demand for soybean [Glycine max (L.) Merr.], corn (Zea mays L.), and cotton (Gossypium hirsutum L.) in the Blackland Prairie region of Mississippi...

  2. Evaluation of soybean breeding lines for resistance to Phomopsis seed decay in Stoneville, Mississippi, 2014

    USDA-ARS's Scientific Manuscript database

    Phomopsis seed decay (PSD) of soybean is a major cause of poor seed quality in most soybean production areas, especially in the mid-southern region of the United States. Breeding for PSD-resistance is the most effective long-term strategy to control this disease. To breed soybean lines with resistan...

  3. Metadata for Web Resources: How Metadata Works on the Web.

    ERIC Educational Resources Information Center

    Dillon, Martin

    This paper discusses bibliographic control of knowledge resources on the World Wide Web. The first section sets the context of the inquiry. The second section covers the following topics related to metadata: (1) definitions of metadata, including metadata as tags and as descriptors; (2) metadata on the Web, including general metadata systems,…

  4. Sediment data collected in 2010 from Cat Island, Mississippi

    USGS Publications Warehouse

    Buster, Noreen A.; Kelso, Kyle W.; Miselis, Jennifer L.; Kindinger, Jack G.

    2014-01-01

    Scientists from the U.S. Geological Survey, St. Petersburg Coastal and Marine Science Center, in collaboration with the U.S. Army Corps of Engineers, conducted geophysical and sedimentological surveys in 2010 around Cat Island, Mississippi, which is the westernmost island in the Mississippi-Alabama barrier island chain. The objective of the study was to understand the geologic evolution of Cat Island relative to other barrier islands in the northern Gulf of Mexico by identifying relationships between the geologic history, present day morphology, and sediment distribution. This data series serves as an archive of terrestrial and marine sediment vibracores collected August 4-6 and October 20-22, 2010, respectively. Geographic information system data products include marine and terrestrial core locations and 2007 shoreline data. Additional files include marine and terrestrial core description logs, core photos, results of sediment grain-size analyses, optically stimulated luminescence dating and carbon-14 dating locations and results, Field Activity Collection System logs, and formal Federal Geographic Data Committee metadata.

  5. Creating preservation metadata from XML-metadata profiles

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep data accessible in the future. In addition, many universities and research institutions measure data that are unique and not repeatable, like the data produced by an observational network, and they want to keep these data for future generations. In consequence, such data should be ingested into preservation systems that automatically care for file format changes. Open source preservation software that is developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult because format validators are not only remarkably slow; owing to the variety of file formats, different validators also return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems have a deficit in the support of custom metadata. Furthermore, data producers are sometimes not aware that quality metadata is a key issue for the re-use of data. In the EWIG project, a university institute and a research institute work together with Zuse-Institute Berlin, which acts as an infrastructure facility, to generate exemplary workflows for research data into OAIS-compliant archives with emphasis on the geosciences. The Institute for Meteorology provides time-series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise future scientists' awareness of research data management. As a testbed for ingest workflows the digital preservation system Archivematica [1] is used. During the ingest process metadata is generated that is compliant to the

  6. Metadata (MD)

    Treesearch

    Robert E. Keane

    2006-01-01

    The Metadata (MD) table in the FIREMON database is used to record any information about the sampling strategy or data collected using the FIREMON sampling procedures. The MD method records metadata pertaining to a group of FIREMON plots, such as all plots in a specific FIREMON project. FIREMON plots are linked to metadata using a unique metadata identifier that is...

  7. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    ERIC Educational Resources Information Center

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  8. The New Online Metadata Editor for Generating Structured Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Shrestha, B.; Palanisamy, G.; Hook, L.; Killeffer, T.; Boden, T.; Cook, R. B.; Zolly, L.; Hutchison, V.; Frame, M. T.; Cialella, A. T.; Lazer, K.

    2014-12-01

    Nobody is better suited to "describe" data than the scientist who created it. This "description" of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes it portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via Metadata clearinghouses like Mercury [1] [2]. Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. How is OME helping Big Data Centers like ORNL DAAC? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the ESDIS Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological components of the Earth's environment. Typically data produced, archived and analyzed is at a scale of multiple petabytes, which makes the discoverability of the data very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for and equally difficult to use and understand the data. OME will allow data centers like the ORNL DAAC to produce meaningful, high quality, standards-based, descriptive information about their data products in turn helping with the data discoverability and
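
    A minimal sketch of the form-to-XML step a tool like OME performs: form-style inputs are turned into an XML metadata file with ElementTree. The element names are illustrative and are not the actual schema OME emits.

    ```python
    # Minimal sketch of turning web-form style inputs into an XML metadata file.
    # Element names here are illustrative; they are not the actual schema OME emits.
    import xml.etree.ElementTree as ET

    form = {"project": "Example project", "parameter": "soil moisture",
            "start_date": "2014-01-01", "site": "Example site"}

    root = ET.Element("metadata")
    for field, value in form.items():
        ET.SubElement(root, field).text = value

    ET.ElementTree(root).write("record.xml", encoding="utf-8", xml_declaration=True)
    print(ET.tostring(root, encoding="unicode"))
    ```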

  9. Summary of reported agriculture and irrigation water use in Mississippi County, Arkansas, 1991

    USGS Publications Warehouse

    Holland, T.W.; Manning, C.A.; Stafford, K.L.

    1993-01-01

    This report summarizes the 1991 water-use reporting through the Conservation District Office in Mississippi County, Arkansas. The number of withdrawal registrations for Mississippi County was 981 (946 groundwater and 35 surface water). Water withdrawals reported during the registration process total 0.06 Mgal/d (0.01 Mgal/d groundwater and 0.05 Mgal/d surface water) for agriculture and 97.82 Mgal/d (94.16 Mgal/d groundwater and 3.66 Mgal/d surface water) for irrigation. The registration reports for 1991 indicate that this water was applied to 109,345 acres of land to irrigate rice, corn, soybeans, milo, cotton, hay, vegetables, berries, and sod as well as for the agricultural use of animal aquaculture.

  10. Evaluation of soybean commercial varieties for resistance to Phomopsis seed decay in the Mississippi Delta, 2012

    USDA-ARS's Scientific Manuscript database

    Soybean Phomopsis seed decay (PSD), primarily caused by Phomopsis longicolla, is a major cause of poor seed quality in the United States, especially in the mid-southern region. To identify new sources of soybean lines resistant to PSD, 16 commercial soybean varieties (MG IV and MG V) were planted on ...

  11. Afforesting agricultural lands in the Mississippi Alluvial Valley (USA): effects of silvicultural methods on understory plant diversity

    Treesearch

    Diane De Steven; Callie J. Schweitzer; Steven C. Hughes; John A. Stanturf

    2015-01-01

    To compare methods for bottomland hardwood reforestation on marginal farmlands in the Mississippi Alluvial Valley, four afforestation treatments (natural colonization, sown oak acorns, planted oak seedlings, cottonwood–oak interplant) were established in 1995 on former soybean cropland. Natural, sown, and planted-oak plots were not managed after establishment....

  12. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project [GHRSSTPP]. These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.
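
    A sketch of the layout described above, using sqlite3 in place of MySQL: collection-level DSD records and granule-level FR records live in separate tables joined on a common field. Table and column names are illustrative, not the MMR's actual schema.

    ```python
    # Sketch of the described layout: collection-level DSDs and granule-level FRs
    # in separate tables joined on a common field. Uses sqlite3 instead of MySQL;
    # table and column names are illustrative, not the MMR's actual schema.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE dsd (product_id TEXT PRIMARY KEY, title TEXT);
        CREATE TABLE fr  (file_name TEXT, product_id TEXT, start_time TEXT,
                          FOREIGN KEY (product_id) REFERENCES dsd(product_id));
    """)
    db.execute("INSERT INTO dsd VALUES ('SST_L2P_EXAMPLE', 'Example SST product')")
    db.execute("INSERT INTO fr VALUES ('sst_20070101.nc', 'SST_L2P_EXAMPLE', '2007-01-01T00:00:00Z')")

    # find all granules for a product, together with its collection-level title
    rows = db.execute("""SELECT fr.file_name, dsd.title FROM fr
                         JOIN dsd ON fr.product_id = dsd.product_id""").fetchall()
    print(rows)
    ```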

  13. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Shrestha, Biva; Palanisamy, Giri

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes it portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via Metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to

  14. Evolutions in Metadata Quality

    NASA Astrophysics Data System (ADS)

    Gilman, J.

    2016-12-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This talk will cover how we encourage metadata authors to improve the metadata through the use of integrated rubrics of metadata quality and outreach efforts. In addition we'll demonstrate Humanizers, a technique for dealing with the symptoms of metadata issues. Humanizers allow CMR administrators to identify specific metadata issues that are fixed at runtime when the data is indexed. An example Humanizer is the aliasing of processing level "Level 1" to "1" to improve consistency across collections. The CMR currently indexes 35K collections and 300M granules.
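
    A minimal sketch of the humanizer idea: alias rules applied when a record is indexed, so that values such as "Level 1" and "1" are searched consistently. The rule table below is illustrative, not the CMR's actual configuration.

    ```python
    # Minimal sketch of the "humanizer" idea: alias rules applied when a record is
    # indexed so that inconsistent values are searched consistently. The rule table
    # is illustrative, not the CMR's actual configuration.
    PROCESSING_LEVEL_ALIASES = {"Level 1": "1", "Level 2": "2", "L1": "1"}

    def humanize(record):
        level = record.get("processing_level")
        record["processing_level"] = PROCESSING_LEVEL_ALIASES.get(level, level)
        return record

    print(humanize({"short_name": "EXAMPLE", "processing_level": "Level 1"}))
    ```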

  15. CMR Metadata Curation

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Bugbee, Kaylin

    2017-01-01

    This talk explains the ongoing metadata curation activities in the Common Metadata Repository. It explores tools that exist today which are useful for building quality metadata and also opens up the floor for discussions on other potentially useful tools.

  16. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    NASA Technical Reports Server (NTRS)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.

  17. Mid-winter food use and body weights of mallards and wood ducks in Mississippi

    USGS Publications Warehouse

    Delnicki, D.; Reinecke, K.J.

    1986-01-01

    We obtained esophageal food samples from 311 mallards (Anas platyrhynchos) and 94 wood ducks (Aix sponsa) and body weights from 2,118 mallards and 315 wood ducks in western Mississippi during December and January 1979-83. On average, mallards ingested 3.0% animal food, principally aquatic invertebrates, and 97.0% plant food. Rice, soybeans, and seeds of 'moist soil' plants provided 41.3, 41.6, and 10-11% of the total food intake. Wood ducks ingested nearly 100% plant food, of which 23.4% was soybeans and 74.3% was acorns from Nuttall (Quercus nuttallii), water (Q. nigra), and willow oaks (Q. phellos). Mallard food use varied with water conditions; the use of rice decreased and soybeans increased during 1980-81 when cumulative November-January precipitation was < 50% of normal. Wood duck food use varied with habitat; the diet included more acorns at sites having larger acreages of intact bottomland hardwood forest. Mallard and wood duck body weights varied within and among winters. Mallard weights decreased by about 2% from December to January each year. We considered this a regulated loss, whereas we attributed increases and decreases of 4-5% in average weights during wet and dry winters to changes in feeding opportunities associated with winter precipitation. Wood duck weights followed similar trends. We concluded that continued drainage in the Mississippi Delta will adversely affect waterfowl foraging opportunities, and that research on winter feeding ecology will progress more rapidly if we develop an understanding of the foraging efficiencies associated with alternate food resources.

  18. Modeling effects of climatological variability and management practices on conservation of groundwater from the Mississippi River Valley Shallow Alluvial Aquifer in the Mississippi Delta region

    NASA Astrophysics Data System (ADS)

    Thornton, Robert Frank

    Ninety-eight percent of water taken from the Mississippi River Shallow Alluvial Aquifer, hereafter referred to as "the aquifer" or "MRVA," is used by the agricultural industry for irrigation. Mississippi Delta agriculture is increasingly using more water from the MRVA and the aquifer has been losing about 300,000 acre-feet per year. This research expands on previous work in which a model was developed that simulates the effects of climatic variability, crop acreage changes, and specific irrigation methods on consequent variations in the water volume of the MRVA. This study corrects an identified problem by replacing total growing season precipitation with an irrigation demand driver based on evaporation and crop coefficients and changing the time scale from the entire growing season to a daily resolution. The calculated irrigation demand, as a climatological driver for the model, captures effective precipitation more precisely than the initial growing season precipitation driver. Predictive equations resulting from regression analyses of measured versus calculated irrigation water use showed R2 and correlations of 0.33 and 0.57, 0.77 and 0.88, 0.71 and 0.84, and 0.68 and 0.82 for cotton, corn, soybeans and rice, respectively. Ninety-five percent of the predicted values fall within a range of + or - about 23,000 acre-feet, an error of about 10-percent. The study also adds an additional conservation strategy through the use of surface water from on-farm reservoirs in lieu of groundwater. Analyses show that climate could provide the entire water need of the plants in 70-percent of the years for corn, 65-percent of the years for soybeans and cotton, and even 5-percent of the years for rice. Storing precipitation in on-farm structures is an effective way to reduce reliance of Delta producers on groundwater. If producers adopted, at a minimum, the 97.5:2.5 ratio suggested management practice, this minimal management strategy could potentially conserve 48-percent, 35
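
    For illustration only, the sketch below shows the general shape of a daily irrigation-demand calculation of the kind the abstract describes: crop water use estimated from a crop coefficient and reference evaporation, offset by precipitation. The coefficients and daily values are invented and this is not the dissertation's actual model.

    ```python
    # Rough sketch of a daily irrigation-demand calculation of the general kind the
    # abstract describes: crop water use estimated from a crop coefficient and
    # reference evaporation, offset by effective precipitation. The coefficients and
    # daily values below are invented and are not the dissertation's actual model.
    def daily_irrigation_demand(kc, ref_et_in, precip_in):
        crop_et = kc * ref_et_in                 # crop evapotranspiration, inches
        return max(0.0, crop_et - precip_in)     # demand not met by rainfall

    season = [(0.8, 0.25, 0.0), (0.8, 0.22, 0.4), (1.1, 0.30, 0.0)]  # (Kc, ET0, rain)
    total = sum(daily_irrigation_demand(*day) for day in season)
    print(f"seasonal irrigation demand: {total:.2f} inches")
    ```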

  19. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
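
    A toy sketch of the abstract storage interface idea: multiple metadata servers share one key-value store behind a thin interface, so a request can be handled by any server. This is illustrative Python, not the disclosed system's implementation.

    ```python
    # Toy sketch of the abstract storage interface idea: metadata servers share one
    # key-value store behind a thin interface, so any server can serve a request.
    # This is illustrative Python, not the disclosed system's implementation.
    class KeyValueMetadataStore:
        def __init__(self):
            self._kv = {}
        def put(self, key, value): self._kv[key] = value
        def get(self, key): return self._kv.get(key)

    class MetadataServer:
        def __init__(self, store):
            self.store = store          # shared, low-latency key-value backend
        def set_attr(self, path, attrs): self.store.put(path, attrs)
        def get_attr(self, path): return self.store.get(path)

    shared = KeyValueMetadataStore()
    mds_a, mds_b = MetadataServer(shared), MetadataServer(shared)
    mds_a.set_attr("/proj/run1/output.dat", {"size": 4096, "owner": "alice"})
    print(mds_b.get_attr("/proj/run1/output.dat"))   # visible via either server
    ```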

  20. Harvesting NASA's Common Metadata Repository

    NASA Astrophysics Data System (ADS)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
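
    A small sketch of such a harvest against the public CMR search API, using the requests library. The endpoint and parameters (search/collections.json with page_size and page_num) reflect the public API as commonly documented but should be checked against current CMR documentation; the provider name is a placeholder.

    ```python
    # Small sketch of harvesting collection records from the public CMR search API
    # using the requests library. The endpoint, parameters, and response shape are
    # assumptions based on the commonly documented API; verify against current CMR
    # docs, and replace the provider placeholder with a real one.
    import requests

    CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search/collections.json"

    def harvest(provider, pages=2, page_size=100):
        records = []
        for page in range(1, pages + 1):
            resp = requests.get(CMR_SEARCH, params={"provider": provider,
                                                    "page_size": page_size,
                                                    "page_num": page}, timeout=30)
            resp.raise_for_status()
            records.extend(resp.json()["feed"]["entry"])
        return records

    # example usage: collections = harvest("EXAMPLE_PROVIDER")
    ```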

  1. BASINS Metadata

    EPA Pesticide Factsheets

    Metadata, or data about data, describe the content, quality, condition, and other characteristics of data. Geospatial metadata are critical to data discovery and serve as the fuel for the Geospatial One-Stop data portal.

  2. USGIN ISO metadata profile

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for use of ISO 19115/19/39 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable HTTP URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve for the intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format into text for people to read, then the detailed information could be put in free text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata
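
    A sketch of pulling the core discovery fields out of an ISO 19139 record with ElementTree. The gmd/gco namespace URIs are the standard ones; the element paths reflect common ISO 19139 layouts, and the sample record is heavily abbreviated.

    ```python
    # Sketch of extracting core discovery fields from an ISO 19139 record with
    # ElementTree. The gmd/gco namespace URIs are the standard ones; the element
    # paths reflect common ISO 19139 layouts and the sample record is abbreviated.
    import xml.etree.ElementTree as ET

    NS = {"gmd": "http://www.isotc211.org/2005/gmd",
          "gco": "http://www.isotc211.org/2005/gco"}

    sample = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                                 xmlns:gco="http://www.isotc211.org/2005/gco">
      <gmd:identificationInfo><gmd:MD_DataIdentification>
        <gmd:citation><gmd:CI_Citation>
          <gmd:title><gco:CharacterString>Example dataset</gco:CharacterString></gmd:title>
        </gmd:CI_Citation></gmd:citation>
        <gmd:abstract><gco:CharacterString>Example abstract.</gco:CharacterString></gmd:abstract>
      </gmd:MD_DataIdentification></gmd:identificationInfo>
    </gmd:MD_Metadata>"""

    root = ET.fromstring(sample)
    title = root.find(".//gmd:CI_Citation/gmd:title/gco:CharacterString", NS)
    abstract = root.find(".//gmd:abstract/gco:CharacterString", NS)
    print(title.text, "|", abstract.text)
    ```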

  3. Evaluation of maturity group III soybean lines for resistance to purple seed stain in Mississippi, 2010

    USDA-ARS's Scientific Manuscript database

    Purple seed stain (PSS) of soybean is an important disease caused by Cercospora kikuchii. PSS reduces seed quality and market grade, affects seed germination and vigor, and has been reported wherever soybeans are grown worldwide. In 2009, PSS caused 6.4 million bushels of yield losses in 16 southern...

  4. Evaluation of maturity group IV soybean lines for resistance to purple seed stain in Mississippi, 2010

    USDA-ARS's Scientific Manuscript database

    Purple seed stain (PSS) of soybean is an important disease caused by Cercospora kikuchii. PSS reduces seed quality and market grade, affects seed germination and vigor, and has been reported wherever soybeans are grown worldwide. In 2009, PSS caused 6.4 million bushels of yield losses in 16 southern...

  5. Building Format-Agnostic Metadata Repositories

    NASA Astrophysics Data System (ADS)

    Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will discuss the problems that surround persisting and discovering metadata in multiple formats; a set of tenets that must be addressed in a solution; and NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) proposed approach. In order to facilitate cross-discipline data analysis, Earth Scientists will potentially interact with more than one data source. The most common data discovery paradigm relies on services and/or applications facilitating the discovery and presentation of metadata. What may not be common are the formats in which the metadata are formatted. As the number of sources and datasets utilized for research increases, it becomes more likely that a researcher will encounter conflicting metadata formats. Metadata repositories, such as the EOS ClearingHOuse (ECHO), along with data centers, must identify ways to address this issue. In order to define the solution to this problem, the following tenets are identified: - There exists a set of ‘core’ metadata fields recommended for data discovery. - There exists a set of users who will require the entire metadata record for advanced analysis. - There exists a set of users who will require a ‘core’ set of metadata fields for discovery only. - There will never be a cessation of new formats or a total retirement of all old formats. - Users should be presented metadata in a consistent format. ECHO has undertaken an effort to transform its metadata ingest and discovery services in order to support the growing set of metadata formats. In order to address the previously listed items, ECHO’s new metadata processing paradigm utilizes the following approach: - Identify a cross-format set of ‘core’ metadata fields necessary for discovery. - Implement format-specific indexers to extract the ‘core’ metadata fields into an optimized query capability. - Archive the original metadata in its entirety for presentation to users requiring the full record. - Provide on
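
    A toy sketch of the approach: format-specific indexers map each native record to a shared set of core discovery fields, while the original record is archived verbatim for users who need everything. The formats and field names are invented for illustration.

    ```python
    # Toy sketch of the approach described above: format-specific indexers extract a
    # shared set of "core" discovery fields, while the original record is archived
    # verbatim for users who need the full metadata. Formats and fields are invented.
    def index_dif(record):     # indexer for one hypothetical native format
        return {"title": record["Entry_Title"], "start": record["Temporal_Coverage"]}

    def index_iso(record):     # indexer for a second hypothetical native format
        return {"title": record["citation"]["title"], "start": record["extent"]["begin"]}

    INDEXERS = {"dif": index_dif, "iso": index_iso}
    ARCHIVE, INDEX = [], []

    def ingest(fmt, record):
        ARCHIVE.append((fmt, record))            # keep the full native record
        INDEX.append(INDEXERS[fmt](record))      # plus the core fields for search

    ingest("dif", {"Entry_Title": "Example A", "Temporal_Coverage": "2001-01-01"})
    ingest("iso", {"citation": {"title": "Example B"}, "extent": {"begin": "2005-06-01"}})
    print(INDEX)
    ```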

  6. Harvesting NASA's Common Metadata Repository (CMR)

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Durbin, Chris; Norton, James; Mitchell, Andrew

    2017-01-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.

  7. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    ERIC Educational Resources Information Center

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  8. Archive of Digitized Analog Boomer Seismic Reflection Data Collected from the Mississippi-Alabama-Florida Shelf During Cruises Onboard the R/V Kit Jones, June 1990 and July 1991

    USGS Publications Warehouse

    Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.

    2009-01-01

    In June of 1990 and July of 1991, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework of the Mississippi-Alabama-Florida shelf in the northern Gulf of Mexico, from Mississippi Sound to the Florida Panhandle. Work was done onboard the Mississippi Mineral Resources Institute R/V Kit Jones as part of a project to study coastal erosion and offshore sand resources. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). This report serves as an archive of high-resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata.

  9. Automated Metadata Extraction

    DTIC Science & Technology

    2008-06-01

    provides a means for file owners to add metadata which can then be used by iTunes for cataloging and searching [4]. Metadata can be stored in different...based and contain AAC data formats [3]. Specifically, Apple uses Protected AAC to encode copy-protected music titles purchased from the iTunes Music...Store [4]. The files purchased from the iTunes Music Store include the following metadata. • Name • Email address of purchaser • Year • Album

  10. Assessment of soybean breeding lines for resistance to Phomopsis seed decay from field trials in Stoneville, Mississippi

    USDA-ARS's Scientific Manuscript database

    Phomopsis seed decay (PSD) is one of the most important seed diseases in soybean. A fungal pathogen, Phomopsis longicolla (syn. Diaporthe longicolla), is the primary causal agent of PSD. Planting PSD-resistant soybean cultivars is the most effective strategy to manage this disease. However, few comm...

  11. Metazen - metadata capture for metagenomes.

    PubMed

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-01-01

    As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.
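
    A minimal sketch of the mandatory-field idea: a template defines required fields, extra fields are allowed, and a record passes only when every mandatory field is present and non-empty. The field names below are illustrative, not Metazen's actual templates.

    ```python
    # Minimal sketch of the mandatory-field idea described above: a template defines
    # required fields, extra fields are allowed, and a record passes only when every
    # mandatory field is present and non-empty. Field names are illustrative and are
    # not Metazen's actual templates.
    MANDATORY = {"project_name", "sample_name", "collection_date", "latitude", "longitude"}

    def validate(record):
        missing = [f for f in MANDATORY if not str(record.get(f, "")).strip()]
        return (len(missing) == 0, missing)

    ok, missing = validate({"project_name": "Example survey", "sample_name": "S1",
                            "collection_date": "2014-06-01", "latitude": 33.4})
    print(ok, missing)   # False ['longitude']
    ```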

  12. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes 1) an EOS metadata file in Object Description Language (ODL) and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints from the science community that EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read; it allows users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all of the metadata or part of the metadata, such as Collection metadata, Granule metadata, or Unsupported Metadata. Choices of display styles include 1) Web: a mouseable display with tabs and turn-down menus, 2) Outline: formatted and colored text, suitable for printing, 3) Generic: simple indented text, a direct representation of the underlying ODL metadata, and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step is to convert the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step is to transform the XML to HTML using stylesheets. Both operations are done on the server side. This allows a lot of flexibility in the final result, and is very portable cross-platform. Perl CGI behind the Apache web server is used to run the Java ODL2XML, and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet
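
    A toy two-step sketch mirroring the pipeline described above: a small ODL-like "KEY = value" block is parsed into an element tree and emitted as XML. It is a simplified stand-in for ODL2XML and the XSLT styling, not the real tool.

    ```python
    # Toy two-step sketch mirroring the pipeline described above: parse a small
    # ODL-like "KEY = value" text into an element tree, then emit XML. This is a
    # simplified stand-in for ODL2XML and the XSLT styling, not the real tool.
    import xml.etree.ElementTree as ET

    odl_text = """GROUP = CollectionMetadata
      SHORTNAME = "EXAMPLE_PRODUCT"
      VERSIONID = 1
    END_GROUP = CollectionMetadata
    END"""

    root = ET.Element("odl")
    current = root
    for line in odl_text.splitlines():
        line = line.strip()
        if line.startswith("GROUP ="):
            current = ET.SubElement(root, line.split("=", 1)[1].strip())
        elif line.startswith("END_GROUP") or line == "END":
            current = root
        elif "=" in line:
            key, value = (s.strip() for s in line.split("=", 1))
            ET.SubElement(current, key).text = value.strip('"')

    print(ET.tostring(root, encoding="unicode"))
    ```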

  13. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    A JPEG image embeds far more information than just its graphics. Visualization of its metadata would help digital forensic investigators view embedded data, including in corrupted images where no graphics can be displayed, to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors, and extraction tools are already available, but most focus on visualizing the attribute information of JPEG Exif. However, none consolidate the marker summary, header structure, Huffman tables, and quantization tables in a single program. In this paper, metadata visualization is done by developing a program that summarizes all existing markers, the header structure, the Huffman tables, and the quantization tables in a JPEG file. The results show that this visualization helps view the hidden information within JPEG files more easily.
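
    A small sketch of the marker-summary part of such a tool: walk the JPEG segment structure and list the markers encountered (SOI, APPn, DQT, SOS, ...). The byte string at the bottom is a tiny illustrative prefix, not a decodable image.

    ```python
    # Small sketch of the marker-summary part of such a tool: walk the JPEG segment
    # structure and list the markers found (SOI, APPn, DQT, DHT, SOS, ...). The byte
    # string below is a tiny illustrative prefix, not a decodable image.
    MARKER_NAMES = {0xD8: "SOI", 0xE0: "APP0", 0xE1: "APP1/Exif", 0xDB: "DQT",
                    0xC4: "DHT", 0xC0: "SOF0", 0xDA: "SOS", 0xD9: "EOI"}

    def list_markers(data):
        markers, i = [], 0
        while i + 1 < len(data) and data[i] == 0xFF:
            marker = data[i + 1]
            markers.append(MARKER_NAMES.get(marker, f"0x{marker:02X}"))
            if marker in (0xD8, 0xD9):          # standalone markers: no length field
                i += 2
            else:                               # length field covers itself + payload
                length = int.from_bytes(data[i + 2:i + 4], "big")
                i += 2 + length
            if marker == 0xDA:                  # stop at start of entropy-coded scan
                break
        return markers

    sample = (b"\xff\xd8"                       # SOI
              b"\xff\xe0\x00\x04\x4a\x46"       # APP0, length 4, truncated payload
              b"\xff\xdb\x00\x04\x00\x00"       # DQT, length 4, truncated payload
              b"\xff\xda\x00\x02")              # SOS header (scan data omitted)
    print(list_markers(sample))                 # ['SOI', 'APP0', 'DQT', 'SOS']
    ```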

  14. Mercury Toolset for Spatiotemporal Metadata

    NASA Technical Reports Server (NTRS)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
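
    The harvest-then-index pattern described above can be illustrated with a small sketch: pull JSON metadata records from a set of provider endpoints and build a simple fielded index locally. The endpoint URLs and record fields below are hypothetical, and the toy inverted index stands in for Mercury's actual search infrastructure.

      # Hedged sketch of federated harvesting plus a central index; URLs and
      # record fields are invented for illustration.
      import json, urllib.request
      from collections import defaultdict

      PROVIDERS = [
          "https://example.org/siteA/metadata.json",   # hypothetical endpoints
          "https://example.org/siteB/metadata.json",
      ]

      def harvest(providers):
          """Fetch JSON lists of metadata records; skip unreachable providers."""
          records = []
          for url in providers:
              try:
                  with urllib.request.urlopen(url, timeout=10) as resp:
                      records.extend(json.load(resp))
              except OSError:
                  print(f"warning: could not harvest {url}")
          return records

      def build_index(records, fields=("title", "keywords")):
          """Map lower-cased terms in selected fields to record positions."""
          index = defaultdict(set)
          for i, rec in enumerate(records):
              for field in fields:
                  for term in str(rec.get(field, "")).lower().split():
                      index[term].add(i)
          return index

      def search(index, records, term):
          return [records[i] for i in sorted(index.get(term.lower(), set()))]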

  15. Mercury Toolset for Spatiotemporal Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  16. Estimating the ratio of pond size to irrigated soybeans land in Mississippi: A case study

    USDA-ARS?s Scientific Manuscript database

    Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of pond size to irrigated crop land based on pond metric and its hydrological conditions. Knowledge of this ra...

  17. Evaluating non-relational storage technology for HEP metadata and meta-data catalog

    NASA Astrophysics Data System (ADS)

    Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.

    2016-10-01

    Large-scale scientific experiments produce vast volumes of data. These data are stored, processed, and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software such as Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, recorded in databases, or published via the Web. Processing and analysis of a constantly growing volume of auxiliary metadata is a challenging task, no simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and may lead to end-user inconsistencies in combined information queries. To aggregate and synthesize a range of primary metadata sources, and to enhance them with flexible schema-less addition of aggregated data, we are developing the Data Knowledge Base architecture, which serves as the intelligence behind GUIs and APIs.
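
    The aggregation idea (combining loosely coupled metadata sources into one schema-less record per dataset while keeping conflicts visible) can be illustrated with a short sketch. The source names and fields below are invented; this is not the Data Knowledge Base implementation.

      # Illustrative sketch of aggregating loosely coupled metadata sources into
      # one schema-less record per dataset, tagging each value with its source.
      from collections import defaultdict

      def aggregate(sources):
          """sources: mapping of source name -> list of dicts, each with a 'dataset_id'."""
          combined = defaultdict(dict)
          for source, records in sources.items():
              for rec in records:
                  entry = combined[rec["dataset_id"]]
                  for key, value in rec.items():
                      if key == "dataset_id":
                          continue
                      entry.setdefault(key, []).append({"value": value, "source": source})
          return dict(combined)

      merged = aggregate({
          "workload_mgr": [{"dataset_id": "d1", "status": "done", "events": 1_200_000}],
          "data_mgr":     [{"dataset_id": "d1", "replicas": 3, "status": "archived"}],
      })
      print(merged["d1"]["status"])   # both 'status' values kept, tagged by source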

  18. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    USGS Publications Warehouse

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

    The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose, and facilitate the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning. Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee consisting of bureau-level points of contact to collaborate on the development of more consistent, standardized, and effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes, procedures, and a series of actions defined in three major metadata implementation phases: (1) Getting started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) the Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses

  19. The RBV metadata catalog

    NASA Astrophysics Data System (ADS)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, Universities) that study river and drainage basins. The RBV Metadata Catalogue aims to give a unified view of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we explain the strategy used to design a state-of-the-art catalogue facing this situation. The main features are as follows: - Multiple input methods: metadata records in the catalogue can be entered with the graphical user interface, harvested from an existing catalogue, or imported from an information system through simplified web services. - Hierarchical levels: metadata records may describe an observatory, one of its experimental sites, or a single dataset produced by one instrument. - Multilingualism: metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: the user interface is built with the GWT framework to offer a rich client application with fully Ajax-based navigation. - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.

  20. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    PubMed

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage within ELNs, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings indicate that while some groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users are making little attempt to use it, thereby endangering their ability to recover data in the future. A survey of patterns of metadata use in these notebooks, together with feedback from the user community, indicated that while a few groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users adopt a "minimum required" approach to metadata. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.

  1. Reaction of maturity group IV soybean plant introductions to Phomopsis Seed Decay in Arkansas Mississippi and Missouri 2009

    USDA-ARS?s Scientific Manuscript database

    Soybean Phomopsis seed decay (PSD) causes poor seed quality and suppresses yield in most soybean-growing states in the United States. In 2009, PSD caused over 12 million bushels of yield loss in 16 southern states. The disease is primarily caused by Phomopsis longicolla along with other Phomopsis and Dia...

  2. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    NASA Astrophysics Data System (ADS)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected, aspect of successful archival efforts. However, generating robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first, by automating metadata creation, pulling from known sources of data; and second, as detailed here, by developing friendly software for human interaction with the metadata. MOBE and COBE (Metadata Object Browser and Editor, and Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed and is currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage and is being created with the same considerations in mind as those for MOBE. Metadata creation, viewing, data object creation, and data object viewing are, on a small scale, all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upward to become more robust, new features arise and additional functionality is added to the software written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without introducing additional cognitive load on the user, an end goal we continue to pursue.

  3. Partnerships To Mine Unexploited Sources of Metadata.

    ERIC Educational Resources Information Center

    Reynolds, Regina Romano

    This paper discusses the metadata created for other purposes as a potential source of bibliographic data. The first section addresses collecting metadata by means of templates, including the Nordic Metadata Project's Dublin Core Metadata Template. The second section considers potential partnerships for re-purposing metadata for bibliographic use,…

  4. Reaction of maturity group V soybean plant introductions to Phomopsis Seed Decay in Arkansas Mississippi and Missouri 2009

    USDA-ARS?s Scientific Manuscript database

    In 2009, soybean Phomopsis seed decay (PSD) caused over 12 million bushels of yield loss in 16 southern states. This disease severely affects soybean seed quality through reduced seed viability and oil content and altered seed composition, and it may also increase moldy and/or split seed...

  5. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Astrophysics Data System (ADS)

    Kozimor, J.; Habermann, T.; Powers, L. A.; Gordon, S.

    2016-12-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways. This situation has led to the development of metadata repositories that can ingest and output metadata in multiple dialects. As an operational example, the NASA Common Metadata Repository (CMR) includes three different metadata dialects (DIF, ECHO, and ISO 19115-2). These systems raise a new question for metadata providers: if I have a choice of metadata dialects, which should I use and how do I make that decision? We have developed a collection of metadata evaluation tools that can be used to evaluate metadata records in many dialects for completeness with respect to recommendations from many organizations and communities. We have applied these tools to over 8000 collection and granule metadata records in four different dialects. This large collection of identical content in multiple dialects enables us to address questions about metadata and dialect evolution and to answer those questions quantitatively. We will describe those tools and results from evaluating the NASA CMR metadata collection.
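
    The kind of cross-dialect completeness check described above can be sketched as a small mapping from each dialect to the place where a conceptual field (here, the dataset title) lives. The XPath expressions below are simplified stand-ins, not the authors' evaluation rubrics.

      # Sketch of checking one conceptual recommendation ("a record has a title")
      # against records written in different XML dialects (Python 3.8+ for the
      # '{*}' namespace wildcard). Paths are illustrative stand-ins only.
      import xml.etree.ElementTree as ET

      DIALECT_PATHS = {
          "DIF":  ".//{*}Entry_Title",
          "ISO":  ".//{*}CI_Citation/{*}title",
          "ECHO": ".//{*}DataSetId",
      }

      def has_title(xml_text, dialect):
          root = ET.fromstring(xml_text)
          hit = root.find(DIALECT_PATHS[dialect])
          return hit is not None and (hit.text or "").strip() != ""

      def completeness(records):
          """records: list of (dialect, xml_text); return fraction with a title."""
          hits = sum(has_title(text, dialect) for dialect, text in records)
          return hits / len(records) if records else 0.0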

  6. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, much work has been done to bring together the data systems of the earth sciences. Discovery metadata are disseminated to data portals to allow the building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications, as DataCite [1] does, again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema that shares only a subset of information with the metadata already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe them with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to them. The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available worldwide through the internet. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer-review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructures

  7. Metazen – metadata capture for metagenomes

    DOE PAGES

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; ...

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  8. Metazen – metadata capture for metagenomes

    PubMed Central

    2014-01-01

    Background As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusions Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility. PMID:25780508

  9. Metazen – metadata capture for metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, Jared; Harrison, Travis; Paczian, Tobias

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  10. Sources of nitrate yields in the Mississippi River Basin.

    PubMed

    David, Mark B; Drinkwater, Laurie E; McIsaac, Gregory F

    2010-01-01

    Riverine nitrate N in the Mississippi River leads to hypoxia in the Gulf of Mexico. Several recent modeling studies estimated major N inputs and suggested source areas that could be targeted for conservation programs. We conducted a similar analysis with more recent and extensive data that demonstrate the importance of hydrology in controlling the percentage of net N inputs (NNI) exported by rivers. The average fraction of annual riverine nitrate N export/NNI ranged from 0.05 for the lower Mississippi subbasin to 0.3 for the upper Mississippi River basin and as high as 1.4 (4.2 in a wet year) for the Embarras River watershed, a mostly tile-drained basin. Intensive corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] watersheds on Mollisols had low NNI values and when combined with riverine N losses suggest a net depletion of soil organic N. We used county-level data to develop a nonlinear model of N inputs and landscape factors that were related to winter-spring riverine nitrate yields for 153 watersheds within the basin. We found that river runoff times fertilizer N input was the major predictive term, explaining 76% of the variation in the model. Fertilizer inputs were highly correlated with fraction of land area in row crops. Tile drainage explained 17% of the spatial variation in winter-spring nitrate yield, whereas human consumption of N (i.e., sewage effluent) accounted for 7%. Net N inputs were not a good predictor of riverine nitrate N yields, nor were other N balances. We used this model to predict the expected nitrate N yield from each county in the Mississippi River basin; the greatest nitrate N yields corresponded to the highly productive, tile-drained cornbelt from southwest Minnesota across Iowa, Illinois, Indiana, and Ohio. This analysis can be used to guide decisions about where efforts to reduce nitrate N losses can be most effectively targeted to improve local water quality and reduce export to the Gulf of Mexico.

  11. Estimating the ratio of pond size to irrigated soybean land in Mississippi: a case study

    Treesearch

    Ying Ouyang; G. Feng; J. Read; T. D. Leininger; J. N. Jenkins

    2016-01-01

    Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of on-farm water storage pond size to irrigated crop land based on pond metric and its hydrogeological conditions.  In this study, two simulation scenarios were chosen to...

  12. Utilization of early soybeans for food and reproduction by the tarnished plant bug (Hemiptera: Miridae) in the delta of Mississippi.

    PubMed

    Snodgrass, G L; Jackson, R E; Abel, C A; Perera, O P

    2010-08-01

    Commercially produced maturity group (MG) IV soybeans, Glycine max L., were sampled during bloom for tarnished plant bugs, Lygus lineolaris (Palisot de Beauvois), during May and June 1999 (3 fields) and 2001 (18 fields). The adults and nymphs were found primarily in single population peaks in both years, indicating a single new generation was produced during each year. The peak mean numbers of nymphs were 0.61 and 0.84 per drop cloth sample in 1999 and 2001, respectively. Adults peaked at 3.96 (1999) and 3.76 (2001) per sweep net sample (25 sweeps). Tests using laboratory-reared and field-collected tarnished plant bugs resulted in very poor survival of nymphs on 16 different soybean varieties (MG III, one; IV, four; V, nine; VI, two). A large cage (0.06 ha) field test found that the number of nymphs produced on eight soybean varieties after mated adults were released into the cages was lower than could be expected on a suitable host. These results indicated that soybean was a marginal host for tarnished plant bugs. However, the numbers of adults and nymphs found in the commercially produced fields sampled in the study may have been high enough to cause feeding damage to the flowering soybeans. The nature of the damage and its possible economic importance were not determined. Reproduction of tarnished plant bugs in the commercially produced early soybean fields showed that the early soybeans provided tarnished plant bugs with a very abundant host at a time when only wild hosts were previously available.

  13. Archive of sediment data from vibracores collected in 2010 offshore of the Mississippi barrier islands

    USGS Publications Warehouse

    Kelso, Kyle W.; Flocks, James G.

    2015-01-01

    Selection of the core site locations was based on geophysical surveys conducted around the islands from 2008 to 2010. The surveys, using acoustic systems to image and interpret the near-surface stratigraphy, were conducted to investigate the geologic controls on island evolution. This data series serves as an archive of sediment data collected from August to September 2010, offshore of the Mississippi barrier islands. Data products, including descriptive core logs, core photographs, results of sediment grain-size analyses, sample location maps, and geographic information system (GIS) data files with accompanying formal Federal Geographic Data Committee (FGDC) metadata can be downloaded from the data products and downloads page.

  14. A Solution to Metadata: Using XML Transformations to Automate Metadata

    DTIC Science & Technology

    2010-06-01

    developed their own metadata standards—Directory Interchange Format (DIF), Ecological Metadata Language (EML), and International Organization for...mented all their data using the EML standard. However, when later attempting to publish to a data clearinghouse—such as the Geospatial One-Stop (GOS...construct calls to its transform(s) method by providing the type of the incoming content (e.g., eml), the type of the resulting content (e.g., fgdc) and
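
    The quoted transform(src, dst) interface suggests a simple crosswalk registry keyed by source and target dialect. The sketch below illustrates that idea with lxml and XSLT; the stylesheet file names are hypothetical, and real EML-to-FGDC crosswalks are far more involved.

      # Hedged sketch of a transform(src_type, dst_type) crosswalk lookup using
      # XSLT. Stylesheet file names are hypothetical placeholders.
      from lxml import etree

      CROSSWALKS = {
          ("eml", "fgdc"): "eml2fgdc.xsl",     # hypothetical stylesheet files
          ("dif", "fgdc"): "dif2fgdc.xsl",
      }

      def transform(xml_path, src_type, dst_type):
          if (src_type, dst_type) not in CROSSWALKS:
              raise ValueError(f"no crosswalk from {src_type} to {dst_type}")
          stylesheet = etree.XSLT(etree.parse(CROSSWALKS[(src_type, dst_type)]))
          return stylesheet(etree.parse(xml_path))

      # Example call, assuming the stylesheet and record files exist locally:
      # fgdc_doc = transform("dataset_eml.xml", "eml", "fgdc")
      # print(etree.tostring(fgdc_doc, pretty_print=True).decode())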

  15. Archive of digital chirp subbottom profile data collected during USGS Cruise 13CCT04 offshore of Petit Bois Island, Mississippi, August 2013

    USGS Publications Warehouse

    Forde, Arnell S.; Flocks, James G.; Kindinger, Jack G.; Bernier, Julie C.; Kelso, Kyle W.; Wiese, Dana S.

    2015-01-01

    From August 13-23, 2013, the U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers (USACE), conducted geophysical surveys to investigate the geologic controls on barrier island framework and long-term sediment transport offshore of Petit Bois Island, Mississippi. This investigation is part of a broader USGS study on Coastal Change and Transport (CCT). These surveys were funded through the Mississippi Coastal Improvements Program (MsCIP) with partial funding provided by the Northern Gulf of Mexico Ecosystem Change and Hazard Susceptibility Project. This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (showing a relative increase in signal amplitude) digital images of the seismic profiles are provided.

  16. Descriptive Metadata: Emerging Standards.

    ERIC Educational Resources Information Center

    Ahronheim, Judith R.

    1998-01-01

    Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…

  17. Making Metadata Better with CMR and MMT

    NASA Technical Reports Server (NTRS)

    Gilman, Jason Arthur; Shum, Dana

    2016-01-01

    Ensuring complete, consistent, and high-quality metadata is a challenge for metadata providers and curators. The CMR and MMT systems give providers and curators options to build in metadata quality from the start and also to assess and improve the quality of existing metadata.

  18. Mississippi's forests, 2006

    Treesearch

    Sonja N. Oswalt; Tony G. Johnson; John W. Coulston; Christopher M. Oswalt

    2009-01-01

    Forest land covers 19.6 million acres in Mississippi, or about 65 percent of the land area. The majority of forests are classed as timberland. One hundred and thirty-seven tree species were measured in Mississippi forests in the 2006 inventory. Thirty-six percent of Mississippi's forest land is classified as loblolly-shortleaf pine forest, 27 percent is classified...

  19. Effects of rotation of cotton (Gossypium hirsutum L.) and soybean [Glycine max (L.) Merr.] crops on soil fertility in Elizabeth, Mississippi, USA

    USDA-ARS?s Scientific Manuscript database

    Studies of the effects of cotton (Gossypium hirsutum L.):soybean [Glycine max (L.) Merr.] rotations on soil fertility levels are limited. An irrigated soybean:cotton rotation experiment was conducted from 2012 through 2015 near Elizabeth, MS. Rotation sequences were: continuous soybean, continuous cotton...

  20. Enriched Video Semantic Metadata: Authorization, Integration, and Presentation.

    ERIC Educational Resources Information Center

    Mu, Xiangming; Marchionini, Gary

    2003-01-01

    Presents an enriched video metadata framework including video authorization using the Video Annotation and Summarization Tool (VAST), a video metadata authorization system that integrates both semantic and visual metadata; metadata integration; and user-level applications. Results demonstrated that the enriched metadata were seamlessly…

  1. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantically rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support from a user perspective, but also, underneath, to a deep technology divide, with relational databases often used for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and that data and metadata will become searchable alike, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, ad hoc processing and filtering will no longer be distinct, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present our approach, how it can leverage OGC standards, the benefits envisaged, and first results.

  2. Shorebird use of managed wetlands in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Twedt, Daniel J.; Nelms, Curtis O.; Rettig, Virginia E.; Aycock, S. Ray

    1998-01-01

    We assessed shorebird densities on managed wetland habitats during fall and winter within the primarily agricultural landscape of the Mississippi Alluvial Valley. From November through March, shorebird densities were greater on soybean fields than on rice or moist-soil fields. Killdeer (Charadrius vociferus) and Common Snipe (Gallinago gallinago) were common throughout winter, whereas Yellowlegs (Tringa spp.) and 'peep' sandpipers (Calidris spp.) were present but less abundant. During fall, Dowitchers (Limnodromus spp.), Pectoral Sandpipers (Calidris melanotos), Killdeer, and peep sandpipers were the most abundant species on managed shorebird habitat units. Although shorebird densities were consistently greater on habitats managed by drawing down existing water, we were unable to detect a significant difference in densities from areas managed by flooding previously dry habitat.

  3. Archive of digital Chirp subbottom profile data collected during USGS cruise 08CCT01, Mississippi Gulf Islands, July 2008

    USGS Publications Warehouse

    Forde, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Worley, Charles R.

    2011-01-01

    In July of 2008, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the geologic controls on island framework from Ship Island to Horn Island, Mississippi, for the Northern Gulf of Mexico (NGOM) Ecosystem Change and Hazard Susceptibility project. Funding was provided through the Geologic Framework and Holocene Coastal Evolution of the Mississippi-Alabama Region Subtask (http://ngom.er.usgs.gov/task2_2/index.php); this project is also part of a broader USGS study on Coastal Change and Transport (CCT). This report serves as an archive of unprocessed digital Chirp seismic reflection data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, observer's logbook, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.

  4. METADATA REGISTRY, ISO/IEC 11179

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pon, R K; Buttler, D J

    2008-01-03

    ISO/IEC-11179 is an international standard that documents the standardization and registration of metadata to make data understandable and shareable. This standardization and registration allow for easier locating, retrieving, and transmitting of data from disparate databases. The standard defines how metadata are conceptually modeled and how they are shared among parties, but does not define how data are physically represented as bits and bytes. The standard consists of six parts. Part 1 provides a high-level overview of the standard and defines the basic element of a metadata registry - a data element. Part 2 defines the procedures for registering classification schemes and classifying administered items in a metadata registry (MDR). Part 3 specifies the structure of an MDR. Part 4 specifies requirements and recommendations for constructing definitions for data and metadata. Part 5 defines how administered items are named and identified. Part 6 defines how administered items are registered and assigned an identifier.
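
    As a toy illustration of the standard's central notion, the sketch below models a registered data element (identifier, name, definition, classifications) and a minimal registration step. It is only a reading aid keyed to the parts listed above; a real ISO/IEC 11179 metadata registry models much more structure (contexts, value domains, administration records).

      # Toy sketch of an ISO/IEC 11179-style data element; not a real MDR.
      from dataclasses import dataclass, field

      @dataclass
      class DataElement:
          identifier: str                     # identification, cf. Parts 5 and 6
          name: str                           # naming, cf. Part 5
          definition: str                     # definition rules, cf. Part 4
          classifications: list = field(default_factory=list)   # schemes, cf. Part 2

      registry = {}

      def register(element: DataElement):
          """Minimal 'registration': store the administered item by its identifier."""
          registry[element.identifier] = element
          return element.identifier

      register(DataElement(
          "US-EX-0001",                                   # illustrative identifier
          "soilMoistureContent",
          "Mass of water per unit mass of dry soil, expressed as a percentage.",
          classifications=["soil properties"],
      ))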

  5. Do Community Recommendations Improve Metadata?

    NASA Astrophysics Data System (ADS)

    Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Mecum, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Complete documentation of scientific data is the surest way to facilitate discovery and reuse. What is complete metadata? There are many metadata recommendations from communities like the OGC, FGDC, NASA, and LTER that can provide data documentation guidance for discovery, access, use and understanding. Often, the recommendations that communities develop are for a particular metadata dialect. Two examples of this are the LTER Completeness recommendation for EML and the FGDC Data Discovery recommendation for CSDGM. Can community adoption of a recommendation ensure that what is included in the metadata is understandable to the scientific community and beyond? By applying quantitative analysis to different LTER and USGS metadata collections in DataOne and ScienceBase, we show that community recommendations can improve the completeness of collections over time. Additionally, by comparing communities in DataOne that use the EML and CSDGM dialects but have not adopted the recommendations with the communities that have, the positive effects of recommendation adoption on documentation completeness can be measured.

  6. Reaction of maturity group V soybean lines to purple seed stains in Mississippi 2010

    USDA-ARS?s Scientific Manuscript database

    In 2009, soybean purple seed stain (PSS) caused 6.4 million bushels of yield loss in 16 southern states. This disease severely reduces seed market grade and affects seed germination and vigor. PSS is caused by Cercospora kikuchii and is an economically important disease. To identify new sources of resi...

  7. Managing Complex Change in Clinical Study Metadata

    PubMed Central

    Brandt, Cynthia A.; Gadagkar, Rohit; Rodriguez, Cesar; Nadkarni, Prakash M.

    2004-01-01

    In highly functional metadata-driven software, the interrelationships within the metadata become complex, and maintenance becomes challenging. We describe an approach to metadata management that uses a knowledge-base subschema to store centralized information about metadata dependencies and use cases involving specific types of metadata modification. Our system borrows ideas from production-rule systems in that some of this information is a high-level specification that is interpreted and executed dynamically by a middleware engine. Our approach is implemented in TrialDB, a generic clinical study data management system. We review approaches that have been used for metadata management in other contexts and describe the features, capabilities, and limitations of our system. PMID:15187070

  8. Environmental settings of the South Fork Iowa River basin, Iowa, and the Bogue Phalia basin, Mississippi, 2006-10

    USGS Publications Warehouse

    McCarthy, Kathleen A.; Rose, Claire E.; Kalkhoff, Stephen J.

    2012-01-01

    Studies of the transport and fate of agricultural chemicals in different environmental settings were conducted by the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program's Agricultural Chemicals Team (ACT) at seven sites across the Nation, including the South Fork Iowa River basin in central Iowa and the Bogue Phalia basin in northwestern Mississippi. The South Fork Iowa River basin is representative of midwestern agriculture, where corn and soybeans are the predominant crops and a large percentage of the cultivated land is underlain by artificial drainage. The Bogue Phalia basin is representative of corn, soybean, cotton, and rice cropping in the humid, subtropical southeastern United States. Details of the environmental settings of these basins and the data-collection activities conducted by the USGS ACT over the 2006-10 study period are described in this report.

  9. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.
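
    The value-recommendation idea can be reduced to a small sketch: rank candidate values for a field by how often they were entered before, restricted to terms permitted by an ontology-derived list. The field and term names below are illustrative, not part of the authors' system.

      # Simplified sketch of value recommendation from previously entered metadata,
      # filtered by an ontology-derived list of allowed terms. Terms are illustrative.
      from collections import Counter

      def recommend(field, previous_records, allowed_terms, top_n=3):
          """previous_records: list of dicts of earlier metadata entries."""
          counts = Counter(
              rec[field] for rec in previous_records
              if rec.get(field) in allowed_terms
          )
          return [value for value, _ in counts.most_common(top_n)]

      history = [{"tissue": "liver"}, {"tissue": "liver"}, {"tissue": "lng"},  # typo excluded
                 {"tissue": "kidney"}]
      print(recommend("tissue", history, allowed_terms={"liver", "kidney", "lung"}))
      # -> ['liver', 'kidney']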

  10. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  11. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    ERIC Educational Resources Information Center

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  12. Science friction: data, metadata, and collaboration.

    PubMed

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  13. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep-sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information, such as cruise ID, name of vessel, and research theme, and diving information, such as dive number, name of submersible, and position of the diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel spreadsheet format and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via the "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images, and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is that data types and the representation of metadata differ between websites. To solve these problems, we have developed a cruise metadata organizer (CMO), which connects cruise metadata from the data management database to several distribution websites. CMO comprises three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility with respect to any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  14. Metadata in the Wild: An Empirical Survey of OPeNDAP-accessible Metadata and its Implications for Discovery

    NASA Astrophysics Data System (ADS)

    Hardy, D.; Janée, G.; Gallagher, J.; Frew, J.; Cornillon, P.

    2006-12-01

    The OPeNDAP Data Access Protocol (DAP) is a community standard for sharing scientific data across the Internet. Data providers using DAP have adopted a variety of metadata conventions to improve data utility, such as COARDS (1995) and CF (2003). Our results show, however, that metadata do not follow these conventions in practice. We collected metadata from over a hundred DAP servers, tens of thousands of data objects, and hundreds of collections. We found that a minority claim to adhere to a metadata convention, and a small percentage accurately adhere to their stated convention. We present descriptive statistics of our survey and highlight common traits such as well-populated attributes. Our empirical results indicate that unified search services cannot rely solely on metadata conventions. Although we encourage all providers to adopt a small subset of the CF convention for discovery purposes, we have no evidence to suggest that improved conventions would simplify the fundamental problem of heterogeneity. Large-scale discovery services must find methods for integrating incompatible metadata.
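
    A simplified version of the adherence check described above might inspect a dataset's global attributes for a claimed convention and a handful of expected attributes. The attribute lists below are illustrative and far smaller than the real COARDS or CF requirements.

      # Sketch of a convention-adherence check over a dataset's global attributes.
      # Attribute lists are simplified illustrations, not the full conventions.
      EXPECTED = {
          "CF":     ["Conventions", "title", "institution", "history"],
          "COARDS": ["Conventions", "title", "history"],
      }

      def check_convention(global_attrs):
          claimed = str(global_attrs.get("Conventions", ""))
          family = "CF" if "CF" in claimed else "COARDS" if "COARDS" in claimed else None
          if family is None:
              return {"claimed": None, "missing": None}
          missing = [a for a in EXPECTED[family] if a not in global_attrs]
          return {"claimed": family, "missing": missing}

      print(check_convention({"Conventions": "CF-1.6", "title": "SST analysis"}))
      # -> {'claimed': 'CF', 'missing': ['institution', 'history']}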

  15. Metadata for WIS and WIGOS: GAW Profile of ISO19115 and Draft WIGOS Core Metadata Standard

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Howe, Brian

    2014-05-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority to underpin all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). The development of the WIGOS Operational Information Resource (WIR) is central to the WIGOS Framework Implementation Plan (WIGOS-IP). The WIR shall provide information on WIGOS and its observing components, as well as requirements of WMO application areas. An important aspect is the description of the observational capabilities by way of structured metadata. The Global Atmosphere Watch is the WMO program addressing the chemical composition and selected physical properties of the atmosphere. Observational data are collected and archived by GAW World Data Centres (WDCs) and related data centres. The Task Team on GAW WDCs (ET-WDC) has developed a profile of the ISO19115 metadata standard that is compliant with the WMO Information System (WIS) specification for the WMO Core Metadata Profile v1.3. This profile is intended to harmonize certain aspects of the documentation of observations as well as the interoperability of the WDCs. The Inter-Commission-Group on WIGOS (ICG-WIGOS) has established the Task Team on WIGOS Metadata (TT-WMD) with representation of all WMO Technical Commissions and the objective to define the WIGOS Core Metadata. The result of this effort is a draft semantic standard comprising a set of metadata classes that are considered to be of critical importance for the interpretation of observations relevant to WIGOS. The purpose of the presentation is to acquaint the audience with the standard and to solicit informal feedback from experts in the various disciplines of meteorology and climatology. This feedback will help ET-WDC and TT-WMD to refine the GAW metadata profile and the draft WIGOS metadata standard, thereby increasing their utility and acceptance.

  16. GraphMeta: Managing HPC Rich Metadata in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This 'rich' metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges that the underlying infrastructure must meet to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
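
    The graph model for rich metadata can be illustrated with a minimal sketch in which files, jobs, and users are nodes, provenance relations are edges, and user-defined attributes hang off the nodes. This is only an illustration of the data model, not GraphMeta's partitioned, write-optimized engine.

      # Minimal sketch of graph-structured rich metadata with provenance edges
      # and user-defined node attributes. An illustration, not GraphMeta.
      from collections import defaultdict

      class MetadataGraph:
          def __init__(self):
              self.attrs = defaultdict(dict)    # node -> user-defined attributes
              self.edges = defaultdict(set)     # node -> {(relation, target)}

          def add_node(self, node, **attributes):
              self.attrs[node].update(attributes)

          def add_edge(self, src, relation, dst):
              self.edges[src].add((relation, dst))

          def provenance(self, node, relation="produced_by"):
              """Follow one relation transitively, e.g. to audit how a file was made."""
              chain, frontier = [], [node]
              while frontier:
                  current = frontier.pop()
                  for rel, target in self.edges[current]:
                      if rel == relation:
                          chain.append(target)
                          frontier.append(target)
              return chain

      g = MetadataGraph()
      g.add_node("/out/result.h5", owner="alice", quality="validated")
      g.add_edge("/out/result.h5", "produced_by", "job-42")
      g.add_edge("job-42", "produced_by", "/in/raw.dat")
      print(g.provenance("/out/result.h5"))   # ['job-42', '/in/raw.dat']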

  17. Mitogenome metadata: current trends and proposed standards.

    PubMed

    Strohm, Jeff H T; Gwiazdowski, Rodger A; Hanner, Robert

    2016-09-01

    Mitogenome metadata are descriptive terms about the sequence and its specimen description that allow both to be digitally discoverable and interoperable. Here, we review a sampling of mitogenome metadata published in the journal Mitochondrial DNA between 2005 and 2014. Specifically, we have focused on a subset of metadata fields that are available for GenBank records and specified by the Genomics Standards Consortium (GSC) and other biodiversity metadata standards, and we assessed their presence across three main categories: collection, biological, and taxonomic information. To do this, we reviewed 146 mitogenome manuscripts, and their associated GenBank records, and scored them for 13 metadata fields. We also explored the potential for mitogenome misidentification using their sequence diversity and taxonomic metadata on the Barcode of Life Data Systems (BOLD). For this, we focused on all Lepidoptera and Perciformes mitogenomes included in the review, along with additional mitogenome sequence data mined from GenBank. Overall, we found that none of the 146 mitogenome projects provided all the metadata we looked for, and only 17 projects provided at least one category of metadata across the three main categories. Comparisons using mtDNA sequences from BOLD suggest that some mitogenomes may be misidentified. Lastly, we appreciate the research potential of mitogenomes announced through this journal, and we conclude with a suggestion of 13 metadata fields, available on GenBank, that, if provided in a mitogenome's GenBank record, would increase its research value.

  18. Metadata Sets for e-Government Resources: The Extended e-Government Metadata Schema (eGMS+)

    NASA Astrophysics Data System (ADS)

    Charalabidis, Yannis; Lampathaki, Fenareti; Askounis, Dimitris

    At the dawn of the Semantic Web era, metadata appear as a key enabler that assists management of e-Government resources related to the provision of personalized, efficient and proactive services oriented towards real citizens' needs. Different authorities typically use different terms to describe their resources and publish them in various e-Government registries; these registries may enhance the access to and delivery of governmental knowledge, but they also need to communicate seamlessly at a national and pan-European level, and so the need for a unified e-Government metadata standard emerges. This paper presents the creation of an ontology-based extended metadata set for e-Government resources that embraces services, documents, XML Schemas, code lists, public bodies and information systems. Such a metadata set formalizes the exchange of information between portals and registries and assists service transformation and simplification efforts, while it can be further taken into consideration when applying Web 2.0 techniques in e-Government.

  19. Watershed Boundary Dataset for Mississippi

    USGS Publications Warehouse

    Wilson, K. Van; Clair, Michael G.; Turnipseed, D. Phil; Rebich, Richard A.

    2009-01-01

    The U.S. Geological Survey, in cooperation with the Mississippi Department of Environmental Quality, U.S. Department of Agriculture-Natural Resources Conservation Service, Mississippi Department of Transportation, U.S. Department of Agriculture-Forest Service, and the Mississippi Automated Resource Information System, developed a 1:24,000-scale Watershed Boundary Dataset for Mississippi including watershed and subwatershed boundaries, codes, names, and areas. The Watershed Boundary Dataset for Mississippi provides a standard geographical framework for water-resources and selected land-resources planning. The original 8-digit subbasins (Hydrologic Unit Codes) were further subdivided into 10-digit watersheds (62.5 to 391 square miles (mi2)) and 12-digit subwatersheds (15.6 to 62.5 mi2); the exceptions are the Delta part of Mississippi and the Mississippi River inside levees, which were subdivided into 10-digit watersheds only. Also, large water bodies in the Mississippi Sound along the coast were not delineated as small as a typical 12-digit subwatershed. All of the data (including watershed and subwatershed boundaries, subdivision codes and names, and drainage-area data) are stored in a Geographic Information System database, which is available at http://ms.water.usgs.gov/. This map shows information on drainage and hydrography in the form of U.S. Geological Survey hydrologic unit boundaries for water-resource 2-digit regions, 4-digit subregions, 6-digit basins (formerly called accounting units), 8-digit subbasins (formerly called cataloging units), 10-digit watersheds, and 12-digit subwatersheds in Mississippi. A description of the project study area, methods used in the development of watershed and subwatershed boundaries for Mississippi, and results are presented in Wilson and others (2008). The data presented in this map and by Wilson and others (2008) supersede the data presented for Mississippi by Seaber and others (1987) and U.S. Geological Survey (1977).
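
    The nesting of hydrologic unit codes described above (2-digit region through 12-digit subwatershed) can be illustrated with a few lines of Python; the code below uses a hypothetical Lower Mississippi subwatershed code.

    ```python
    # Small sketch of the hierarchical hydrologic unit code (HUC) structure described
    # above: each longer code nests inside its shorter prefixes.

    HUC_LEVELS = [
        (2, "region"), (4, "subregion"), (6, "basin"),
        (8, "subbasin"), (10, "watershed"), (12, "subwatershed"),
    ]

    def huc_hierarchy(huc12):
        """Split a 12-digit hydrologic unit code into its parent units."""
        if len(huc12) != 12 or not huc12.isdigit():
            raise ValueError("expected a 12-digit hydrologic unit code")
        return {name: huc12[:length] for length, name in HUC_LEVELS}

    # Hypothetical subwatershed code within the Lower Mississippi region (08).
    print(huc_hierarchy("080302071102"))
    ```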

  20. MPEG-7: standard metadata for multimedia content

    NASA Astrophysics Data System (ADS)

    Chang, Wo

    2005-08-01

    The eXtensible Markup Language (XML) metadata technology of describing media contents has emerged as a dominant mode of making media searchable for both human and machine consumption. To realize this premise, many online Web applications are pushing this concept to its fullest potential. However, a good metadata model does require a robust standardization effort so that the metadata content and its structure can reach maximum usage across various applications. An effective media content description technology should also use standard metadata structures, especially when dealing with varied multimedia content. A new metadata technology called MPEG-7 content description has emerged from the ISO MPEG standards body with the charter of defining standard metadata to describe audiovisual content. This paper will give an overview of MPEG-7 technology and the impact it can bring to the next generation of multimedia indexing and retrieval applications.

  1. Towards Data Value-Level Metadata for Clinical Studies.

    PubMed

    Zozus, Meredith Nahm; Bonner, Joseph

    2017-01-01

    While several standards for metadata describing clinical studies exist, comprehensive metadata to support traceability of data from clinical studies has not been articulated. We examine uses of metadata in clinical studies. We examine and enumerate seven sources of data value-level metadata in clinical studies inclusive of research designs across the spectrum of the National Institutes of Health definition of clinical research. The sources of metadata inform categorization in terms of metadata describing the origin of a data value, the definition of a data value, and operations to which the data value was subjected. The latter is further categorized into information about changes to a data value, movement of a data value, retrieval of a data value, and data quality checks, constraints or assessments to which the data value was subjected. The implications of tracking and managing data value-level metadata are explored.
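
    To make the categorization above concrete, the sketch below models a single data value with its origin, definition, and the operations applied to it. The field names and example values are illustrative assumptions, not a schema proposed by the authors.

    ```python
    # Hedged sketch of data value-level metadata along the categories enumerated
    # above: origin, definition, and operations applied to the value. Field names
    # are illustrative assumptions, not a published standard.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Operation:
        kind: str        # "change", "movement", "retrieval", or "quality_check"
        detail: str
        timestamp: str

    @dataclass
    class DataValueMetadata:
        value: str
        origin: str                      # e.g., device, direct entry, derived
        definition: str                  # e.g., reference to a data element definition
        operations: List[Operation] = field(default_factory=list)

    dv = DataValueMetadata(
        value="142", origin="automated blood-pressure device",
        definition="systolic blood pressure (mmHg); hypothetical definition reference")
    dv.operations.append(Operation("quality_check", "range check 60-250 passed",
                                   "2017-01-05T10:32"))
    print(dv)
    ```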

  2. A metadata template for ocean acidification data

    NASA Astrophysics Data System (ADS)

    Jiang, L.

    2014-12-01

    Metadata is structured information that describes, explains, and locates an information resource (e.g., data). It is often coarsely described as data about data, and documents information such as what was measured, by whom, when, where, how it was sampled and analyzed, and with what instruments. Metadata is essential to ensuring the survivability and accessibility of the data into the future. With the rapid expansion of biological response ocean acidification (OA) studies, the lack of a common metadata template to document this type of data has become a significant gap in ocean acidification data management efforts. In this paper, we present a metadata template that can be applied to a broad spectrum of OA studies, including those studying the biological responses of organisms to ocean acidification. The "variable metadata section", which includes the variable name, observation type, whether the variable is a manipulation condition or response variable, and the biological subject on which the variable is studied, forms the core of this metadata template. Additional metadata elements, such as principal investigators, temporal and spatial coverage, sampling platforms, and data citation, are essential components that complete the template. We explain the structure of the template, and define many metadata elements that may be unfamiliar to researchers. For that reason, this paper can serve as a user's manual for the template.
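
    The following sketch renders the core of such a template as a plain data structure: a variable metadata section plus a few dataset-level elements. The keys are paraphrased from the description above and may differ from the official template's element names.

    ```python
    # Minimal sketch of the "variable metadata section" plus a few dataset-level
    # elements described above. Keys are paraphrased; values are invented.
    import json

    variable_metadata = [
        {"variable_name": "pCO2", "observation_type": "manipulation condition",
         "is_manipulation": True, "biological_subject": None},
        {"variable_name": "calcification rate", "observation_type": "response variable",
         "is_manipulation": False, "biological_subject": "Crassostrea gigas (Pacific oyster)"},
    ]

    dataset_metadata = {
        "principal_investigators": ["(investigator name)"],
        "temporal_coverage": {"start": "2013-05-01", "end": "2013-08-31"},
        "spatial_coverage": {"west": -125.1, "east": -124.8, "south": 44.5, "north": 44.7},
        "platform": "laboratory CO2 perturbation system",
        "data_citation": "(citation text)",
        "variables": variable_metadata,
    }

    print(json.dumps(dataset_metadata, indent=2))
    ```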

  3. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    USGS Publications Warehouse

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcGIS Desktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple graphical user interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and cost by facilitating the development of metadata compliant with the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcGIS Desktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).

  4. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    Mainly, ISO19115 has been used to describe metadata for datasets and services. Furthermore, the ISO19115 standard (as well as the new draft ISO19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity structured in hierarchical levels, both for aggregated resources such as series and datasets, and for more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory, it is possible to apply a complete metadata structure at every hierarchical level, from the whole series down to individual feature attributes, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each metadata and quality item at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multiband imagery from a multi-satellite mission, and in a complex vector topographic map that includes several feature types separated into layers (e.g., administrative boundaries, contour lines, building polygons, road lines). Moreover, because maps are traditionally split into tiles to handle detailed scales, or because of satellite acquisition characteristics, each of the previous thematic layers (e.g., 1:5000 roads for a country) or bands (a Landsat-5 TM cover of the Earth) is divided into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits from or overrides the general case (G.1.3). Annex H of this standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but only to link particular values to the general values from which they inherit. Conceptually the metadata
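
    The inheritance principle described above (lower levels record only exceptions and otherwise inherit from their parent) can be sketched in a few lines; the example below illustrates the rule only, it is not an ISO 19115 implementation, and all metadata values are invented.

    ```python
    # Sketch of the inheritance rule described above (ISO 19115 Annex H style):
    # lower hierarchy levels record only exceptions and otherwise inherit from
    # their parent. Illustration of the principle, not an ISO implementation.

    class MetadataNode:
        def __init__(self, level, parent=None, **exceptions):
            self.level = level              # e.g. "series", "dataset", "tile"
            self.parent = parent
            self.exceptions = exceptions    # only values that differ from the parent

        def resolve(self, element):
            """Return the value for a metadata element, walking up the hierarchy."""
            node = self
            while node is not None:
                if element in node.exceptions:
                    return node.exceptions[element]
                node = node.parent
            raise KeyError(element)

    series  = MetadataNode("series", lineage="Landsat-5 TM, processed 2011",
                           spatial_resolution="30 m")
    dataset = MetadataNode("dataset", parent=series, cloud_cover="12 %")
    tile    = MetadataNode("tile", parent=dataset)   # no exceptions: inherits everything

    print(tile.resolve("spatial_resolution"))   # inherited from the series level
    print(tile.resolve("cloud_cover"))          # inherited from the dataset level
    ```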

  5. Incorporating ISO Metadata Using HDF Product Designer

    NASA Technical Reports Server (NTRS)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to come up with ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach that uses ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.
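
    One simple way to attach an ISO-style XML fragment to an HDF5 product is shown below using h5py; this is only an illustration of the general idea and does not reproduce how HDF Product Designer structures ISO 19115 objects.

    ```python
    # Minimal sketch of attaching an ISO-style XML fragment to an HDF5 file as a
    # string attribute using h5py. One possible encoding among several; not the
    # HDF Product Designer's actual layout. File and dataset names are invented.
    import h5py

    iso_fragment = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd">
      <!-- abbreviated, illustrative ISO 19115 content -->
    </gmd:MD_Metadata>"""

    with h5py.File("example_product.h5", "w") as f:
        grp = f.create_group("metadata")
        grp.attrs["iso19115"] = iso_fragment          # stored as a variable-length string
        f.create_dataset("sst", data=[[280.1, 280.4], [279.9, 280.2]])

    with h5py.File("example_product.h5", "r") as f:
        print(f["metadata"].attrs["iso19115"][:40], "...")
    ```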

  6. Handling Metadata in a Neurophysiology Laboratory

    PubMed Central

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G.; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows, and a lack of knowledge of how to make use of supporting software tools often make such detailed documentation too burdensome for researchers. For this reason, the collected metadata are often incomplete, incomprehensible to outsiders, or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397

  7. Handling Metadata in a Neurophysiology Laboratory.

    PubMed

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows, and a lack of knowledge of how to make use of supporting software tools often make such detailed documentation too burdensome for researchers. For this reason, the collected metadata are often incomplete, incomprehensible to outsiders, or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.

  8. Soybean Genetics

    USDA-ARS?s Scientific Manuscript database

    Soybean genetics is a broad area encompassing all aspects, such as qualitative genetics, molecular genetics, etc. The objective of this book chapter was to include information that could be used for soybean improvement, and to summarize the current status of soybean genomics. Soybean germplasm is ...

  9. Evaluating the privacy properties of telephone metadata.

    PubMed

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C

    2016-05-17

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.

  10. Evaluating the privacy properties of telephone metadata

    PubMed Central

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  11. Seeking the Path to Metadata Nirvana

    NASA Astrophysics Data System (ADS)

    Graybeal, J.

    2008-12-01

    Scientists have always found reusing other scientists' data challenging. Computers did not fundamentally change the problem, but enabled more and larger instances of it. In fact, by removing human mediation and time delays from the data sharing process, computers emphasize the contextual information that must be exchanged in order to exchange and reuse data. This requirement for contextual information has two faces: "interoperability" when talking about systems, and "the metadata problem" when talking about data. As much as any single organization, the Marine Metadata Interoperability (MMI) project has been tagged with the mission "Solve the metadata problem." Of course, if that goal is achieved, then sustained, interoperable data systems for interdisciplinary observing networks can be easily built; pesky metadata differences, like which protocol to use for data exchange, or what the data actually measures, will be a thing of the past. Alas, as you might imagine, there will always be complexities and incompatibilities that are not addressed, and data systems that are not interoperable, even within a science discipline. So should we throw up our hands and surrender to the inevitable? Not at all. Rather, we try to minimize metadata problems as much as we can. On this front we continue to make progress, despite natural forces that pull in the other direction. Computer systems let us work with more complexity, build community knowledge and collaborations, and preserve and publish our progress and (dis-)agreements. Funding organizations, science communities, and technologists see the importance of interoperable systems and metadata, and direct resources toward them. With the new approaches and resources, projects like IPY and MMI can simultaneously define, display, and promote effective strategies for sustainable, interoperable data systems. This presentation will outline the role metadata plays in durable interoperable data systems, for better or worse. It will describe times

  12. Making Interoperability Easier with the NASA Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because it can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO-19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model, and the ISO mappings. This process allows for continual improvement and evolution to meet the community's needs.

  13. Metadata and Service at the GFZ ISDC Portal

    NASA Astrophysics Data System (ADS)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million individual data files with a total volume of approximately 12 TB. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data, as well as geomagnetic and atmospheric data for exploration. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive usage of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF) Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on when the scientific project began, some data files are described by extended DIF Version 6 metadata documents and the others are specified by child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of usually one, sometimes more than one, data file plus one extended DIF metadata file. Because NASA's DIF metadata standard has been developed in order to specify a collection of data only, the extension of the DIF standard consists of new and specific attributes, which are necessary for

  14. The PDS4 Metadata Management System

    NASA Astrophysics Data System (ADS)

    Raugh, A. C.; Hughes, J. S.

    2018-04-01

    We present the key features of the Planetary Data System (PDS) PDS4 Information Model as an extendable metadata management system for planetary metadata related to data structure, analysis/interpretation, and provenance.

  15. Transgenic soybeans and soybean protein analysis: an overview.

    PubMed

    Natarajan, Savithiry; Luthria, Devanand; Bae, Hanhong; Lakshman, Dilip; Mitra, Amitava

    2013-12-04

    To meet the increasing global demand for soybeans for food and feed consumption, new high-yield varieties with improved quality traits are needed. To ensure the safety of the crop, it is important to determine the variation in seed proteins along with unintended changes that may occur in the crop as a result of various stress stimuli, breeding, and genetic modification. Understanding the variation of seed proteins in wild and cultivated soybean cultivars is useful for determining unintended protein expression in new varieties of soybeans. Proteomic technology is useful for analyzing protein variation due to various stimuli. This short review discusses transgenic soybeans, different soybean proteins, and the approaches used for protein analysis. The characterization of soybean protein will be useful for researchers, nutrition professionals, and regulatory agencies dealing with soy-derived food products.

  16. Characterization of Soybean WRKY Gene Family and Identification of Soybean WRKY Genes that Promote Resistance to Soybean Cyst Nematode.

    PubMed

    Yang, Yan; Zhou, Yuan; Chi, Yingjun; Fan, Baofang; Chen, Zhixiang

    2017-12-19

    WRKY proteins are a superfamily of plant transcription factors with important roles in plants. WRKY proteins have been extensively analyzed in plant species including Arabidopsis and rice. Here we report characterization of the soybean WRKY gene family and functional analysis of these genes in resistance to soybean cyst nematode (SCN), the most important soybean pathogen. Through a search of the soybean genome, we identified 174 genes encoding WRKY proteins that can be classified into seven groups as established in other plants. WRKY variants, including a WRKY-related protein unique to legumes, have also been identified. Expression analysis reveals both diverse expression patterns in different soybean tissues and preferential expression of specific WRKY groups in certain tissues. Furthermore, a large number of soybean WRKY genes were responsive to salicylic acid. To identify soybean WRKY genes that promote soybean resistance to SCN, we first screened soybean WRKY genes for enhanced SCN resistance when over-expressed in transgenic soybean hairy roots. To confirm the results, we transformed five WRKY genes into an SCN-susceptible soybean cultivar and generated transgenic soybean lines. Transgenic soybean lines overexpressing three WRKY transgenes displayed increased resistance to SCN. Thus, WRKY genes could be explored to develop new soybean cultivars with enhanced resistance to SCN.

  17. Assessing Metadata Quality of a Federally Sponsored Health Data Repository.

    PubMed

    Marc, David T; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information for finding and retrieving data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency of applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative, a widely accepted metadata standard, to the metadata. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality.
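
    An automated completeness check of the general kind described above might look like the sketch below, which scores catalog records against a handful of Dublin Core-style elements. The element list, records, and scoring rule are illustrative, not the study's actual assessment.

    ```python
    # Hedged sketch of an automated metadata completeness check over a small,
    # invented catalog. Element names are Dublin Core-style illustrations only.

    DUBLIN_CORE_SUBSET = ["title", "description", "publisher", "modified", "license", "keyword"]

    def completeness(record):
        """Fraction of the checked elements that are present and non-empty."""
        filled = sum(1 for el in DUBLIN_CORE_SUBSET if record.get(el))
        return filled / len(DUBLIN_CORE_SUBSET)

    catalog = [
        {"title": "Hospital Compare", "publisher": "CMS", "modified": "2014-06-01",
         "keyword": ["hospitals", "quality"]},
        {"title": "County Health Indicators", "description": "Annual indicators",
         "publisher": "CDC", "modified": "2013-02-11", "license": "public domain",
         "keyword": ["health"]},
    ]

    for rec in catalog:
        print(rec["title"], round(completeness(rec), 2))
    ```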

  18. Assessing Metadata Quality of a Federally Sponsored Health Data Repository

    PubMed Central

    Marc, David T.; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information for finding and retrieving data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency of applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative, a widely accepted metadata standard, to the metadata. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality. PMID:28269883

  19. Protein profile of mature soybean seeds and prepared soybean milk.

    PubMed

    Capriotti, Anna Laura; Caruso, Giuseppe; Cavaliere, Chiara; Samperi, Roberto; Stampachiacchiere, Serena; Zenezini Chiozzi, Riccardo; Laganà, Aldo

    2014-10-08

    The soybean (Glycine max (L.) Merrill) is economically the most important bean in the world, providing a wide range of vegetable proteins. Soybean milk is a colloidal solution obtained as water extract from swelled and ground soybean seeds. Soybean proteins represent about 35-40% on a dry weight basis and they are receiving increasing attention with respect to their health effects. However, the soybean is a well-recognized allergenic food, and therefore, it is urgent to define its protein components responsible for the allergenicity in order to develop hypoallergenic soybean products for sensitive people. The main aim of this work was the characterization of seed and milk soybean proteome and their comparison in terms of protein content and specific proteins. Using a shotgun proteomics approach, 243 nonredundant proteins were identified in mature soybean seeds.

  20. Evolving Metadata in NASA Earth Science Data Systems

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other Earth science data. On a daily basis, the EOSDIS ingests, processes, archives, and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3500 data products across various science disciplines. EOSDIS currently comprises 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle, from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as curators of the data products, data centers employ metadata for preservation, access, and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  1. The XML Metadata Editor of GFZ Data Services

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich machine-actionable metadata. To increase interoperability, metadata should include shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the Earth sciences (ISO19115, DIF, DataCite), while remaining usable by and understandable to scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not asked to provide information that can be generated automatically (e.g., the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the language of scientists, e.g., the 'creators' of the DataCite schema are called 'authors'. To assist with filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes. Currently, we are extending the metadata editor

  2. A Metadata Action Language

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Dan (Technical Monitor)

    2001-01-01

    The data management problem comprises data processing and data tracking. Data processing is the creation of new data based on existing data sources. Data tracking consists of storing metadata descriptions of available data. This paper addresses the data management problem by casting it as an AI planning problem. Actions are data-processing commands, plans are dataflow programs, and goals are metadata descriptions of desired data products. Data manipulation is simply plan generation and execution, and a key component of data tracking is inferring the effects of an observed plan. We introduce a new action language for data management domains, called ADILM. We discuss the connection between data processing and information integration and show how a language for the latter must be modified to support the former. The paper also discusses information gathering within a data-processing framework, and shows how ADILM metadata expressions are a generalization of Local Completeness.
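
    A toy rendering of this planning view is sketched below: states are sets of metadata facts, actions are data-processing commands with preconditions and effects, and the goal is a metadata description of the desired product. The action names are invented and the search is a plain breadth-first search, not ADILM's planner.

    ```python
    # Toy illustration (not ADILM itself) of casting data processing as planning.
    from collections import deque

    # action name -> (precondition facts, effect facts)
    ACTIONS = {
        "calibrate": ({"raw_image"}, {"calibrated_image"}),
        "georeference": ({"calibrated_image"}, {"georeferenced_image"}),
        "mosaic": ({"georeferenced_image"}, {"regional_mosaic"}),
    }

    def plan(initial, goal):
        """Breadth-first search for a command sequence whose effects cover the goal."""
        frontier = deque([(frozenset(initial), [])])
        seen = {frozenset(initial)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, (pre, eff) in ACTIONS.items():
                if pre <= state:
                    nxt = frozenset(state | eff)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    print(plan({"raw_image"}, {"regional_mosaic"}))   # ['calibrate', 'georeference', 'mosaic']
    ```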

  3. Improving Metadata Compliance for Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models contain various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules to the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be applied to remote records via OPeNDAP calls. Originally a command-line tool, it has been extended to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
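
    A stripped-down version of such an attribute check, using the netCDF4 library rather than the tool described above, might look like the following; the attribute subset is an illustrative selection of ACDD recommendations and the file is created only to have something to score.

    ```python
    # Minimal sketch (not the compliance tool described above) of checking a netCDF
    # file's global attributes against a few ACDD-recommended attributes.
    from netCDF4 import Dataset

    ACDD_CHECKED = ["title", "summary", "keywords", "creator_name",
                    "time_coverage_start", "time_coverage_end", "license"]

    def acdd_score(path):
        """Return (present, checked) counts of ACDD-recommended global attributes."""
        with Dataset(path) as ds:
            present = [a for a in ACDD_CHECKED if a in ds.ncattrs()]
        return len(present), len(ACDD_CHECKED)

    # Build a tiny file to score (in practice the granule already exists).
    with Dataset("granule.nc", "w") as ds:
        ds.title = "Example SST granule"
        ds.summary = "Synthetic file used to illustrate an ACDD attribute check."
        ds.keywords = "sea surface temperature"

    print("ACDD attributes present: %d of %d" % acdd_score("granule.nc"))
    ```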

  4. Assessing Public Metabolomics Metadata, Towards Improving Quality.

    PubMed

    Ferreira, João D; Inácio, Bruno; Salek, Reza M; Couto, Francisco M

    2017-12-13

    Public resources need to be appropriately annotated with metadata in order to make them discoverable, reproducible and traceable, further enabling them to be interoperable or integrated with other datasets. While data-sharing policies exist to promote the annotation process by data owners, these guidelines are still largely ignored. In this manuscript, we analyse automatic measures of metadata quality, and suggest their application as a means of encouraging data owners to increase the metadata quality of their resources and submissions, thereby contributing to higher quality data, improved data sharing, and the overall accountability of scientific publications. We analyse these metadata quality measures in the context of a real-world repository of metabolomics data (i.e. MetaboLights), including a manual validation of the measures, and an analysis of their evolution over time. Our findings suggest that the proposed measures can be used to mimic a manual assessment of metadata quality.

  5. MISSISSIPPI EMBAYMENT AQUIFER SYSTEM IN MISSISSIPPI: GEOHYDROLOGIC DATA COMPILATION FOR FLOW MODEL SIMULATION.

    USGS Publications Warehouse

    Arthur, J.K.; Taylor, R.E.

    1986-01-01

    As part of the Gulf Coast Regional Aquifer System Analysis (GC RASA) study, data from 184 geophysical well logs were used to define the geohydrologic framework of the Mississippi embayment aquifer system in Mississippi for flow model simulation. Five major aquifers of Eocene and Paleocene age were defined within this aquifer system in Mississippi. A computer data storage system was established to assimilate the information obtained from the geophysical logs. Computer programs were developed to manipulate the data to construct geologic sections and structure maps. Data from the storage system will be input to a five-layer, three-dimensional, finite-difference digital computer model that is used to simulate the flow dynamics in the five major aquifers of the Mississippi embayment aquifer system.

  6. Declining groundwater level caused by irrigation to row crops in the Lower Mississippi River Basin, Current Situation and Trends

    NASA Astrophysics Data System (ADS)

    Feng, G.; Gao, F.; Ouyang, Y.

    2017-12-01

    The Mississippi River is North America's largest river, draining one of the largest watersheds in the world. It flows over 3,700 km through America's heartland to the Gulf of Mexico. Over 3 million hectares in the Lower Mississippi River Basin are irrigated cropland, and 90 percent of those lands currently rely on the groundwater supply. The primary crops grown in this region are soybean, corn, cotton, and rice. Increased water withdrawals for irrigating those crops and stagnant recharge jeopardize the long-term availability of the aquifer and place irrigation agriculture in the region on an unsustainable path. The objectives of this study were to: 1) analyze the current groundwater level in the Lower Mississippi River Basin based on the water table depth observed by the Yazoo Mississippi Delta Joint Water Management District from 2000 to 2016; and 2) determine trends of change in groundwater level under conventional and groundwater-saving irrigation management practices (ET- or soil-moisture-based full irrigation scheduling using all groundwater or different percentages of groundwater and surface water). The coupled SWAT and MODFLOW model was applied to investigate the trends. Observed results showed that the groundwater level declined from 33 to 26 m, an annual decrease of about 0.4 m, over the past 17 years. Simulated results revealed that groundwater storage decreased by 26 cm/month due to irrigation during the crop season. Promisingly, groundwater storage increased by 23 cm/month, sometimes even 60 cm/month, in the off-growing season because of recharge from rainfall. Our results suggest that alternative ET- or soil-moisture-based groundwater-saving irrigation scheduling with conjunctive use of surface water is a sustainable practice for irrigated agriculture in the Lower Mississippi River Basin.

  7. The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness.

    PubMed

    Liolios, Konstantinos; Schriml, Lynn; Hirschman, Lynette; Pagani, Ioanna; Nosrat, Bahador; Sterk, Peter; White, Owen; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; Kyrpides, Nikos C; Field, Dawn

    2012-07-30

    Variability in the extent of the descriptions of data ('metadata') held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering that supports downstream analysis. Pivotally, such descriptions should spur improvements. Here, we introduce such a measure, the 'Metadata Coverage Index' (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example, to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the 'Minimum Information about a Genome Sequence' (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address the further application of MCI scores: to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices and, in the future, standards compliance into a quantitative and objective framework.
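
    Because the MCI is defined directly as the percentage of available fields actually filled, it is straightforward to sketch; the fields and records below are invented for illustration.

    ```python
    # Direct sketch of the Metadata Coverage Index as defined above, averaged over
    # a small invented collection of records.

    def mci(record, available_fields):
        filled = sum(1 for f in available_fields if record.get(f) not in (None, "", []))
        return 100.0 * filled / len(available_fields)

    FIELDS = ["project_name", "sequencing_method", "isolation_source",
              "geographic_location", "collection_date", "depth"]

    records = [
        {"project_name": "Soil metagenome A", "sequencing_method": "Illumina",
         "geographic_location": "USA: Mississippi", "collection_date": "2011-08-02"},
        {"project_name": "Marine isolate B", "sequencing_method": "454"},
    ]

    scores = [mci(r, FIELDS) for r in records]
    print(["%.1f%%" % s for s in scores], "database mean: %.1f%%" % (sum(scores) / len(scores)))
    ```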

  8. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy-to-use tools that evaluate metadata quality against community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows and can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO19115, FGDC, EML) and can be used to assess aspects of quality beyond what is internal to the metadata, such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally
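
    The "checks as code" idea can be sketched as a small registry of plain Python check functions run against a metadata record, as below; the individual checks are illustrative and do not correspond to the project's actual suite.

    ```python
    # Sketch of expressing metadata-quality checks as code: each check is a plain
    # Python function registered in a suite. Check names and rules are invented.

    CHECKS = []

    def check(fn):
        """Register a metadata-quality check function."""
        CHECKS.append(fn)
        return fn

    @check
    def has_title(md):
        return bool(md.get("title")), "title is present"

    @check
    def abstract_is_informative(md):
        return len(md.get("abstract", "")) >= 100, "abstract has at least 100 characters"

    @check
    def has_sampling_protocol(md):
        return bool(md.get("methods", {}).get("sampling")), "sampling protocol documented"

    def run_suite(metadata):
        return [(desc, passed) for passed, desc in (c(metadata) for c in CHECKS)]

    example = {"title": "Soil moisture transects", "abstract": "Short.", "methods": {}}
    for desc, passed in run_suite(example):
        print("PASS" if passed else "FAIL", "-", desc)
    ```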

  9. Evaluation of soybean breeding lines for resistance to Phomopsis seed decay: Results of 2014, 2015, and 2016 field trials in Stoneville, Mississippi

    USDA-ARS?s Scientific Manuscript database

    Soybean [Glycine max (L.) Merr.] is one of the most important crops in the world. Phomopsis seed decay (PSD) is a soybean seed disease that causes poor seed quality. This disease is caused primarily by a fungal pathogen, Phomopsis longicolla (syn. Diaporthe longicolla). Planting PSD-resistant soybea...

  10. Collection Metadata Solutions for Digital Library Applications

    NASA Technical Reports Server (NTRS)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  11. Achieving interoperability for metadata registries using comparative object modeling.

    PubMed

    Park, Yu Rang; Kim, Ju Han

    2010-01-01

    Achieving data interoperability between organizations relies upon agreed meaning and representation (metadata) of data. For managing and registering metadata, many organizations have built metadata registries (MDRs) in various domains based on the international standard MDR framework, ISO/IEC 11179. Following this trend, two public MDRs in the biomedical domain have been created, the United States Health Information Knowledgebase (USHIK) and the cancer Data Standards Registry and Repository (caDSR), by the U.S. Department of Health & Human Services and the National Cancer Institute (NCI), respectively. Most MDRs are implemented with indiscriminate extensions to satisfy organization-specific needs and to work around semantic and structural limitations of ISO/IEC 11179. As a result, it is difficult to achieve interoperability among multiple MDRs. In this paper, we propose an integrated metadata object model for achieving interoperability among multiple MDRs. To evaluate this model, we developed an XML Schema Definition (XSD)-based metadata exchange format. We created an XSD-based metadata exporter, supporting both the integrated metadata object model and organization-specific MDR formats.

  12. NASA Space Day in Mississippi - Senate

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Astronaut Michael Foale (center) and Stennis Space Center officials met with Mississippi Lt. Gov. Phil Bryant (at rear podium) and Gulf Coast delegation members in Mississippi Senate chambers during NASA Space Day in Mississippi activities at the Capitol on January 30.

  13. NASA Space Day in Mississippi - Senate

    NASA Image and Video Library

    2008-01-30

    Astronaut Michael Foale (center) and Stennis Space Center officials met with Mississippi Lt. Gov. Phil Bryant (at rear podium) and Gulf Coast delegation members in Mississippi Senate chambers during NASA Space Day in Mississippi activities at the Capitol on January 30.

  14. Brady's Geothermal Field Nodal Seismometers Metadata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesley Parker

    Metadata for the nodal seismometer array deployed at the POROTOMO's Natural Laboratory in Brady Hot Spring, Nevada during the March 2016 testing. Metadata includes location and timing for each instrument as well as file lists of data to be uploaded in a separate submission.

  15. Metadata to Support Data Warehouse Evolution

    NASA Astrophysics Data System (ADS)

    Solodovnikova, Darja

    The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss a significant part of the framework, the metadata repository, which stores information about the data warehouse, its logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.

  16. Streamlining geospatial metadata in the Semantic Web

    NASA Astrophysics Data System (ADS)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad-hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process; nevertheless, they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans and duplicated across different systems, seldom consistently. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
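
    Phase iii) of the workflow can be illustrated with rdflib, lifting a minimal record into RDF and linking it to creator and concept URIs; the identifiers and vocabulary choices below are illustrative assumptions, not the infrastructure's actual mappings.

    ```python
    # Hedged sketch of lifting a simple metadata record into RDF with rdflib.
    # The dataset URI, ORCID, and concept URI are placeholders for illustration.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    DCAT = Namespace("http://www.w3.org/ns/dcat#")

    g = Graph()
    dataset = URIRef("https://example.org/dataset/lake-temperature-2015")

    g.add((dataset, RDF.type, DCAT.Dataset))
    g.add((dataset, DCTERMS.title, Literal("Lake surface temperature, 2015", lang="en")))
    g.add((dataset, DCTERMS.creator, URIRef("https://orcid.org/0000-0000-0000-0000")))
    g.add((dataset, DCAT.keyword, Literal("limnology")))
    g.add((dataset, DCTERMS.subject,
           URIRef("https://example.org/vocab/concept/lake-temperature")))  # hypothetical concept URI

    print(g.serialize(format="turtle"))
    ```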

  17. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    NASA Astrophysics Data System (ADS)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

    This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  18. Making Interoperability Easier with NASA's Metadata Management Tool (MMT)

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Reese, Mark; Pilone, Dan; Baynes, Katie

    2016-01-01

    While the ISO-19115 collection level metadata format meets many users' needs for interoperable metadata, it can be cumbersome to create it correctly. Through the MMT's simple UI experience, metadata curators can create and edit collections which are compliant with ISO-19115 without full knowledge of the NASA Best Practices implementation of ISO-19115 format. Users are guided through the metadata creation process through a forms-based editor, complete with field information, validation hints and picklists. Once a record is completed, users can download the metadata in any of the supported formats with just 2 clicks.

  19. Metadata, Identifiers, and Physical Samples

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Lenhardt, W. C.; Hills, D. J.; Jenkyns, R.; Stroker, K. J.; Todd, N. S.; Dassie, E. P.; Bowring, J. F.

    2016-12-01

    Physical samples are integral to much of the research conducted by geoscientists. The samples used in this research are often obtained at significant cost and represent an important investment for future research. However, making information about samples - whether considered data or metadata - available for researchers to enable discovery is difficult: a number of key elements related to samples are difficult to characterize in common ways, such as classification, location, sample type, sampling method, repository information, subsample distribution, and instrumentation, because these differ from one domain to the next. Unifying these elements or developing metadata crosswalks is needed. The iSamples (Internet of Samples) NSF-funded Research Coordination Network (RCN) is investigating ways to develop these types of interoperability and crosswalks. Within the iSamples RCN, one of its working groups, WG1, has focused on the metadata related to physical samples. This includes identifying existing metadata standards and systems, and how they might interoperate with the International Geo Sample Number (IGSN) schema (schema.igsn.org) in order to help inform leading practices for metadata. For example, we are examining lifecycle metadata beyond the IGSN `birth certificate.' As a first step, this working group is developing a list of relevant standards and comparing their various attributes. In addition, the working group is looking toward technical solutions to facilitate developing a linked set of registries to build the web of samples. Finally, the group is also developing a comparison of sample identifiers and locators. This paper will provide an overview and comparison of the standards identified thus far, as well as an update on the technical solutions examined for integration. We will discuss how various sample identifiers might work in complementary fashion with the IGSN to more completely describe samples, facilitate retrieval of contextual information, and

  20. Mississippi's industrial gulf ports

    DOT National Transportation Integrated Search

    2001-09-01

    As a State, Mississippi ranked 20th in waterborne traffic in 1999, moving over 46 million short tons of commodities. Mississippi's port activities generate $38 million in state payroll taxes and $21 million in state sales taxes. Port and servicing in...

  1. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats, and the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over using consumer image file formats for metadata as it includes all the patient, study, and technical metadata necessary to use images clinically. Consumer image file formats, by contrast, include only technical metadata and must be used in conjunction with another actor, for example an electronic medical record, to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
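
    As a concrete illustration of the point about patient and study context travelling inside the file, the sketch below reads a few header fields from a DICOM object with the pydicom library. The file name is a placeholder and attribute presence depends on the dataset; this is an illustrative sketch, not part of the position paper.

        # Sketch: read patient/study metadata carried inside a DICOM file (pydicom).
        # "dermoscopy.dcm" is a placeholder path; attribute presence varies by dataset.
        import pydicom

        ds = pydicom.dcmread("dermoscopy.dcm")
        print("Patient ID :", ds.get("PatientID", "<missing>"))
        print("Study date :", ds.get("StudyDate", "<missing>"))
        print("Modality   :", ds.get("Modality", "<missing>"))

        # A consumer JPEG carries none of these fields; an external actor such as
        # an electronic medical record would have to supply patient and study context.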

  2. Prediction of Solar Eruptions Using Filament Metadata

    NASA Astrophysics Data System (ADS)

    Aggarwal, Ashna; Schanche, Nicole; Reeves, Katharine K.; Kempton, Dustin; Angryk, Rafal

    2018-05-01

    We perform a statistical analysis of erupting and non-erupting solar filaments to determine the properties related to the eruption potential. In order to perform this study, we correlate filament eruptions documented in the Heliophysics Event Knowledgebase (HEK) with HEK filaments that have been grouped together using a spatiotemporal tracking algorithm. The HEK provides metadata about each filament instance, including values for length, area, tilt, and chirality. We add additional metadata properties such as the distance from the nearest active region and the magnetic field decay index. We compare trends in the metadata from erupting and non-erupting filament tracks to discover which properties present signs of an eruption. We find that a change in filament length over time is the most important factor in discriminating between erupting and non-erupting filament tracks, with erupting tracks being more likely to have decreasing length. We attempt to find an ensemble of predictive filament metadata using a Random Forest Classifier approach, but find the probability of correctly predicting an eruption with the current metadata is only slightly better than chance.
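
    The classifier mentioned at the end of the abstract can be sketched with scikit-learn. The feature columns below (length change, area, tilt, decay index) are hypothetical placeholders standing in for the HEK-derived metadata, not the authors' actual feature set or data.

        # Sketch: a Random Forest over per-track filament metadata (toy, synthetic data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Columns: [length_change, area, tilt, decay_index] -- placeholder values.
        X = rng.normal(size=(500, 4))
        y = (X[:, 0] < -0.2).astype(int)   # toy label: shrinking filaments "erupt"

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))
        print("feature importances:", clf.feature_importances_)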

  3. eXtended MetaData Registry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2006-10-25

    The purpose of the eXtended MetaData Registry (XMDR) prototype is to demonstrate the feasibility and utility of constructing an extended metadata registry, i.e., one which encompasses richer classification support, facilities for including terminologies, and better support for formal specification of semantics. The prototype registry will also serve as a reference implementation for the revised versions of ISO 11179, Parts 2 and 3 to help guide production implementations.

  4. Making metadata usable in a multi-national research setting.

    PubMed

    Ellul, Claire; Foord, Joanna; Mooney, John

    2013-11-01

    SECOA (Solutions for Environmental Contrasts in Coastal Areas) is a multi-national research project examining the effects of human mobility on urban settlements in fragile coastal environments. This paper describes the setting up of a SECOA metadata repository for non-specialist researchers such as environmental scientists and tourism experts. Conflicting usability requirements of two groups - metadata creators and metadata users - are identified along with associated limitations of current metadata standards. A description is given of a configurable metadata system designed to grow as the project evolves. This work is of relevance for similar projects such as INSPIRE. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Forests of Mississippi, 2014

    Treesearch

    S.N. Oswalt

    2015-01-01

    This resource update provides an overview of forest resources in Mississippi based on an inventory conducted by the Forest Inventory and Analysis (FIA) Program at the Southern Research Station of the Forest Service, U.S. Department of Agriculture in cooperation with the Mississippi Forestry Commission.

  6. Burning Mississippi: Letters Home, Hollywood History.

    ERIC Educational Resources Information Center

    Yarrow, Michael

    1989-01-01

    Recollects summer 1964, when hundreds of civil rights workers went to Mississippi to aid Black voter registration. Points out that the movie "Mississippi Burning" ignores the courageous struggle of Mississippi Blacks and, instead, presents a disempowering version of history focusing on violent White males (the Klan and the FBI). (SV)

  7. Nutritional value of raw soybeans, extruded soybeans, roasted soybeans and tallow as fat sources in early lactating dairy cows.

    PubMed

    Amanlou, H; Maheri-Sis, N; Bassiri, S; Mirza-Aghazadeh, A; Salamatdust, R; Moosavi, A; Karimi, V

    2012-01-01

    Thirty multiparous Holstein cows (29.8 ± 4.01 days in milk; 671.6 ± 31.47 kg of body weight) were used in a completely randomized design to compare the nutritional value of four fat sources, including tallow, raw soybeans, extruded soybeans, and roasted soybeans, over 8 weeks. Experimental diets were a control containing 27.4% alfalfa silage, 22.5% corn silage, and 50.1% concentrate, and four diets with either tallow, raw soybean, extruded soybean, or roasted soybean added to provide 1.93% supplemental fat. Dry matter intakes were similar among treatments, while cows fed fat diets had significantly (P<0.05) higher NEL intakes than the control with no fat. Supplemental fat, whether from tallow or full-fat soybeans, increased milk production (1.89-2.45 kg/d; P<0.01) and FCM production (1.05-2.79; P<0.01). Milk fat yield and milk fat percentage were significantly (P<0.01 and P<0.05, respectively) higher for cows fed fat-supplemented diets than for the control. Among the fat-supplemented diets, roasted soybean produced the highest milk fat yield and extruded soybean the lowest. There was no significant effect of supplemental fat on milk protein or lactose content or yield. Feed efficiency of fat-supplemented diets was significantly (P<0.01) higher than that of the control. Body weight, body weight change, and BCS (body condition score) of cows, as well as energy balance and energy efficiency, were similar between treatments. In conclusion, while there was no significant effect of fat source on the production response of cows, fat from heat-treated soybeans helps to minimize the level of imported RUP (rumen undegradable protein) sources such as fish meal, in comparison with tallow and raw soybeans. In the current study, there was no statistically significant difference between the nutritional values of oil from extruded soybeans and roasted soybeans.

  8. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

    In the following narrative, metadata required to locate a file on a tape or collection of tapes will be referred to as file-level metadata. This paper describes the rationale for and the history of the effort to define a standard for this metadata.

  9. A standard for measuring metadata quality in spectral libraries

    NASA Astrophysics Data System (ADS)

    Rasaiah, B.; Jones, S. D.; Bellman, C.

    2013-12-01

    There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high-quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. Explored are quality parameters that meet the unique requirements of in situ spectroscopy datasets across many campaigns. Examined are the challenges of ensuring that data creators, owners, and users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.
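
    One of the quality parameters discussed, metadata completeness, lends itself to a simple score: the fraction of required fields that are actually populated. The sketch below is a generic illustration with made-up field names; it is not the proposed field-spectroscopy standard itself.

        # Sketch: score metadata completeness as filled required fields / required fields.
        REQUIRED = ["instrument", "calibration_date", "target", "illumination", "operator"]

        def completeness(record: dict) -> float:
            filled = sum(1 for f in REQUIRED if record.get(f) not in (None, ""))
            return filled / len(REQUIRED)

        record = {"instrument": "ASD FieldSpec", "target": "soybean canopy", "operator": ""}
        print(f"completeness = {completeness(record):.2f}")  # 2 of 5 fields -> 0.40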

  10. The role of metadata in managing large environmental science datasets. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melton, R.B.; DeVaney, D.M.; French, J. C.

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  11. Mississippi Research Catalog, '99.

    ERIC Educational Resources Information Center

    Mississippi State Board of Trustees of State Institutions of Higher Learning, Jackson.

    This document, mandated by the University Research Center Act of 1988, presents financial balance sheets listing receipts and disbursements of research funds for research activities being conducted at the eight state-supported universities in Mississippi: Alcorn State University; Delta State University; Jackson State University; Mississippi State…

  12. Lack of transgene and glyphosate effects on yield, and mineral and amino acid content of glyphosate-resistant soybean.

    PubMed

    Duke, Stephen O; Rimando, Agnes M; Reddy, Krishna N; Cizdziel, James V; Bellaloui, Nacer; Shaw, David R; Williams, Martin M; Maul, Jude E

    2018-05-01

    There has been controversy as to whether the glyphosate resistance gene and/or glyphosate applied to glyphosate-resistant (GR) soybean affect the content of cationic minerals (especially Mg, Mn and Fe), yield and amino acid content of GR soybean. A two-year field study (2013 and 2014) examined these questions at sites in Mississippi, USA. There were no effects of glyphosate, the GR transgene or field crop history (a field with no history of glyphosate use versus one with a long history of glyphosate use) on grain yield. Furthermore, these factors had no consistent effects on measured mineral (Al, As, Ba, Cd, Ca, Co, Cr, Cs, Cu, Fe, Ga, K, Li, Mg, Mn, Ni, Pb, Rb, Se, Sr, Tl, U, V, Zn) content of leaves or harvested seed. Effects on minerals were small and inconsistent across years, treatments, and minerals, and appeared to be random false positives. No notable effects on free or protein amino acids of the seed were measured, although glyphosate and its degradation product, aminomethylphosphonic acid (AMPA), were found in the seed in concentrations consistent with previous studies. Neither glyphosate nor the GR transgene affects the content of the minerals measured in leaves and seed, harvested seed amino acid composition, or yield of GR soybean. Furthermore, soils with a legacy of GR crops have no effects on these parameters in soybean. © 2017 Society of Chemical Industry.

  13. The Upper Mississippi River System—Topobathy

    USGS Publications Warehouse

    Stone, Jayme M.; Hanson, Jenny L.; Sattler, Stephanie R.

    2017-03-23

    The Upper Mississippi River System (UMRS), the navigable part of the Upper Mississippi and Illinois Rivers, is a diverse ecosystem that contains river channels, tributaries, shallow-water wetlands, backwater lakes, and flood-plain forests. Approximately 10,000 years of geologic and hydrographic history exist within the UMRS. The dynamic ecosystems of the Upper Mississippi River Basin and its tributaries maintain crucial wildlife and fish habitats and depend on the adjacent flood plains and the water-level fluctuations of the Mississippi River. Separate data for flood-plain elevation (lidar) and riverbed elevation (bathymetry) were collected on the UMRS by the U.S. Army Corps of Engineers’ (USACE) Upper Mississippi River Restoration (UMRR) Program. Using the two elevation datasets, the U.S. Geological Survey (USGS) Upper Midwest Environmental Sciences Center (UMESC) developed a systemic topobathy dataset.

  14. Efficient processing of MPEG-21 metadata in the binary domain

    NASA Astrophysics Data System (ADS)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities and plenty of commercial and open source tools for processing and transforming are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such kind of metadata which are encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.
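
    The push-parser idea, transforming metadata as parse events stream by instead of materializing a full document first, can be illustrated with the standard library's iterparse. The sketch below works on plain-text XML rather than BiM and simply renames one element on the fly; it is an analogy for the event-driven approach, not the authors' bit-pattern implementation.

        # Sketch: stream XML events and apply a tiny "processing instruction"
        # (rename <Title> to <name>) without building the whole output tree up front.
        import io
        import xml.etree.ElementTree as ET

        src = io.BytesIO(b"<Collection><Title>Soybean yield 2014</Title></Collection>")

        out = []
        for event, elem in ET.iterparse(src, events=("start", "end")):
            tag = "name" if elem.tag == "Title" else elem.tag
            if event == "start":
                out.append(f"<{tag}>")
            else:
                if elem.text and elem.text.strip():
                    out.append(elem.text.strip())
                out.append(f"</{tag}>")

        print("".join(out))  # <Collection><name>Soybean yield 2014</name></Collection>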

  15. Content Metadata Standards for Marine Science: A Case Study

    USGS Publications Warehouse

    Riall, Rebecca L.; Marincioni, Fausto; Lightsom, Frances L.

    2004-01-01

    The U.S. Geological Survey developed a content metadata standard to meet the demands of organizing electronic resources in the marine sciences for a broad, heterogeneous audience. These metadata standards are used by the Marine Realms Information Bank project, a Web-based public distributed library of marine science from academic institutions and government agencies. The development and deployment of this metadata standard serve as a model, complete with lessons about mistakes, for the creation of similarly specialized metadata standards for digital libraries.

  16. The effect of the 2011 flood on agricultural chemical and sediment movement in the lower Mississippi River Basin

    NASA Astrophysics Data System (ADS)

    Welch, H.; Coupe, R.; Aulenbach, B.

    2012-04-01

    Extreme hydrologic events, such as floods, can overwhelm a surface water system's ability to process chemicals and can move large amounts of material downstream to larger surface water bodies. The Mississippi River is the third largest river in the world behind the Amazon in South America and the Congo in Africa. The Mississippi-Atchafalaya River basin produces much of the country's corn, soybeans, rice, cotton, pigs, and chickens. This is large-scale, modern-day agriculture with large inputs of nutrients to increase yields and large applied amounts of crop protection chemicals, such as pesticides. The basin drains approximately 41% of the conterminous United States and is the largest contributor of nutrients to the Gulf of Mexico each spring. The amount of water and nutrients discharged from the Mississippi River has been related to the size of the low dissolved oxygen area that forms off the coast of Louisiana and Texas each summer. From March through April 2011, the upper Mississippi River basin received more than five times the normal precipitation, which, combined with snowmelt from the Missouri River basin, created a historic flood event that lasted from April through July. The U.S. Geological Survey, as part of the National Stream Quality Accounting Network (NASQAN), collected samples from six sites located in the lower Mississippi-Atchafalaya River basin, as well as samples from the three flow-diversion structures or floodways (the Birds Point-New Madrid in Missouri and the Morganza and Bonnet Carré in Louisiana), from April through July. Samples were analyzed for nutrients, pesticides, suspended sediments, and particle size; results were used to determine the water quality of the river during the 2011 flood. Monthly loads for nitrate, phosphorus, pesticides (atrazine, glyphosate, fluometuron, and metolachlor), and sediment were calculated to quantify the movement of agricultural chemicals and sediment into the Gulf of Mexico. Nutrient loads were

  17. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow, and their comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
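
    A minimal OAI-PMH harvest is just an HTTP GET with a verb and a metadataPrefix, followed by parsing the Dublin Core payload. The sketch below is a generic illustration; the endpoint URL is a placeholder, and a production harvester like the one described above would also handle resumption tokens, deletions, and incremental (from/until) requests.

        # Sketch: harvest Dublin Core records from an OAI-PMH endpoint (placeholder URL).
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        BASE = "https://example.org/oai"   # hypothetical data provider
        NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
              "dc": "http://purl.org/dc/elements/1.1/"}

        params = urllib.parse.urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
        with urllib.request.urlopen(f"{BASE}?{params}") as resp:
            tree = ET.parse(resp)

        for rec in tree.iterfind(".//oai:record", NS):
            title = rec.findtext(".//dc:title", default="(untitled)", namespaces=NS)
            ident = rec.findtext(".//dc:identifier", default="", namespaces=NS)
            print(title, "-", ident)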

  18. Spectral Detection of Soybean Aphid (Hemiptera: Aphididae) and Confounding Insecticide Effects in Soybean

    NASA Astrophysics Data System (ADS)

    Alves, Tavvs Micael

    Soybean aphid, Aphis glycines (Hemiptera: Aphididae), is the primary insect pest of soybean in the northcentral United States. Soybean aphid may cause stunted plants, leaf discoloration, and plant death, and may decrease soybean yield by 40%. Sampling plans have been developed to support soybean aphid management. However, growers' perceptions of the time involved in direct insect counts have contributed to lower adoption of traditional pest scouting methods and may be associated with the use of prophylactic insecticide applications in soybean. Remote sensing of plant spectral (light-derived) responses to soybean aphid feeding is a promising alternative for estimating injury without direct insect counts and, thus, increasing adoption and efficiency of scouting programs. This research explored the use of remote sensing of soybean reflectance for detection of soybean aphids and showed that foliar insecticides may have implications for subsequent use of soybean spectral reflectance for pest detection. (Abstract shortened by ProQuest.)

  19. openPDS: protecting the privacy of metadata through SafeAnswers.

    PubMed

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
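
    The SafeAnswers idea, returning a low-dimensional answer computed against the metadata rather than the metadata itself, can be sketched in a few lines. The example below is a toy illustration of the concept only; it is not the openPDS API.

        # Sketch: answer a service's question locally and share only a small aggregate,
        # not the raw personal metadata (toy illustration of the SafeAnswers concept).
        call_log = [  # raw metadata stays inside the personal data store
            {"number": "+1-555-0101", "minutes": 4},
            {"number": "+1-555-0102", "minutes": 12},
            {"number": "+1-555-0101", "minutes": 7},
        ]

        def answer_total_minutes(log):
            """The only value that leaves the store is this scalar answer."""
            return sum(entry["minutes"] for entry in log)

        print(answer_total_minutes(call_log))  # 23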

  20. openPDS: Protecting the Privacy of Metadata through SafeAnswers

    PubMed Central

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S.; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320

  1. Aspergillus oryzae GB-107 fermentation improves nutritional quality of food soybeans and feed soybean meals.

    PubMed

    Hong, Kee-Jong; Lee, Chan-Ho; Kim, Sung Woo

    2004-01-01

    This study evaluated the effect of fermentation on the nutritional quality of food-grade soybeans and feed-grade soybean meals. Soybeans and soybean meals were fermented by Aspergillus oryzae GB-107 in a bed-packed solid fermentor for 48 hours. After fermentation, their nutrient contents as well as trypsin inhibitor were measured and compared with those of raw soybeans and soybean meals. Proteins were extracted from fermented and non-fermented soybeans and soybean meals, and the peptide characteristics were evaluated after electrophoresis. Fermented soybeans and fermented soybean meals contained 10% more (P < .05) crude protein than raw soybeans and soybean meals. The essential amino acid profile was unchanged after fermentation. Fermentation eliminated (P < .05) most of the trypsin inhibitor from both soybeans and soybean meals. Fermentation increased the amount of small-size peptides (<20 kDa) (P < .05) compared with raw soybeans, while significantly decreasing large-size peptides (>60 kDa) (P < .05). Fermented soybean meal contained more (P < .01) small-size peptides (<20 kDa) than soybean meal. Fermented soybean meal did not contain large-size peptides (>60 kDa), whereas 22.1% of peptides in soybean meal were large-size (>60 kDa). Collectively, fermentation increased protein content, eliminated trypsin inhibitors, and reduced peptide size in soybeans and soybean meals. These effects of fermentation might make soy foods more useful in human diets as a functional food and benefit livestock as a novel feed ingredient.

  2. A Metadata Element Set for Project Documentation

    NASA Technical Reports Server (NTRS)

    Hodge, Gail; Templeton, Clay; Allen, Robert B.

    2003-01-01

    NASA Goddard Space Flight Center is a large engineering enterprise with many projects. We describe our efforts to develop standard metadata sets across project documentation which we term the "Goddard Core". We also address broader issues for project management metadata.

  3. International Metadata Standards and Enterprise Data Quality Metadata Systems

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2016-12-01

    Well-documented data quality is critical in situations where scientists and decision-makers need to combine multiple datasets from different disciplines and collection systems to address scientific questions or difficult decisions. Standardized data quality metadata could be very helpful in these situations. Many efforts at developing data quality standards falter because of the diversity of approaches to measuring and reporting data quality. The "one size fits all" paradigm does not generally work well in this situation. The ISO data quality standard (ISO 19157) takes a different approach with the goal of systematically describing how data quality is measured rather than how it should be measured. It introduces the idea of standard data quality measures that can be well documented in a measure repository and used for consistently describing how data quality is measured across an enterprise. The standard includes recommendations for properties of these measures that include unique identifiers, references, illustrations and examples. Metadata records can reference these measures using the unique identifier and reuse them along with details (and references) that describe how the measure was applied to a particular dataset. A second important feature of ISO 19157 is the inclusion of citations to existing papers or reports that describe quality of a dataset. This capability allows users to find this information in a single location, i.e. the dataset metadata, rather than searching the web or other catalogs. I will describe these and other capabilities of ISO 19157 with examples of how they are being used to describe data quality across the NASA EOS Enterprise and also compare these approaches with other standards.

  4. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
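
    The graph data model described above, with files, user-defined attributes, and typed relationships as first-class objects, can be mocked up with ordinary dictionaries. The sketch below is a conceptual illustration only; it does not reproduce QFS or the Quasar query syntax.

        # Sketch: files with attributes as nodes, typed relationships as edges,
        # and a tiny traversal standing in for a metadata query (not actual QFS/Quasar).
        files = {
            "run42.h5":  {"experiment": "soybean-flood", "kind": "raw"},
            "run42.csv": {"experiment": "soybean-flood", "kind": "derived"},
            "notes.txt": {"experiment": "soybean-flood", "kind": "doc"},
        }
        edges = [("run42.csv", "derived_from", "run42.h5"),
                 ("notes.txt", "describes", "run42.h5")]

        def related(target, relation):
            """Return files linked to `target` by `relation`."""
            return [src for src, rel, dst in edges if rel == relation and dst == target]

        print(related("run42.h5", "derived_from"))  # ['run42.csv']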

  5. The Current Status of the Soybean-Soybean Mosaic Virus (SMV) Pathosystem

    PubMed Central

    Liu, Jian-Zhong; Fang, Yuan; Pang, Hongxi

    2016-01-01

    Soybean mosaic virus (SMV) is one of the most devastating pathogens, causing huge economic losses in soybean production worldwide. Due to the duplicated genome, the clustered and highly homologous nature of R genes, and the crop's recalcitrance to transformation, soybean disease resistance studies lag largely behind those of other diploid crops. In this review, we focus on the major advances that have been made in identifying the virulence/avirulence factors of SMV and in mapping SMV resistance genes in soybean. In addition, we review the progress in dissecting SMV resistance signaling pathways in soybean, with a special focus on studies using virus-induced gene silencing. The soybean genome has been fully sequenced, and increasingly saturated sets of SNP markers have been identified. With these resources available, together with newly developed genome editing tools and a more efficient soybean transformation system, cloning SMV resistance genes, and ultimately generating cultivars with broader-spectrum resistance to SMV, are becoming more realistic than ever. PMID:27965641

  6. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Technical Reports Server (NTRS)

    Kozimore, John; Habermann, Ted; Gordon, Sean; Powers, Lindsay

    2016-01-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways.

  7. Stability of soybean aphid resistance in soybean across different temperatures

    USDA-ARS?s Scientific Manuscript database

    The soybean aphid, Aphis glycines Matsumura, is the most important insect pest posing a threat to soybean, Glycine max (L.) Merr., grain production in the United States. Soybean cultivars with resistance are currently being deployed to aid in management of the pest. Temperature has been reported to ...

  8. Enhancing SCORM Metadata for Assessment Authoring in E-Learning

    ERIC Educational Resources Information Center

    Chang, Wen-Chih; Hsu, Hui-Huang; Smith, Timothy K.; Wang, Chun-Chia

    2004-01-01

    With the rapid development of distance learning and the XML technology, metadata play an important role in e-Learning. Nowadays, many distance learning standards, such as SCORM, AICC CMI, IEEE LTSC LOM and IMS, use metadata to tag learning materials. However, most metadata models are used to define learning materials and test problems. Few…

  9. Registration of three soybean germplasm lines resistant to Phakopsora pachyrhizi (soybean rust)

    USDA-ARS?s Scientific Manuscript database

    Soybean rust, caused by Phakopsora pachyrhizi Sydow, is one of the most important foliar diseases of soybean [Glycine max (L.) Merr.]. Development of rust-resistant lines is one objective of many soybean breeding programs. Three soybean germplasm lines designated as TGx 1987-76F (Reg. No. xxx, PI 6577...

  10. Development of a Watershed Boundary Dataset for Mississippi

    USGS Publications Warehouse

    Van Wilson, K.; Clair, Michael G.; Turnipseed, D. Phil; Rebich, Richard A.

    2009-01-01

    The U.S. Geological Survey, in cooperation with the Mississippi Department of Environmental Quality, U.S. Department of Agriculture-Natural Resources Conservation Service, Mississippi Department of Transportation, U.S. Department of Agriculture-Forest Service, and the Mississippi Automated Resource Information System, developed a 1:24,000-scale Watershed Boundary Dataset for Mississippi including watershed and subwatershed boundaries, codes, names, and drainage areas. The Watershed Boundary Dataset for Mississippi provides a standard geographical framework for water-resources and selected land-resources planning. The original 8-digit subbasins (hydrologic unit codes) were further subdivided into 10-digit watersheds and 12-digit subwatersheds - the exceptions are the Lower Mississippi River Alluvial Plain (known locally as the Delta) and the Mississippi River inside levees, which were only subdivided into 10-digit watersheds. Also, large water bodies in the Mississippi Sound along the coast were not delineated as small as a typical 12-digit subwatershed. All of the data - including watershed and subwatershed boundaries, hydrologic unit codes and names, and drainage-area data - are stored in a Geographic Information System database.
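
    Because the hydrologic unit codes are hierarchical, with each 10-digit watershed and 12-digit subwatershed extending its parent 8-digit subbasin code, selecting every unit under a subbasin is a simple prefix test. The sketch below uses made-up codes purely to illustrate the coding scheme, not actual values from the Mississippi dataset.

        # Sketch: select 12-digit subwatersheds nested in an 8-digit subbasin by code prefix.
        # The codes below are illustrative, not actual Watershed Boundary Dataset values.
        subwatersheds = {
            "080302070101": "Example subwatershed A",
            "080302070102": "Example subwatershed B",
            "031800040201": "Unrelated subwatershed",
        }

        def in_subbasin(huc12: str, huc8: str) -> bool:
            return huc12.startswith(huc8)

        selected = {code: name for code, name in subwatersheds.items()
                    if in_subbasin(code, "08030207")}
        print(selected)  # the two subwatersheds under subbasin 08030207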

  11. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    NASA Astrophysics Data System (ADS)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
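
    Because the CMR exposes its holdings through a public search API, the effect of curated metadata is directly visible to clients. The sketch below issues a simple keyword query; the endpoint, parameters, and response shape reflect my understanding of the public CMR search interface and should be verified against current EOSDIS documentation before use.

        # Sketch: keyword search against the Common Metadata Repository (CMR).
        # The endpoint and response shape are assumptions to verify against CMR docs.
        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({"keyword": "soybean", "page_size": 5})
        url = f"https://cmr.earthdata.nasa.gov/search/collections.json?{params}"

        with urllib.request.urlopen(url) as resp:
            feed = json.load(resp)

        for entry in feed.get("feed", {}).get("entry", []):
            print(entry.get("short_name"), "-", (entry.get("summary") or "")[:60])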

  12. Land Use Strategies for Optimizing Carbon Sequestration within the Head of the Lower Mississippi Watershed

    NASA Astrophysics Data System (ADS)

    Weaver, L.

    2015-12-01

    The world is currently in a stage of extreme growth, characterized by increasing demands for food and increasing greenhouse gas emissions. The population is forecast to grow by 2.3 billion people by 2050, resulting in close to a 40% increase in food demand (Alexandratos and Bruinsma, 2012). This will severely increase pressure on the earth and on crop harvesting processes to incorporate carbon emissions reduction strategies. Optimal land use analysis and innovation can provide feasible solutions for these problems. A key environmental feature around which land use systems should be carefully planned and maintained is the Mississippi River, the largest watershed system in the United States. Along the head of the Lower Mississippi Watershed lie several farming communities, including Cairo, Illinois. The primary land use for the area inhabited by these communities consists of soybeans, corn, and pasture. These crops have varying carbon storage capacities, economic and social benefits, and environmental consequences. In order to maximize social, economic, and environmental benefits and sustainability, these crops were analyzed over time, spatial correlation, and planted area. When considering risks of carbon emissions, economic decline, landscape erosion, and harmful runoff, a localized switchgrass buffer remains a feasible solution. Its strengths as a native, reliable plant with high carbon sequestration and biomass harvest potential make it well suited for wider implementation at the head of the Lower Mississippi Watershed. However, there are multiple factors that must be considered before implementing broad agricultural policies and practices. Thorough analyses should be performed frequently to assess the effects of major land use change and can be used to identify the optimal applications for farmers and communities.

  13. A metadata-driven approach to data repository design.

    PubMed

    Harvey, Matthew J; McLean, Andrew; Rzepa, Henry S

    2017-01-01

    The design and use of a metadata-driven data repository for research data management is described. Metadata is collected automatically during the submission process whenever possible and is registered with DataCite in accordance with their current metadata schema, in exchange for a persistent digital object identifier. Two examples of data preview are illustrated, including the demonstration of a method for integration with commercial software that confers rich domain-specific data analytics without introducing customisation into the repository itself.
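
    DataCite registration centres on a small set of mandatory properties (identifier, creators, titles, publisher, publication year, resource type) supplied in exchange for a DOI. The dictionary below sketches such a record; the field names follow my understanding of the DataCite metadata schema and should be checked against the current schema before use, and all values are placeholders.

        # Sketch: a minimal DataCite-style metadata payload (placeholder values;
        # field names to be verified against the current DataCite schema).
        record = {
            "doi": "10.5072/example-0001",            # 10.5072 is a test prefix
            "creators": [{"name": "Doe, Jane"}],
            "titles": [{"title": "Soybean field spectra, 2014"}],
            "publisher": "Example University Data Repository",
            "publicationYear": 2017,
            "types": {"resourceTypeGeneral": "Dataset"},
        }

        mandatory = ("doi", "creators", "titles", "publisher", "publicationYear", "types")
        missing = [k for k in mandatory if not record.get(k)]
        print("missing mandatory fields:", missing or "none")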

  14. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    NASA Astrophysics Data System (ADS)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge with the increase in data volume, resulting in more metadata records that need to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and Common Metadata Repository (CMR). The UMM helps address hurdles introduced by the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue, as metadata is not always intuitive to the end user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently. CAMP is unique in that it provides science team members a UI to directly supply metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC, which could potentially benefit additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  15. Earthquakes in Mississippi and vicinity 1811-2010

    USGS Publications Warehouse

    Dart, Richard L.; Bograd, Michael B.E.

    2011-01-01

    This map summarizes two centuries of earthquake activity in Mississippi. Work on the Mississippi map was done in collaboration with the Mississippi Department of Environmental Quality, Office of Geology. The earthquake data plotted on the map are from several sources: the Mississippi Department of Environmental Quality, the Center for Earthquake Research and Information, the National Center for Earthquake Engineering Research, and the Arkansas Geological Survey. In addition to earthquake locations, other materials include seismic hazard and isoseismal maps and related text. Earthquakes are a legitimate concern in Mississippi and parts of adjacent States. Mississippi has undergone a number of felt earthquakes since 1811. At least two of these events caused property damage: a magnitude 4.7 earthquake in 1931, and a magnitude 4.3 earthquake in 1967. The map shows all historical and instrumentally located earthquakes in Mississippi and vicinity between 1811 and 2010. The largest historic earthquake in the vicinity of the State was an intensity XI event, on December 16, 1811; the first earthquake in the New Madrid sequence. This violent event and the earthquakes that followed caused considerable damage to the then sparsely settled region.

  16. Metadata Authoring with Versatility and Extensibility

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Olsen, Lola

    2004-01-01

    NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. Allowing data holders to work with their preferred format, as well as an option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.

  17. NASA Spacecraft Eyes Mississippi Flooding

    NASA Image and Video Library

    2011-05-16

    At the time the NASA Terra spacecraft acquired this image, the Mississippi River had reached a level of 53 feet (16.2 meters), 3 feet (1 meter) above major flood stage. Flood water had already inundated parts of Vicksburg, Mississippi.

  18. Interpreting the ASTM 'content standard for digital geospatial metadata'

    USGS Publications Warehouse

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  19. Utilizing soybean milk to culture soybean pathogens

    USDA-ARS?s Scientific Manuscript database

    Liquid and semi-solid culture media are used to maintain and proliferate bacteria, fungi, and Oomycetes for research in microbiology and plant pathology. In this study, a comparison was made between soybean milk medium, also referred to as soymilk, and media traditionally used for culturing soybean ...

  20. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook

    2014-05-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom or RSS 'casts', as Linked Open Data (LOD), or as ISO Metadata records. A major effort on the part of the NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URL's, and domain and variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into "semantic mashups" in the browser. It is straightforward to reformat sets of XML metadata, including ISO, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin core, georss, etc. All scientific metadata should be part of the LOD world. In addition, we developed an "instant" drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the "word cloud" of values for each facet. The display section for each dataset includes the important metadata fields, a full description of the dataset, potentially some related URL's, and a "search" button that points to an Open
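
    Publishing a dataset record as linked open data amounts to minting a URI for the dataset and attaching well-known predicates to it. The sketch below uses the rdflib library with Dublin Core terms; the dataset URI and values are hypothetical and do not correspond to actual ECHO or GCMD records.

        # Sketch: express one dataset description as RDF triples (hypothetical URI/values).
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS, RDF

        g = Graph()
        dataset = URIRef("https://example.org/dataset/soybean-yield-2014")

        g.add((dataset, RDF.type, URIRef("http://purl.org/dc/dcmitype/Dataset")))
        g.add((dataset, DCTERMS.title, Literal("Soybean yield observations, 2014")))
        g.add((dataset, DCTERMS.creator, Literal("Example Data Center")))
        g.add((dataset, DCTERMS.subject, Literal("agriculture")))

        print(g.serialize(format="turtle"))  # Turtle rendering, ready for a triple store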

  1. Omics Metadata Management Software (OMMS).

    PubMed

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.

  2. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554

  3. SOYBEAN.APHID.SD.2017

    USDA-ARS?s Scientific Manuscript database

    Infestations by soybean aphid (SA) can reduce soybean yield. Thus, SA-resistant soybean may be useful in reducing infestations and limiting yield loss. Expression of resistance was characterized among 746 soybean accessions in 56 growth chamber tests at the North Central Agricultural Research Labo...

  4. Occurrence in Korea of three major soybean viruses, Soybean mosaic virus (SMV), Soybean yellow mottle mosaic virus (SYMMV), and Soybean yellow common mosaic virus (SYCMV), revealed by a nationwide survey of soybean fields

    USDA-ARS?s Scientific Manuscript database

    Soybean yellow mottle mosaic virus (SYMMV) and Soybean yellow common mosaic virus (SYCMV) were recently isolated in Korea, and how these two viruses are distributed in Korea has not been reported. In 2012, we performed a nationwide survey of subsistence soybean farms in Korea. Leaves that appeared ...

  5. Applications of the LBA-ECO Metadata Warehouse

    NASA Astrophysics Data System (ADS)

    Wilcox, L.; Morrell, A.; Griffith, P. C.

    2006-05-01

    The LBA-ECO Project Office has developed a system to harvest and warehouse metadata resulting from the Large-Scale Biosphere Atmosphere Experiment in Amazonia. The harvested metadata is used to create dynamically generated reports, available at www.lbaeco.org, which facilitate access to LBA-ECO datasets. The reports are generated for specific controlled vocabulary terms (such as an investigation team or a geospatial region), and are cross-linked with one another via these terms. This approach creates a rich contextual framework enabling researchers to find datasets relevant to their research. It maximizes data discovery by association and provides a greater understanding of the scientific and social context of each dataset. For example, our website provides a profile (e.g. participants, abstract(s), study sites, and publications) for each LBA-ECO investigation. Linked from each profile is a list of associated registered dataset titles, each of which link to a dataset profile that describes the metadata in a user-friendly way. The dataset profiles are generated from the harvested metadata, and are cross-linked with associated reports via controlled vocabulary terms such as geospatial region. The region name appears on the dataset profile as a hyperlinked term. When researchers click on this link, they find a list of reports relevant to that region, including a list of dataset titles associated with that region. Each dataset title in this list is hyperlinked to its corresponding dataset profile. Moreover, each dataset profile contains hyperlinks to each associated data file at its home data repository and to publications that have used the dataset. We also use the harvested metadata in administrative applications to assist quality assurance efforts. These include processes to check for broken hyperlinks to data files, automated emails that inform our administrators when critical metadata fields are updated, dynamically generated reports of metadata records that link
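
    The link-checking chore mentioned above is easy to sketch: issue a lightweight request for each hyperlink found in the harvested metadata and report failures. The snippet below is a generic illustration with placeholder URLs; it is not the LBA-ECO Project Office's actual tooling.

        # Sketch: flag broken hyperlinks found in harvested metadata (placeholder URLs).
        import urllib.error
        import urllib.request

        urls = ["https://example.org/data/file1.csv",
                "https://example.org/data/missing.csv"]

        def is_reachable(url, timeout=10.0):
            try:
                req = urllib.request.Request(url, method="HEAD")
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.status < 400
            except (urllib.error.URLError, ValueError):
                return False

        for url in urls:
            print("OK     " if is_reachable(url) else "BROKEN ", url)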

  6. Potential Overwintering Locations of Soybean Aphid (Hemiptera: Aphididae) Colonizing Soybean in Ohio and Wisconsin.

    PubMed

    Crossley, Michael S; Hogg, David B

    2015-04-01

    Soybean aphids, Aphis glycines Matsumura, depend on long-distance, wind-aided dispersal to complete their life cycle. Despite our general understanding of soybean aphid biology, little is explicitly known about dispersal of soybean aphids between winter and summer hosts in North America. This study compared genotypic diversity of soybean aphids sampled from several overwintering locations in the Midwest and soybean fields in Ohio and Wisconsin to test the hypothesis that these overwintering locations are sources of the soybean colonists. In addition, air parcel trajectory analyses were used to demonstrate the potential for long-distance dispersal events to occur to or from these overwintering locations. Results suggest that soybean aphids from overwintering locations along the Illinois-Iowa border and northern Indiana-Ohio are potential colonists of soybean in Ohio and Wisconsin, but that Ohio is also colonized by soybean aphids from other unknown overwintering locations. Soybean aphids in Ohio and Wisconsin exhibit a small degree of population structure that is not associated with the locations of soybean fields in which they occur, but that may be related to specific overwintering environments, multiple introductions to North America, or spatial variation in aphid phenology. There may be a limited range of suitable habitat for soybean aphid overwintering, in which case management of soybean aphids may be more effective at their overwintering sites. Further research efforts should focus on discovering more overwintering locations of soybean aphid in North America, and the relative impact of short- and long-distance dispersal events on soybean aphid population dynamics.

  7. Leveraging Metadata to Create Better Web Services

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  8. Digital Initiatives and Metadata Use in Thailand

    ERIC Educational Resources Information Center

    SuKantarat, Wichada

    2008-01-01

    Purpose: This paper aims to provide information about various digital initiatives in libraries in Thailand and especially use of Dublin Core metadata in cataloguing digitized objects in academic and government digital databases. Design/methodology/approach: The author began researching metadata use in Thailand in 2003 and 2004 while on sabbatical…

  9. Automated Test Methods for XML Metadata

    DTIC Science & Technology

    2017-12-28

    Group under RCC Task TG-147. This document (Volume VI of the RCC Document 118 series) describes procedures used for evaluating XML metadata documents...including TMATS, MDL, IHAL, and DDML documents. These documents contain specifications or descriptions of artifacts and systems of importance to...the collection and management of telemetry data. The methods defined in this report provide a means of evaluating the suitability of such a metadata

  10. A Window to the World: Lessons Learned from NASA's Collaborative Metadata Curation Effort

    NASA Astrophysics Data System (ADS)

    Bugbee, K.; Dixon, V.; Baynes, K.; Shum, D.; le Roux, J.; Ramachandran, R.

    2017-12-01

    Well-written descriptive metadata adds value to data by making the data easier to discover and by providing context on appropriate use, thereby increasing data use. While many data centers acknowledge the importance of correct, consistent and complete metadata, allocating resources to curate existing metadata is often difficult. To lower resource costs, many data centers seek guidance on best practices for curating metadata but struggle to identify those recommendations. In order to assist data centers in curating metadata and to also develop best practices for creating and maintaining metadata, NASA has formed a collaborative effort to improve the Earth Observing System Data and Information System (EOSDIS) metadata in the Common Metadata Repository (CMR). This effort has taken significant steps in building consensus around metadata curation best practices. However, this effort has also revealed gaps in EOSDIS enterprise policies and procedures within the core metadata curation task. This presentation will explore the mechanisms used for building consensus on metadata curation, the gaps identified in policies and procedures, the lessons learned from collaborating with both the data centers and metadata curation teams, and the proposed next steps for the future.

  11. Syntactic and Semantic Validation without a Metadata Management System

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Gokey, Christopher D.; Kendig, David; Olsen, Lola; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The ability to maintain quality information is essential to securing confidence in any system for which the information serves as a data source. NASA's Global Change Master Directory (GCMD), an online Earth science data locator, holds over 9000 data set descriptions and is in a constant state of flux as metadata are created and updated on a daily basis. In such a system, maintaining the consistency and integrity of these metadata is crucial. The GCMD has developed a metadata management system utilizing XML, controlled vocabulary, and Java technologies to ensure the metadata not only adhere to valid syntax, but also exhibit proper semantics.
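
    A minimal sketch of the two levels of checking described here, not the GCMD implementation: a syntactic pass that verifies required fields are present, and a semantic pass that verifies values come from a controlled vocabulary. The field names and vocabulary below are illustrative.

        CONTROLLED_KEYWORDS = {"AGRICULTURE", "ATMOSPHERE", "LAND SURFACE"}  # illustrative vocabulary
        REQUIRED_FIELDS = ("Entry_ID", "Entry_Title", "Science_Keywords")

        def validate(record):
            errors = []
            # Syntactic check: every required field must be present and non-empty.
            for field in REQUIRED_FIELDS:
                if not record.get(field):
                    errors.append("missing required field: " + field)
            # Semantic check: keyword values must come from the controlled vocabulary.
            for keyword in record.get("Science_Keywords", []):
                if keyword.upper() not in CONTROLLED_KEYWORDS:
                    errors.append("uncontrolled keyword: " + keyword)
            return errors

        print(validate({"Entry_ID": "X1", "Entry_Title": "", "Science_Keywords": ["Agriculture", "Soils"]}))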

  12. International Metadata Initiatives: Lessons in Bibliographic Control.

    ERIC Educational Resources Information Center

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, major points of difference from…

  13. A 2014 nationwide survey of the distribution of Soybean mosaic virus (SMV), Soybean yellow mottle mosaic virus (SYMMV) and Soybean yellow common mosaic virus (SYCMV) major viruses in South Korean soybean fields, and changes

    USDA-ARS?s Scientific Manuscript database

    In 2014 symptomatic soybean samples were collected throughout Korea, and were tested for the most important soybean viruses found in Korea, namely Soybean mosaic virus (SMV), Soybean yellow common mosaic virus (SYCMV), and Soybean yellow mottle mosaic virus (SYMMV). SYMMV was most commonly detected,...

  14. A Generic Metadata Editor Supporting System Using Drupal CMS

    NASA Astrophysics Data System (ADS)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Markup Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM, or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without reinventing the wheel. We have developed a generic, flexible software system to do just that: to facilitate support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on a metadata record before and after submission. It is trivial to store the metadata record as an actual XML file
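
    The two-step flow described above (persisted form definition to rendered entry form) is sketched below in Python rather than PHP/Drupal, purely to illustrate the idea; the XML definition and field names are invented.

        import xml.etree.ElementTree as ET

        FORM_DEFINITION_XML = """
        <form name="minimal_profile">
          <field name="title" label="Dataset Title" type="textfield" required="true"/>
          <field name="abstract" label="Abstract" type="textarea" required="false"/>
        </form>
        """

        def definition_to_form_array(xml_text):
            """Parse the stored XML definition into a nested dict, analogous to the
            PHP array handed to Drupal's Form API for rendering."""
            root = ET.fromstring(xml_text)
            form = {}
            for field in root.findall("field"):
                form[field.get("name")] = {
                    "#type": field.get("type"),
                    "#title": field.get("label"),
                    "#required": field.get("required") == "true",
                }
            return form

        print(definition_to_form_array(FORM_DEFINITION_XML))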

  15. 33 CFR 80.825 - Mississippi Passes, LA.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28°54.5′ N., longitude 89°26.1′ W. (d) A line drawn from Mississippi River South Pass East Jetty Light 4 to Mississippi River South Pass West Jetty Light; thence following the general trend of the... general trend of the seaward, highwater shoreline in a southwesterly direction to Mississippi River...

  16. 33 CFR 80.825 - Mississippi Passes, LA.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28°54.5′ N., longitude 89°26.1′ W. (d) A line drawn from Mississippi River South Pass East Jetty Light 4 to Mississippi River South Pass West Jetty Light; thence following the general trend of the... general trend of the seaward, highwater shoreline in a southwesterly direction to Mississippi River...

  17. Reaction of maturity group III soybean plant introductions to Phomopsis seed decay in Arkansas Mississippi and Missouri 2009

    USDA-ARS?s Scientific Manuscript database

    Soybean Phomopsis seed decay (PSD) is the major cause of poor seed quality in the United States, especially in the mid-south region. The disease is primarily caused by Phomopsis longicolla along with other Phomopsis and Diaporthe spp. There are few management strategies for this disease, and these s...

  18. Field and laboratory evaluations of soybean lines against soybean aphid (Hemiptera: Aphididae).

    PubMed

    Hesler, Louis S; Prischmann, Deirdre A; Dashiell, Kenton E

    2012-04-01

    The soybean aphid, Aphis glycines Matsumura (Hemiptera: Aphididae), is a major pest of soybean, Glycine max (L.). Merr., that significantly reduces yield in northern production areas of North America. Insecticides are widely used to control soybean aphid outbreaks, but efforts are underway to develop host plant resistance as an effective alternative management strategy. Here, previously identified resistant lines were evaluated in laboratory tests against field-collected populations of soybean aphid and in field-plot tests over 2 yr in South Dakota. Six lines previously identified with resistance to soybean aphid--Jackson, Dowling, K1639, Cobb, Palmetto and Sennari--were resistant in this study, but relatively high aphid counts on Tie-feng 8 in field plots contrasted with its previously reported resistance. Bhart-PI 165989 showed resistance in one of two laboratory tests, but it had relatively large aphid infestations in both years of field tests. Intermediate levels of soybean aphid occurred in field plots on lines previously shown to have strong (Sugao Zairai, PI 230977, and D75-10169) or moderate resistance to soybean aphid (G93-9223, Bragg, Braxton, and Tracy-M). Sugao Zairai also failed to have a significant proportion of resistant plants in two laboratory tests against aphids field-collected in 2008, but it was resistant in laboratory tests with aphids collected in 2002, 2005, and 2006. Overall, results showed that lines with Rag (i.e., Jackson) or Rag1 gene (i.e., Dowling) had low aphid numbers, whereas lines with Rag2 (i.e., Sugao Zairai, Sennari) had mixed results. Collectively, responses of soybean aphid populations in laboratory and field tests in 2008 resembled a virulence pattern reported previously for biotype 3 soybean aphids, but virulence in soybean aphid populations was variable and dynamic over years of the study. These results, coupled with previous reports of biotypes virulent to Rag1, suggest that deployment of lines with a single aphid

  19. Metadata management and semantics in microarray repositories.

    PubMed

    Kocabaş, F; Can, T; Baykal, N

    2011-12-01

    The number of microarray and other high-throughput experiments on primary repositories keeps increasing, as do the size and complexity of the results in response to biomedical investigations. Initiatives have been started on standardization of content, object model, exchange format and ontology. However, there are backlogs and an inability to exchange data between microarray repositories, which indicate a great need for a standard format and data management. We have introduced a metadata framework that includes a metadata card and semantic nets that make experimental results visible, understandable and usable. These are encoded in syntax encoding schemes and represented in RDF (Resource Description Framework), can be integrated with other metadata cards and semantic nets, and can be exchanged, shared and queried. We demonstrated the performance and potential benefits through a case study on a selected microarray repository. We concluded that the backlogs can be reduced and that the exchange of information and the asking of knowledge discovery questions become possible with the use of this metadata framework.
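
    A minimal sketch of the RDF representation described above, assuming the rdflib package; the namespace, experiment identifier, and property names are invented for illustration.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/microarray/")  # invented namespace
        g = Graph()
        experiment = EX["experiment/0001"]                # hypothetical experiment ID

        # Three triples forming a tiny "metadata card" for the experiment.
        g.add((experiment, EX.organism, Literal("Glycine max")))
        g.add((experiment, EX.platform, Literal("soybean cDNA array")))
        g.add((experiment, EX.treatment, Literal("Aphis glycines infestation")))

        print(g.serialize(format="turtle"))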

  20. Metadata Design in the New PDS4 Standards - Something for Everybody

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data
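
    The namespace delegation described above can be illustrated with a small sketch (not PDS4 software): a core dictionary plus mission-level dictionaries, each governed locally, resolved by a namespace-qualified attribute name. The dictionary contents below are invented.

        CORE_DICTIONARY = {
            "pds": {"title": "str", "start_date_time": "datetime"},   # invented core attributes
        }
        MISSION_DICTIONARIES = {
            "mission_x": {"sol_number": "int"},                       # hypothetical mission namespace
        }

        def resolve(qualified_name):
            """Return the declared type of a namespaced attribute, e.g. 'mission_x:sol_number'."""
            namespace, attribute = qualified_name.split(":", 1)
            dictionary = CORE_DICTIONARY.get(namespace) or MISSION_DICTIONARIES.get(namespace)
            if dictionary is None or attribute not in dictionary:
                raise KeyError("attribute not defined in any registered namespace: " + qualified_name)
            return dictionary[attribute]

        print(resolve("pds:title"), resolve("mission_x:sol_number"))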

  1. Metadata squared: enhancing its usability for volunteered geographic information and the GeoWeb

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Sui, Daniel Z.; Elwood, Sarah; Goodchild, Michael F.

    2013-01-01

    The Internet has brought many changes to the way geographic information is created and shared. One aspect that has not changed is metadata. Static spatial data quality descriptions were standardized in the mid-1990s and cannot accommodate the current climate of data creation where nonexperts are using mobile phones and other location-based devices on a continuous basis to contribute data to Internet mapping platforms. The usability of standard geospatial metadata is being questioned by academics and neogeographers alike. This chapter analyzes current discussions of metadata to demonstrate how the media shift that is occurring has affected requirements for metadata. Two case studies of metadata use are presented—online sharing of environmental information through a regional spatial data infrastructure in the early 2000s, and new types of metadata that are being used today in OpenStreetMap, a map of the world created entirely by volunteers. Changes in metadata requirements are examined for usability, the ease with which metadata supports coproduction of data by communities of users, how metadata enhances findability, and how the relationship between metadata and data has changed. We argue that traditional metadata associated with spatial data infrastructures is inadequate and suggest several research avenues to make this type of metadata more interactive and effective in the GeoWeb.

  2. The Importance of Metadata in System Development and IKM

    DTIC Science & Technology

    2003-02-01

    Defence R&D Canada – Atlantic, Technical Memorandum DRDC Atlantic TM 2003-011, February 2003. … it is important for searches and providing relevant information to the client. A comparison of metadata standards was conducted with emphasis on

  3. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http

  4. Metabolonote: a wiki-based database for managing hierarchical metadata of metabolome analyses.

    PubMed

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics - technology for comprehensive detection of small molecules in an organism - lags behind the other "omics" in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called "Togo Metabolome Data" (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
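
    A minimal sketch of the tree-structured, ID-addressable metadata described above (study, sample, analytical method, data analysis); the ID scheme and field names here are invented and are not the actual TogoMD format.

        metadata_tree = {
            "SE1": {                              # study-level metadata
                "title": "Soybean leaf metabolome",
                "samples": {
                    "SE1.S01": {                  # sample-level metadata
                        "organism": "Glycine max",
                        "methods": {
                            "SE1.S01.M01": {      # analytical-method metadata
                                "instrument": "LC-MS",
                                "analyses": {
                                    "SE1.S01.M01.D01": {"software": "in-house pipeline"},
                                },
                            },
                        },
                    },
                },
            },
        }

        def lookup(tree, node_id):
            """Walk the tree by the dotted ID so each level is uniquely addressable."""
            parts = node_id.split(".")
            node = tree[parts[0]]
            levels = ["samples", "methods", "analyses"]
            for depth in range(1, len(parts)):
                node = node[levels[depth - 1]][".".join(parts[: depth + 1])]
            return node

        print(lookup(metadata_tree, "SE1.S01.M01"))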

  5. Metadata, PICS and Quality.

    ERIC Educational Resources Information Center

    Armstrong, C. J.

    1997-01-01

    Discusses PICS (Platform for Internet Content Selection), the Centre for Information Quality Management (CIQM), and metadata. Highlights include filtering networked information; the quality of information; and standardizing search engines. (LRW)

  6. The Role of Metadata Standards in EOSDIS Search and Retrieval Applications

    NASA Technical Reports Server (NTRS)

    Pfister, Robin

    1999-01-01

    Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.

  7. Metadata Creation, Management and Search System for your Scientific Data

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format, including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0, 2010.
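
    A minimal sketch, with invented records, of the kind of fielded, spatial, and temporal filtering a centralized metadata index such as Mercury supports; this is not Mercury's implementation or API.

        index = [  # invented harvested records in a centralized index
            {"title": "Delta soil carbon", "keywords": {"soil", "carbon"},
             "bbox": (-91.5, 32.0, -90.0, 34.0), "years": (2005, 2010)},
            {"title": "Boreal flux towers", "keywords": {"flux", "co2"},
             "bbox": (-110.0, 50.0, -100.0, 60.0), "years": (2001, 2004)},
        ]

        def search(keyword, bbox, year):
            """Fielded keyword search combined with a bounding-box and year filter."""
            west, south, east, north = bbox
            hits = []
            for rec in index:
                w, s, e, n = rec["bbox"]
                spatial_ok = not (e < west or w > east or n < south or s > north)  # boxes overlap
                temporal_ok = rec["years"][0] <= year <= rec["years"][1]
                if keyword in rec["keywords"] and spatial_ok and temporal_ok:
                    hits.append(rec["title"])
            return hits

        print(search("soil", (-92.0, 31.0, -89.0, 35.0), 2008))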

  8. Distributed metadata in a high performance computing environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store. The method, system, and computer program product comprise receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers, wherein the metadata are associated with a key-value; determining which of the one or more burst buffers stores the requested metadata; and, upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
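
    The routing step described in this record can be sketched as follows: determine which burst buffer owns a metadata key, then read the key-value from that buffer. Plain dictionaries stand in for the distributed key-value store; nothing below reflects the actual patented system.

        import hashlib

        burst_buffers = [{}, {}, {}]  # each dict stands in for one burst buffer's local key-value store

        def owner(key):
            """Deterministically map a metadata key to one of the burst buffers."""
            return int(hashlib.md5(key.encode()).hexdigest(), 16) % len(burst_buffers)

        def put_metadata(key, value):
            burst_buffers[owner(key)][key] = value

        def get_metadata(key):
            # Determine which burst buffer stores the requested metadata,
            # then locate the key-value in that buffer's portion of the store.
            return burst_buffers[owner(key)].get(key)

        put_metadata("block:0042", {"offset": 0, "length": 1048576, "checksum": "ab12"})
        print(get_metadata("block:0042"))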

  9. Survey data and metadata modelling using document-oriented NoSQL

    NASA Astrophysics Data System (ADS)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year are subject to metadata changes, yet they need to be stored in an integrated way so that statistical data can be obtained faster and more easily. A data warehouse (DW) can be used for this, but the change of variables in every period cannot be accommodated by a traditional DW through Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW by managing metadata with a multiversion DW (MVDW) designed on a relational model. Other research has found that non-relational models in NoSQL databases offer faster read times than relational models. We therefore propose managing metadata changes using NoSQL. This study proposes a DW model to manage change, together with algorithms to retrieve data whose metadata have changed. Evaluation of the proposed model and algorithms shows that a database with the proposed design can properly retrieve data with metadata changes. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.
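
    A minimal sketch of the approach described above, using plain Python dicts as stand-ins for NoSQL documents: each survey period carries its own metadata version, and a canonical-variable mapping lets data be retrieved even after variable names change. The variable names and mapping are invented.

        documents = [  # one document per survey period, each with its own metadata version
            {"period": 2016, "metadata": {"income": "monthly income"},
             "rows": [{"income": 2500000}]},
            {"period": 2017, "metadata": {"hh_income": "monthly household income"},
             "rows": [{"hh_income": 2700000}]},
        ]

        # Mapping from a canonical variable to the name it carried in each period.
        canonical = {"income": {2016: "income", 2017: "hh_income"}}

        def values_for(variable):
            """Collect one canonical variable across periods despite metadata changes."""
            out = {}
            for doc in documents:
                field = canonical[variable].get(doc["period"])
                if field and field in doc["metadata"]:
                    out[doc["period"]] = [row[field] for row in doc["rows"]]
            return out

        print(values_for("income"))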

  10. Development of health information search engine based on metadata and ontology.

    PubMed

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.

  11. Archive of digital Chirp subbottom profile data collected during USGS cruises 09CCT03 and 09CCT04, Mississippi and Alabama Gulf Islands, June and July 2009

    USGS Publications Warehouse

    Forde, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Wiese, Dana S.

    2011-01-01

    In June and July of 2009, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the geologic controls on island framework from Cat Island, Mississippi, to Dauphin Island, Alabama, as part of a broader USGS study on Coastal Change and Transport (CCT). The surveys were funded through the Northern Gulf of Mexico Ecosystem Change and Hazard Susceptibility Project as part of the Holocene Evolution of the Mississippi-Alabama Region Subtask (http://ngom.er.usgs.gov/task2_2/index.php). This report serves as an archive of unprocessed digital Chirp seismic profile data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Single-beam and Swath bathymetry data were also collected during these cruises and will be published as a separate archive. Gained (a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.

  12. An Interactive, Web-Based Approach to Metadata Authoring

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as subject of interest. The GCMD strives to be the preeminent data locator for worldwide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically updates when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools

  13. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    NASA Technical Reports Server (NTRS)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS's 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and core elements of each are mapped to and from a common model, the Unified Metadata Model (UMM). This work was done by the Analysis and Review of CMR (ARC) team.

  14. A model for enhancing Internet medical document retrieval with "medical core metadata".

    PubMed

    Malet, G; Munoz, F; Appleyard, R; Hersh, W

    1999-01-01

    Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
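
    A minimal sketch of the proposal above: describing a medical web document with Dublin Core-style elements plus MeSH subject headings and rendering them as meta tags for disciplined, fielded retrieval. The document and values are invented for the example.

        medical_core_record = {
            "DC.Title": "Management of type 2 diabetes in primary care",
            "DC.Creator": "Example Clinic",
            "DC.Type": "practice guideline",       # MEDLINE-like publication type
            "DC.Subject.MeSH": ["Diabetes Mellitus, Type 2", "Primary Health Care"],
        }

        def to_meta_tags(record):
            """Render the record as HTML <meta> tags that could be embedded in the document."""
            tags = []
            for name, value in record.items():
                values = value if isinstance(value, list) else [value]
                for v in values:
                    tags.append('<meta name="{}" content="{}">'.format(name, v))
            return "\n".join(tags)

        print(to_meta_tags(medical_core_record))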

  15. Metadata: Standards for Retrieving WWW Documents (and Other Digitized and Non-Digitized Resources)

    NASA Astrophysics Data System (ADS)

    Rusch-Feja, Diann

    The use of metadata for indexing digitized and non-digitized resources for resource discovery in a networked environment is being increasingly implemented all over the world. Greater precision is achieved using metadata than relying on universal search engines, and furthermore, metadata can be used as filtering mechanisms for search results. An overview of various metadata sets is given, followed by a more focused presentation of Dublin Core Metadata, including examples of sub-elements and qualifiers. Especially the use of the Dublin Core Relation element provides connections between the metadata of various related electronic resources, as well as the metadata for physical, non-digitized resources. This facilitates more comprehensive search results without losing precision and brings together different genres of information which would otherwise be only searchable in separate databases. Furthermore, the advantages of Dublin Core Metadata in comparison with library cataloging and the use of universal search engines are discussed briefly, followed by a listing of types of implementation of Dublin Core Metadata.

  16. DIRAC File Replica and Metadata Catalog

    NASA Astrophysics Data System (ADS)

    Tsaregorodtsev, A.; Poss, S.

    2012-12-01

    File replica and metadata catalogs are essential parts of any distributed data management system, and they largely determine its functionality and performance. A new File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on practical experience with the data management system of the LHCb Collaboration. It is optimized for the most common patterns of catalog usage in order to achieve maximum performance from the user perspective. The DFC supports bulk operations for replica queries and allows quick analysis of the storage usage globally and for each Storage Element separately. It supports flexible ACL rules with plug-ins for various policies that can be adopted by a particular community. The DFC allows various types of metadata associated with files and directories to be stored, and supports efficient queries for data based on complex metadata combinations. Definition of file ancestor-descendent relation chains is also possible. The DFC is implemented in the general DIRAC distributed computing framework following the standard grid security architecture. In this paper we describe the design of the DFC and its implementation details. The performance measurements are compared with other grid file catalog implementations. The experience of using the DFC in the CLIC detector project is discussed.

  17. Metadata Evaluation and Improvement: Evolving Analysis and Reporting

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Kozimor, John; Gordon, Sean

    2017-01-01

    ESIP Community members create and manage a large collection of environmental datasets that span multiple decades, the entire globe, and many parts of the solar system. Metadata are critical for discovering, accessing, using and understanding these data effectively, and ESIP community members have successfully created large collections of metadata describing these data. As part of the White House Big Earth Data Initiative (BEDI), ESDIS has developed a suite of tools for evaluating these metadata in native dialects with respect to recommendations from many organizations. We will describe those tools and demonstrate evolving techniques for sharing results with data providers.

  18. Omics Metadata Management Software v. 1 (OMMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible, extensible and easily installed and run by operators with general system administration and scripting language literacy.

  19. Forum Guide to Metadata: The Meaning behind Education Data. NFES 2009-805

    ERIC Educational Resources Information Center

    National Forum on Education Statistics, 2009

    2009-01-01

    The purpose of this guide is to empower people to more effectively use data as information. To accomplish this, the publication explains what metadata are; why metadata are critical to the development of sound education data systems; what components comprise a metadata system; what value metadata bring to data management and use; and how to…

  20. EPA Metadata Style Guide Keywords and EPA Organization Names

    EPA Pesticide Factsheets

    The following keywords and EPA organization names listed below, along with EPA’s Metadata Style Guide, are intended to provide suggestions and guidance to assist with the standardization of metadata records.

  1. 7 CFR 1220.127 - Soybean products.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Soybean products. 1220.127 Section 1220.127... CONSUMER INFORMATION Soybean Promotion and Research Order Definitions § 1220.127 Soybean products. The term soybean products means products produced in whole or in part from soybeans or soybean byproducts. ...

  2. Validation of a hairy roots system to study soybean-soybean aphid interactions

    PubMed Central

    Morriss, Stephanie C.; Studham, Matthew E.; Tylka, Gregory L.

    2017-01-01

    The soybean aphid (Aphis glycines) is one of the main insect pests of soybean (Glycine max) worldwide. Genomics approaches have provided important data on transcriptome changes, both in the insect and in the plant, in response to the plant-aphid interaction. However, the difficulties to transform soybean and to rear soybean aphid on artificial media have hindered our ability to systematically test the function of genes identified by those analyses as mediators of plant resistance to the insect. An efficient approach to produce transgenic soybean material is the production of transformed hairy roots using Agrobacterium rhizogenes; however, soybean aphids colonize leaves or stems and thus this approach has not been utilized. Here, we developed a hairy root system that allowed effective aphid feeding. We show that this system supports aphid performance similar to that observed in leaves. The use of hairy roots to study plant resistance is validated by experiments showing that roots generated from cotyledons of resistant lines carrying the Rag1 or Rag2 resistance genes are also resistant to aphid feeding, while related susceptible lines are not. Our results demonstrate that hairy roots are a good system to study soybean aphid-soybean interactions, providing a quick and effective method that could be used for functional analysis of the resistance response to this insect. PMID:28358854

  3. Detection of genetically modified soybean in crude soybean oil.

    PubMed

    Nikolić, Zorica; Vasiljević, Ivana; Zdjelar, Gordana; Ðorđević, Vuk; Ignjatov, Maja; Jovičić, Dušica; Milošević, Dragana

    2014-02-15

    In order to detect the presence and quantity of Roundup Ready (RR) soybean in crude oil extracted from soybean seed with different percentages of GMO seed, two extraction methods were used, CTAB and the DNeasy Plant Mini Kit. Amplification of the lectin gene, used to check the presence of soybean DNA, was not achieved in all CTAB extracts of DNA, while the commercial kit gave satisfactory results. Comparing actual and estimated GMO content between the two extraction methods, the root mean square deviation for the kit is 0.208 and for CTAB is 2.127, clearly demonstrating the superiority of the kit over CTAB extraction. The results of quantification showed that if the oil samples originate from soybean seed with varying percentages of RR, it is possible to monitor the GMO content at the first stage of processing crude oil.
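
    For reference, the comparison statistic reported above is the root mean square deviation (RMSD) between actual and estimated GMO content; a small worked example (with made-up percentages) is shown below.

        import math

        actual    = [0.0, 1.0, 5.0, 10.0]   # % RR soybean in the seed used to press the oil (made up)
        estimated = [0.1, 0.8, 5.4, 9.7]    # % RR soybean estimated from the oil extract (made up)

        rmsd = math.sqrt(sum((a - e) ** 2 for a, e in zip(actual, estimated)) / len(actual))
        print(round(rmsd, 3))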

  4. Evaluating and Improving Metadata for Data Use and Understanding

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2013-12-01

    The last several decades have seen an extraordinary increase in the number and breadth of environmental data available to the scientific community and the general public. These increases have focused the environmental data community on creating metadata for discovering data and on the creation and population of catalogs and portals for facilitating discovery. This focus is reflected in the fields required by commonly used metadata standards and has resulted in collections populated with metadata that meet, but don't go far beyond, minimal discovery requirements. Discovery is the first step towards addressing scientific questions using data. As more data are discovered and accessed, users need metadata that 1) automates use and integration of these data in tools and 2) facilitates understanding the data when it is compared to similar datasets or as internal variations are observed. When data discovery is the primary goal, it is important to create records for as many datasets as possible. The content of these records is controlled by minimum requirements, and evaluation is generally limited to testing for required fields and counting records. As the use and understanding needs become more important, more comprehensive evaluation tools are needed. An approach is described for evaluating existing metadata in the light of these new requirements and for improving the metadata to meet them.

  5. Soybean defense responses to the soybean aphid.

    PubMed

    Li, Yan; Zou, Jijun; Li, Min; Bilgin, Damla D; Vodkin, Lila O; Hartman, Glen L; Clough, Steven J

    2008-01-01

    Transcript profiles in aphid (Aphis glycines)-resistant (cv. Dowling) and -susceptible (cv. Williams 82) soybean (Glycine max) cultivars using soybean cDNA microarrays were investigated. Large-scale soybean cDNA microarrays representing approx. 18 000 genes or c. 30% of the soybean genome were compared at 6 and 12 h post-application of aphids. In a separate experiment utilizing clip cages, expression of three defense-related genes was examined at 6, 12, 24, 48, and 72 h in both cultivars by quantitative real-time PCR. One hundred and forty genes showed specific responses for resistance; these included genes related to cell wall, defense, DNA/RNA, secondary metabolism, signaling and other processes. When an extended time period of sampling was investigated, earlier and greater induction of three defense-related genes was observed in the resistant cultivar; however, the induction declined after 24 or 48 h in the resistant cultivar but continued to increase in the susceptible cultivar after 24 h. Aphid-challenged resistant plants showed rapid differential gene expression patterns similar to the incompatible response induced by avirulent Pseudomonas syringae. Five genes were identified as differentially expressed between the two genotypes in the absence of aphids.

  6. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service, and parses and stores the metadata in a relational database, allowing efficient data query and maintenance. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
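
    A minimal sketch, not the BIMM implementation, of the "match observations" idea: ranking stored regions of interest by the similarity of their semantic imaging observation characteristics (IOCs) to a query. The records and features below are invented.

        rois = [  # invented annotated regions of interest
            {"image": "liver_001", "iocs": {"hypodense", "well-circumscribed", "homogeneous"}},
            {"image": "liver_002", "iocs": {"hyperdense", "ill-defined", "heterogeneous"}},
            {"image": "liver_003", "iocs": {"hypodense", "well-circumscribed", "heterogeneous"}},
        ]

        def match_observations(query_iocs, top_n=2):
            """Rank stored ROIs by Jaccard similarity of their IOC sets to the query."""
            def jaccard(a, b):
                return len(a & b) / len(a | b)
            ranked = sorted(rois, key=lambda r: jaccard(query_iocs, r["iocs"]), reverse=True)
            return [(r["image"], round(jaccard(query_iocs, r["iocs"]), 2)) for r in ranked[:top_n]]

        print(match_observations({"hypodense", "well-circumscribed"}))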

  7. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an open-source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers research costs. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of open-source libraries.
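
    A minimal sketch of the kind of OpenSearch-style request such a federated interface accepts: a keyword plus spatial bounds and a temporal range. The endpoint URL and parameter names below are illustrative assumptions, not the documented NSIDC/ADE API.

        import requests

        params = {
            "q": "sea ice thickness",          # free-text keyword query
            "bbox": "-180,60,180,90",          # west,south,east,north
            "time_start": "2010-01-01",
            "time_end": "2012-12-31",
            "format": "atom",
        }
        # Hypothetical endpoint; a real deployment would publish its own OpenSearch description.
        response = requests.get("https://example.org/opensearch/datasets", params=params, timeout=30)
        print(response.status_code, response.url)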

  8. Agricultural practices altered soybean seed protein, oil, fatty acids, sugars, and minerals in the Midsouth USA.

    PubMed

    Bellaloui, Nacer; Bruns, H Arnold; Abbas, Hamed K; Mengistu, Alemu; Fisher, Daniel K; Reddy, Krishna N

    2015-01-01

    Information on the effects of management practices on soybean seed composition is scarce. Therefore, the objective of this research was to investigate the effects of planting date (PD) and seeding rate (SR) on seed composition (protein, oil, fatty acids, and sugars) and seed minerals (B, P, and Fe) in soybean grown in two row-types (RTs) in the Mississippi Delta region of the Midsouth USA. Two field experiments were conducted in 2009 and 2010 on Sharkey clay and Beulah fine sandy loam soils at Stoneville, MS, USA, under irrigated conditions. Soybean were grown in 102 cm single-rows and 25 cm twin-rows on 102 cm centers at SRs of 20, 30, 40, and 50 seeds m(-2). The results showed that in May and June plantings, protein, glucose, P, and B concentrations increased with increased SR, but at the highest SRs (40 and 50 seeds m(-2)), the concentrations remained constant or declined. Palmitic, stearic, and linoleic acid concentrations were the least responsive to SR increases. Early planting resulted in higher oil, oleic acid, sucrose, B, and P in both single and twin-rows. Late planting resulted in higher protein and linolenic acid, but lower oleic acid and oil concentrations. The changes in seed constituents could be due to changes in environmental factors (drought and temperature) and nutrient accumulation in seeds and leaves. The increase of stachyose sugar in 2010 may be due to the drier and hotter conditions in 2010 compared with 2009, suggesting a possible role of stachyose as an environmental stress compound. Our research demonstrated that PD, SR, and RT altered some seed constituents, but the level of alteration in each year depended on environmental factors such as drought and temperature. This information benefits growers and breeders considering agronomic practices to select for soybean seed nutritional quality under drought and high heat conditions.

  9. The Mississippi Years (1969-1974)

    ERIC Educational Resources Information Center

    Agras, W. Stewart

    2012-01-01

    The 4 years that Michel Hersen spent at the University of Mississippi Medical Center (1970-1974) are described in this article from the viewpoint of his place in the history of the development of behavior analysis and therapy. The Department of Psychiatry at the University of Mississippi Medical Center became a leader in enhancing the role of…

  10. 106-17 Telemetry Standards Metadata Configuration Chapter 23

    DTIC Science & Technology

    2017-07-01

    Section 23.2, Metadata Description Language (Chapter 23, July 2017). Acronyms: HTML, Hypertext Markup Language; MDL, Metadata Description Language; PCM, pulse code modulation; TMATS, Telemetry Attributes Transfer Standard; W3C, World Wide Web Consortium; XML, eXtensible Markup Language; XSD, XML schema document; … Telemetry Network Standard

  11. Residual and Systemic Efficacy of Chlorantraniliprole and Flubendiamide Against Corn Earworm (Lepidoptera: Noctuidae) in Soybean

    PubMed Central

    Adams, A.; Gore, J.; Catchot, A.; Musser, F.; Cook, D.; Krishnan, N.; Irby, T.

    2016-01-01

    Experiments were conducted in Mississippi from 2013 to 2015 to determine the systemic and residual efficacy of chlorantraniliprole and flubendiamide against corn earworm, Helicoverpa zea (Boddie), in soybean. Both insecticides were applied at V4 and R3. Ten leaves that were present at the time of application and 10 newly emerged leaves that were not present at the time of application were collected to measure residual and systemic efficacy, respectively. Ten pods were removed from each plot at R5.5. For all assays, corn earworm larvae were placed on plant material. Chlorantraniliprole appeared to provide systemic control of H. zea, but was dependent on soybean growth stage at the time of application. In the V4 experiment, chlorantraniliprole resulted in greater mortality than the control on new leaves at 7 d after treatment, but not at 14 d. In the R3 experiment, chlorantraniliprole resulted in greater than 90% mortality on new leaves at all evaluation intervals. Mortality of H. zea on new leaves was <17% for flubendiamide and was not different than the control. Both insecticides resulted in significant mortality of H. zea on leaves that were present at the time of application for at least 31 d after application. Chlorantraniliprole resulted in greater mortality than flubendiamide at 24 and 31 d. Neither insecticide resulted in mortality of H. zea feeding on reproductive structures. These results suggest that chlorantraniliprole moves to new vegetative structures but not to reproductive structures of soybean, and that flubendiamide does not move systemically. PMID:27707947

  12. Effect of new auxin herbicide formulations on control of herbicide resistant weeds and on microbial activities in the rhizosphere

    USDA-ARS?s Scientific Manuscript database

    Widespread distribution of glyphosate-resistant weeds in soybean-growing areas across Mississippi has economically affected soybean planting and follow-up crop management operations. New multiple herbicide-resistant crop (including soybean) technologies with associated formulations will soon be comm...

  13. Development of Health Information Search Engine Based on Metadata and Ontology

    PubMed Central

    Song, Tae-Min; Jin, Dal-Lae

    2014-01-01

    Objectives: The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods: A health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used by users to search the contents. Vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results: A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced better search results than existing search engines. Conclusions: A health information search engine based on metadata and ontology will provide reliable health information to both information producers and information consumers. PMID:24872907
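
    The record-plus-audience idea above can be illustrated with a small, hedged sketch (not the authors' system): a Dublin Core-like record carries an extra audience element, and a toy synonym table stands in for the SNOMED CT vocabulary mapping. All field names, terms, and values below are invented for the example.

      DC_RECORD = {
          "title": "Managing type 2 diabetes",
          "subject": ["diabetes mellitus type 2"],
          "language": "en",
          "audience": "patient",  # element added beyond the core Dublin Core set
      }

      SYNONYMS = {"t2dm": "diabetes mellitus type 2"}  # toy stand-in for a vocabulary mapping

      def matches(record, query, audience=None):
          """Return True if the query term (after synonym mapping) hits the record."""
          term = SYNONYMS.get(query.lower(), query.lower())
          subject_hit = any(term in s.lower() for s in record["subject"])
          audience_hit = audience is None or record["audience"] == audience
          return subject_hit and audience_hit

      print(matches(DC_RECORD, "T2DM", audience="patient"))  # True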

  14. FRAMES Metadata Reporting Templates for Ecohydrological Observations, version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christianson, Danielle; Varadharajan, Charuleka; Christoffersen, Brad

    FRAMES is a set of Excel metadata files and package-level descriptive metadata that are designed to facilitate and improve capture of desired metadata for ecohydrological observations. The metadata are bundled with data files into a data package and submitted to a data repository (e.g. the NGEE Tropics Data Repository) via a web form. FRAMES standardizes reporting of diverse ecohydrological and biogeochemical data for synthesis across a range of spatiotemporal scales and incorporates many best data science practices. This version of FRAMES supports observations for primarily automated measurements collected by permanently located sensors, including sap flow (tree water use), leaf surface temperature, soil water content, dendrometry (stem diameter growth increment), and solar radiation. Version 1.1 extends the controlled vocabulary and incorporates functionality to facilitate programmatic use of data and FRAMES metadata (R code available at NGEE Tropics Data Repository).

  15. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal.

    PubMed

    Baker, Ed

    2013-01-01

    Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
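
    As a rough illustration of the same idea outside Drupal (not the module's code), the sketch below reads embedded EXIF tags with Pillow and applies a user-defined mapping to hypothetical target field names; the mapping and field names are assumptions for the example.

      # Assumes the Pillow library is installed; the target field names are invented.
      from PIL import Image
      from PIL.ExifTags import TAGS

      FIELD_MAPPING = {          # hypothetical EXIF-tag -> form-field mapping
          "DateTime": "capture_date",
          "Artist": "photographer",
          "Copyright": "licence",
      }

      def extract_mapped_metadata(path, mapping=FIELD_MAPPING):
          """Read the image's EXIF block and return only the mapped fields."""
          exif = Image.open(path).getexif()
          named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
          return {target: named[source] for source, target in mapping.items() if source in named}

      # extract_mapped_metadata("photo.jpg")  # point at a real JPEG to try it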

  16. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    PubMed Central

    2013-01-01

    Abstract Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768

  17. 7 CFR 1220.614 - Soybeans.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 7 (Agriculture), Department of Agriculture, Soybean Promotion, Research, and Consumer Information; Procedures To Request a Referendum; Definitions. § 1220.614 Soybeans. Soybeans means all...

  18. 7 CFR 1220.614 - Soybeans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 7 (Agriculture), Department of Agriculture, Soybean Promotion, Research, and Consumer Information; Procedures To Request a Referendum; Definitions. § 1220.614 Soybeans. Soybeans means all...

  19. 7 CFR 1220.128 - Soybeans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 7 (Agriculture), Department of Agriculture, Soybean Promotion, Research, and Consumer Information; Soybean Promotion and Research Order; Definitions. § 1220.128 Soybeans. The term...

  20. 7 CFR 1220.128 - Soybeans.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 7 (Agriculture), Department of Agriculture, Soybean Promotion, Research, and Consumer Information; Soybean Promotion and Research Order; Definitions. § 1220.128 Soybeans. The term...

  1. Building Opportunity in Mississippi through Higher Education: A Report from the Steering Committee for the Mississippi Leadership Summit on Higher Education.

    ERIC Educational Resources Information Center

    Association of Governing Boards of Universities and Colleges, 2002

    2002-01-01

    The Steering Committee of the Mississippi Leadership Summit on Higher Education anticipates a future in which Mississippi is regarded as a state of promise and opportunity. This report discusses the priorities and initiatives of a shared framework for educational, economic, and social progress. All of Mississippi's children and their families, as…

  2. Predicting biomedical metadata in CEDAR: A study of Gene Expression Omnibus (GEO).

    PubMed

    Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2017-08-01

    A crucial and limiting factor in data reuse is the lack of accurate, structured, and complete descriptions of data, known as metadata. Towards improving the quantity and quality of metadata, we propose a novel metadata prediction framework to learn associations from existing metadata that can be used to predict metadata values. We evaluate our framework in the context of experimental metadata from the Gene Expression Omnibus (GEO). We applied four rule mining algorithms to the most common structured metadata elements (sample type, molecular type, platform, label type and organism) from over 1.3 million GEO records. We examined the quality of well-supported rules from each algorithm and visualized the dependencies among metadata elements. Finally, we evaluated the performance of the algorithms in terms of accuracy, precision, recall, and F-measure. We found that PART is the best algorithm, outperforming Apriori, Predictive Apriori, and Decision Table. All algorithms perform significantly better in predicting class values than the majority vote classifier. We found that the performance of the algorithms is related to the dimensionality of the GEO elements. The average performance of all algorithms increases as the number of unique values of these elements decreases (2,697 platforms, 537 organisms, 454 labels, 9 molecules, and 5 types). Our work suggests that experimental metadata such as that present in GEO can be accurately predicted using rule mining algorithms. Our work has implications for both prospective and retrospective augmentation of metadata quality, which are geared towards making data easier to find and reuse. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
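
    A minimal sketch of the underlying idea (not the PART/Apriori pipeline used in the study): learn the most frequent association between two metadata elements from existing records and use it to predict missing values, scoring simple accuracy on held-out records. The toy records below are invented.

      from collections import Counter, defaultdict

      train = [  # toy GEO-like records; all values are invented for illustration
          {"platform": "GPL570", "organism": "Homo sapiens"},
          {"platform": "GPL570", "organism": "Homo sapiens"},
          {"platform": "GPL1261", "organism": "Mus musculus"},
      ]
      test = [
          {"platform": "GPL570", "organism": "Homo sapiens"},
          {"platform": "GPL1261", "organism": "Mus musculus"},
      ]

      def learn_rules(records, antecedent, consequent):
          """Memorise the most frequent consequent value seen for each antecedent value."""
          counts = defaultdict(Counter)
          for r in records:
              counts[r[antecedent]][r[consequent]] += 1
          return {a: c.most_common(1)[0][0] for a, c in counts.items()}

      rules = learn_rules(train, "platform", "organism")
      correct = sum(rules.get(r["platform"]) == r["organism"] for r in test)
      print(f"accuracy = {correct / len(test):.2f}")  # 1.00 on this toy split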

  3. An Approach to Information Management for AIR7000 with Metadata and Ontologies

    DTIC Science & Technology

    2009-10-01

    ...metadata. We then propose an approach based on Semantic Technologies, including the Resource Description Framework (RDF) and Upper Ontologies, for the... Mandating specific metadata schemas can result in interoperability problems. For example, many standards within the ADO mandate the use of XML for metadata... such problems, we propose an architecture in which different metadata schemes can interoperate. By using RDF (Resource Description Framework) as a

  4. Metadata Exporter for Scientific Photography Management

    NASA Astrophysics Data System (ADS)

    Staudigel, D.; English, B.; Delaney, R.; Staudigel, H.; Koppers, A.; Hart, S.

    2005-12-01

    Photographs have become an increasingly important medium, especially with the advent of digital cameras. It has become inexpensive to take photographs and quickly post them on a website. However informative photos may be, they still need to be displayed in a convenient way, and be cataloged in such a manner that makes them easily locatable. Managing the great number of photographs that digital cameras allow and creating a format for efficient dissemination of the information related to the photos is a tedious task. Products such as Apple's iPhoto have greatly eased the task of managing photographs. However, they often have limitations. Un-customizable metadata fields and poor metadata extraction tools limit their scientific usefulness. A solution to this persistent problem is a customizable metadata exporter. On the ALIA expedition, we successfully managed the thousands of digital photos we took. We did this with iPhoto and a version of the exporter that is now available to the public under the name "CustomHTMLExport" (http://www.versiontracker.com/dyn/moreinfo/macosx/27777), currently undergoing formal beta testing. This software allows the use of customized metadata fields (including description, time, date, GPS data, etc.), which are exported along with the photo. It can also produce webpages with this data straight from iPhoto, in a much more flexible way than is already allowed. With this tool it becomes very easy to manage and distribute scientific photos.
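
    The export step can be pictured with a toy sketch (not the CustomHTMLExport plug-in): per-photo metadata dictionaries are rendered into a minimal HTML index page. The file names and metadata values are invented.

      photos = [  # invented example records
          {"file": "IMG_0001.jpg", "description": "Pillow lava outcrop", "date": "2004-08-02", "gps": "14.2S 169.1W"},
          {"file": "IMG_0002.jpg", "description": "Dredge haul on deck", "date": "2004-08-03", "gps": "14.4S 169.3W"},
      ]

      items = "\n".join(
          f"<li><img src='{p['file']}' width='200'> {p['description']} ({p['date']}, GPS {p['gps']})</li>"
          for p in photos
      )
      page = f"<html><body><h1>Expedition photos</h1><ul>\n{items}\n</ul></body></html>"

      with open("index.html", "w") as out:  # writes a gallery page in the working directory
          out.write(page)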

  5. Creating FGDC and NBII metadata with Metavist 2005.

    Treesearch

    David J. Rugg

    2004-01-01

    This report documents a computer program for creating metadata compliant with the Federal Geographic Data Committee (FGDC) 1998 metadata standard or the National Biological Information Infrastructure (NBII) 1999 Biological Data Profile for the FGDC standard. The software runs under the Microsoft Windows 2000 and XP operating systems, and requires the presence of...

  6. A Model for the Creation of Human-Generated Metadata within Communities

    ERIC Educational Resources Information Center

    Brasher, Andrew; McAndrew, Patrick

    2005-01-01

    This paper considers situations for which detailed metadata descriptions of learning resources are necessary, and focuses on human generation of such metadata. It describes a model which facilitates human production of good quality metadata by the development and use of structured vocabularies. Using examples, this model is applied to single and…

  7. A high throughput soybean gene identification system developed using soybean yellow common mosaic virus (SYCMV)

    USDA-ARS?s Scientific Manuscript database

    Soybean yellow common mosaic virus (SYCMV) was recently reported from Korea, and a subsequent survey of soybean fields found that SYCMV, Soybean yellow mottle mosaic virus (SYMMV), and Soybean mosaic virus (SMV) infections were widespread. SYCMV has recently been developed into a Virus Inducing Gene...

  8. Mortality of Mississippi Sandhill Crane chicks

    USGS Publications Warehouse

    Olsen, Glenn H.

    2004-01-01

    Mississippi sandhill cranes (Grus canadensis pulla) are a highly endangered species that live in the wild in 1 county in Mississippi. As part of a large effort to restore these endangered cranes, we are conducting a project to look at the causes of mortality in crane chicks on the Mississippi Sandhill Crane National Wildlife Refuge in Gautier, MS, USA. This includes surgically implanting miniature radio transmitters in crane chicks to gather data on mortality. This article describes some of the practical difficulties in conducting this type of project in a savannah and swamp location along the Gulf Coast of the USA.

  9. Notice of Violation Hercules Inc., Hattiesburg, Mississippi

    EPA Pesticide Factsheets

    Letter dated November 20, 2008 from Mississippi Department of Environmental Quality to Hercules, Inc. in Hattiesburg, Mississippi about a review of Hercules' impounding basin response package and the apparent violations.

  10. Newly identified resistance to soybean aphid (Aphis glycines) in soybean plant introduction lines

    USDA-ARS?s Scientific Manuscript database

    Host-plant resistance is potentially efficacious in managing the soybean aphid (SA, Aphis glycines Matsumura), a major invasive pest in northern soybean-production regions of North America. However, development of aphid-resistant soybean has been complicated by the presence of virulent SA biotypes,...

  11. Pragmatic Metadata Management for Integration into Multiple Spatial Data Infrastructure Systems and Platforms

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Scott, S.

    2013-12-01

    While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers who aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into
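
    A hedged sketch of the mapping-plus-injection pattern described above (not GSTORE code): a single internal "superset" record is rendered into two simplified target layouts, with distribution information injected at request time. The record contents and field names are invented for illustration.

      INTERNAL = {  # invented "superset" record
          "title": "Hypothetical elevation mosaic",
          "abstract": "Invented example record.",
          "bbox": [-109.05, 31.33, -103.0, 37.0],
          "keywords": ["elevation", "New Mexico"],
      }
      SERVICE_INFO = {"distribution_url": "https://example.org/download/123"}  # injected at request time

      def to_dublin_core_like(rec, services):
          return {
              "dc:title": rec["title"],
              "dc:description": rec["abstract"],
              "dc:subject": "; ".join(rec["keywords"]),
              "dc:identifier": services["distribution_url"],
          }

      def to_minimal_iso_like(rec, services):
          west, south, east, north = rec["bbox"]
          return {
              "citation_title": rec["title"],
              "extent": {"west": west, "south": south, "east": east, "north": north},
              "online_resource": services["distribution_url"],
          }

      for render in (to_dublin_core_like, to_minimal_iso_like):
          print(render(INTERNAL, SERVICE_INFO))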

  12. 78 FR 54955 - Mississippi Central Railroad Co.-Lease and Change in Operators Exemption-Line of Mississippi...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-06

    ... Central Railroad Co.--Lease and Change in Operators Exemption--Line of Mississippi-Alabama Railroad Authority Mississippi Central Railroad Co. (MSCI), a Class III rail carrier, has filed a verified notice of exemption under 49 CFR 1150.41 to lease and operate a 41.5-mile line of railroad between milepost IC-529.5...

  13. Metadata and annotations for multi-scale electrophysiological data.

    PubMed

    Bower, Mark R; Stead, Matt; Brinkmann, Benjamin H; Dufendach, Kevin; Worrell, Gregory A

    2009-01-01

    The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for "big data" files is presented.

  14. The strategy of sustainable soybean development to increase soybean needs in North Sumatera

    NASA Astrophysics Data System (ADS)

    Handayani, L.; Rauf, A.; Rahmawaty; Supriana, T.

    2018-02-01

    The objective of the research was to analyze the internal and external factors influencing the strategy of sustainable soybean development to meet soybean needs in North Sumatera. SWOT analysis was used to identify these internal and external factors. The resulting strategy for increasing soybean production in the research area is an aggressive, or SO (Strengths-Opportunities), strategy, which uses strengths to exploit existing opportunities through the following activities: (1) use certified seeds in accordance with government regulations and policies; (2) utilize the level of soil fertility and cropping patterns to meet the demand for soybeans; and (3) utilize human resources by becoming members of farmer groups.

  15. Effect of γ irradiation on the fatty acid composition of soybean and soybean oil.

    PubMed

    Minami, Ikuko; Nakamura, Yoshimasa; Todoriki, Setsuko; Murata, Yoshiyuki

    2012-01-01

    Food irradiation is a form of food processing to extend the shelf life and reduce spoilage of food. We examined the effects of γ radiation on the fatty acid composition, lipid peroxidation level, and antioxidative activity of soybean and soybean oil which both contain a large amount of unsaturated fatty acids. Irradiation at 10 to 80 kGy under aerobic conditions did not markedly change the fatty acid composition of soybean. While 10-kGy irradiation did not markedly affect the fatty acid composition of soybean oil under either aerobic or anaerobic conditions, 40-kGy irradiation considerably altered the fatty acid composition of soybean oil under aerobic conditions, but not under anaerobic conditions. Moreover, 40-kGy irradiation produced a significant amount of trans fatty acids under aerobic conditions, but not under anaerobic conditions. Irradiating soybean oil induced lipid peroxidation and reduced the radical scavenging activity under aerobic conditions, but had no effect under anaerobic conditions. These results indicate that the fatty acid composition of soybean was not markedly affected by radiation at 10 kGy, and that anaerobic conditions reduced the degradation of soybean oil that occurred with high doses of γ radiation.

  16. Waterfowl density on agricultural fields managed to retain water in winter

    USGS Publications Warehouse

    Twedt, D.J.; Nelms, C.O.

    1999-01-01

    Managed water on private and public land provides habitat for wintering waterfowl in the Mississippi Valley, where flood control projects have reduced the area of natural flooding. We compared waterfowl densities on rice, soybean, and moist-soil fields under cooperative agreements to retain water from 1 November through 28 February in Arkansas and Mississippi and assessed temporal changes in waterfowl density during winter in 1991-1992 and 1992-1993. Fields flooded earlier in Arkansas, but retained water later in Mississippi. Over winter, waterfowl densities decreased in Arkansas and increased in Mississippi. Densities of waterfowl, including mallard (Anas platyrhynchos), the most abundant species observed, were greatest on moist-soil fields. However, soybean fields had the greatest densities of northern shoveler (Spatula clypeata).

  17. A metadata reporting framework (FRAMES) for synthesis of ecohydrological observations

    DOE PAGES

    Christianson, Danielle S.; Varadharajan, Charuleka; Christoffersen, Bradley; ...

    2017-06-20

    Metadata describe the ancillary information needed for data interpretation, comparison across heterogeneous datasets, and quality control and quality assessment (QA/QC). Metadata enable the synthesis of diverse ecohydrological and biogeochemical observations, an essential step in advancing a predictive understanding of earth systems. Environmental observations can be taken across a wide range of spatiotemporal scales in a variety of measurement settings and approaches, and saved in multiple formats. Thus, well-organized, consistent metadata are required to produce usable data products from diverse observations collected in disparate field sites. However, existing metadata reporting protocols do not support the complex data synthesis needs of interdisciplinary earth system research. We developed a metadata reporting framework (FRAMES) to enable predictive understanding of carbon cycling in tropical forests under global change. FRAMES adheres to best practices for data and metadata organization, enabling consistent data reporting and thus compatibility with a variety of standardized data protocols. We used an iterative scientist-centered design process to develop FRAMES. The resulting modular organization streamlines metadata reporting and can be expanded to incorporate additional data types. The flexible data reporting format incorporates existing field practices to maximize data-entry efficiency. With FRAMES’s multi-scale measurement position hierarchy, data can be reported at observed spatial resolutions and then easily aggregated and linked across measurement types to support model-data integration. FRAMES is in early use by both data providers and users. In this article, we describe FRAMES, identify lessons learned, and discuss areas of future development.

  18. A metadata reporting framework (FRAMES) for synthesis of ecohydrological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christianson, Danielle S.; Varadharajan, Charuleka; Christoffersen, Bradley

    Metadata describe the ancillary information needed for data interpretation, comparison across heterogeneous datasets, and quality control and quality assessment (QA/QC). Metadata enable the synthesis of diverse ecohydrological and biogeochemical observations, an essential step in advancing a predictive understanding of earth systems. Environmental observations can be taken across a wide range of spatiotemporal scales in a variety of measurement settings and approaches, and saved in multiple formats. Thus, well-organized, consistent metadata are required to produce usable data products from diverse observations collected in disparate field sites. However, existing metadata reporting protocols do not support the complex data synthesis needs of interdisciplinary earth system research. We developed a metadata reporting framework (FRAMES) to enable predictive understanding of carbon cycling in tropical forests under global change. FRAMES adheres to best practices for data and metadata organization, enabling consistent data reporting and thus compatibility with a variety of standardized data protocols. We used an iterative scientist-centered design process to develop FRAMES. The resulting modular organization streamlines metadata reporting and can be expanded to incorporate additional data types. The flexible data reporting format incorporates existing field practices to maximize data-entry efficiency. With FRAMES’s multi-scale measurement position hierarchy, data can be reported at observed spatial resolutions and then easily aggregated and linked across measurement types to support model-data integration. FRAMES is in early use by both data providers and users. In this article, we describe FRAMES, identify lessons learned, and discuss areas of future development.
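
    The multi-scale aggregation idea can be pictured with a minimal sketch (not FRAMES itself): observations reported at individual sensor positions are rolled up through a simple plot/tree/sensor hierarchy. The names and values below are invented.

      from statistics import mean

      observations = [  # invented sap-flow readings reported at sensor positions
          {"plot": "P1", "tree": "T1", "sensor": "sap-1", "value": 2.4},
          {"plot": "P1", "tree": "T1", "sensor": "sap-2", "value": 2.8},
          {"plot": "P1", "tree": "T2", "sensor": "sap-3", "value": 1.9},
      ]

      def aggregate(obs, level):
          """Average observed values at the requested level of the position hierarchy."""
          groups = {}
          for o in obs:
              groups.setdefault(o[level], []).append(o["value"])
          return {key: mean(values) for key, values in groups.items()}

      print(aggregate(observations, "tree"))  # per-tree means: {'T1': 2.6, 'T2': 1.9}
      print(aggregate(observations, "plot"))  # plot-level mean, about 2.37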

  19. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of the treatment. For a Medical Center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data and related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. It also focuses on the image metadata content evaluation and metadata quality management.

  20. Heating affects the content and distribution profile of isoflavones in steamed black soybeans and black soybean koji.

    PubMed

    Huang, Ru-Yue; Chou, Cheng-Chun

    2008-09-24

    Steamed black soybeans and black soybean koji, a potentially functional food additive, were subjected to heating at 40-100 degrees C for 30 min. It was found that steamed black soybeans and black soybean koji after heating at 80 degrees C or higher generally showed reduced contents of malonylglucoside, acetylglucoside, and aglycone isoflavone and an increased content of beta-glucoside. A lower reduction in malonylglucoside and acetylglucoside isoflavone but greater reduction in aglycone content was noted in steamed black soybeans compared to black soybean koji after a similar heat treatment. After 30 min of heating at 100 degrees C, steamed black soybean retained ca. 90.3 and 83.8%, respectively, of its original malonylglucoside and acetylglucoside isoflavone, compared to lower residuals of 80.9 and 78.8%, respectively, for black soybean koji. In contrast, the heated black soybeans showed an aglycone residual of 68.0%, which is less than the 80.0% noted with the heated black soybean koji.

  1. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK Universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs), where a standard set of questions is asked of all models which perform standard sets of experiments; disciplinary-level metadata collection, where a standard set of questions is asked of all models but experiments are specified by users; and bespoke metadata creation, where the users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex with many variables that can be modified. Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting

  2. Mississippi and Louisiana Estuarine Areas. Freshwater Diversion to Lake Pontchartrain Basin and Mississippi Sound. Feasibility Study. Volume 1. Main Report.

    DTIC Science & Technology

    1984-04-01

    The study area encompasses 2,960,000 acres. In Louisiana, the area includes the lower Mississippi River from Bayou Manchac to Bayou Terre Aux Boeufs... 1985 SUBJECT: Mississippi and Louisiana Estuarine Areas... Orleans and permits navigation access between the Mississippi River and the Gulf Intracoastal... to migrate back and forth across southeast Louisiana. As the river migrated, it deposited sediment in the form of deltaic marshes. The

  3. Federating Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Baru, C.; Lin, K.

    2009-04-01

    The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation for such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be imbedded in any software application. The need for federated catalogs in GEON arises because, (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US
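
    A toy sketch of the fan-out-and-merge pattern only (not the GEON implementation): a single search request is sent to several catalog back-ends, here represented by stub functions standing in for remote web services, and the results are merged. All identifiers and titles are invented.

      from concurrent.futures import ThreadPoolExecutor

      def catalog_a(query):  # stand-in for one institution's catalog service
          return [{"id": "a-1", "title": "ASTER scene", "catalog": "A"}] if "aster" in query.lower() else []

      def catalog_b(query):  # stand-in for another catalog service
          return [{"id": "b-7", "title": "EarthScope product", "catalog": "B"}]

      CATALOGS = [catalog_a, catalog_b]

      def federated_search(query):
          """Send one query to every registered catalog and merge the hits."""
          with ThreadPoolExecutor() as pool:
              hits = [hit for result in pool.map(lambda c: c(query), CATALOGS) for hit in result]
          return sorted(hits, key=lambda h: h["id"])

      print(federated_search("ASTER"))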

  4. Archive of Side Scan Sonar and Swath Bathymetry Data collected during USGS Cruise 10CCT02 Offshore of Petit Bois Island Including Petit Bois Pass, Gulf Islands National Seashore, Mississippi, March 2010

    USGS Publications Warehouse

    Pfeiffer, William R.; Flocks, James G.; DeWitt, Nancy T.; Forde, Arnell S.; Kelso, Kyle; Thompson, Phillip R.; Wiese, Dana S.

    2011-01-01

    In March of 2010, the U.S. Geological Survey (USGS) conducted geophysical surveys offshore of Petit Bois Island, Mississippi, and Dauphin Island, Alabama (fig. 1). These efforts were part of the USGS Gulf of Mexico Science Coordination partnership with the U.S. Army Corps of Engineers (USACE) to assist the Mississippi Coastal Improvements Program (MsCIP) and the Northern Gulf of Mexico (NGOM) Ecosystem Change and Hazards Susceptibility Project by mapping the shallow geologic stratigraphic framework of the Mississippi Barrier Island Complex. These geophysical surveys will provide the data necessary for scientists to define, interpret, and provide baseline bathymetry and seafloor habitat for this area and to aid scientists in predicting future geomorphological changes of the islands with respect to climate change, storm impact, and sea-level rise. Furthermore, these data will provide information for barrier island restoration, particularly in Camille Cut, and protection for the historical Fort Massachusetts on Ship Island, Mississippi. For more information please refer to http://ngom.usgs.gov/gomsc/mscip/index.html. This report serves as an archive of the processed swath bathymetry and side scan sonar data (SSS). Data products herein include gridded and interpolated surfaces, seabed backscatter images, and ASCII x,y,z data products for both swath bathymetry and side scan sonar imagery. Additional files include trackline maps, navigation files, GIS files, Field Activity Collection System (FACS) logs, and formal FGDC metadata. Scanned images of the handwritten and digital FACS logs are also provided as PDF files. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.

  5. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    NASA Astrophysics Data System (ADS)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation model, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another as well as for similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D model by any definition. Regarding the remaining 49% which refer to 3D models, different definition of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices on 3D modeling, and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are classified in three packages as General and Complementary on contextual and structural information, and Availability on the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  6. Streamlining Metadata and Data Management for Evolving Digital Libraries

    NASA Astrophysics Data System (ADS)

    Clark, D.; Miller, S. P.; Peckman, U.; Smith, J.; Aerni, S.; Helly, J.; Sutton, D.; Chase, A.

    2003-12-01

    What began two years ago as an effort to stabilize the Scripps Institution of Oceanography (SIO) data archives from more than 700 cruises going back 50 years, has now become the operational fully-searchable "SIOExplorer" digital library, complete with thousands of historic photographs, images, maps, full text documents, binary data files, and 3D visualization experiences, totaling nearly 2 terabytes of digital content. Coping with data diversity and complexity has proven to be more challenging than dealing with large volumes of digital data. SIOExplorer has been built with scalability in mind, so that the addition of new data types and entire new collections may be accomplished with ease. It is a federated system, currently interoperating with three independent data-publishing authorities, each responsible for their own quality control, metadata specifications, and content selection. The IT architecture implemented at the San Diego Supercomputer Center (SDSC) streamlines the integration of additional projects in other disciplines with a suite of metadata management and collection building tools for "arbitrary digital objects." Metadata are automatically harvested from data files into domain-specific metadata blocks, and mapped into various specification standards as needed. Metadata can be browsed and objects can be viewed onscreen or downloaded for further analysis, with automatic proprietary-hold request management.
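
    The automatic-harvesting step can be illustrated with a small, hedged sketch (not SIOExplorer code): basic attributes of a digital object are gathered into a generic metadata block that could later be mapped into other specifications. The field names are invented for the example.

      import hashlib
      import os
      from datetime import datetime, timezone

      def harvest(path):
          """Collect basic attributes of a file into a generic metadata block."""
          stat = os.stat(path)
          with open(path, "rb") as f:
              digest = hashlib.md5(f.read()).hexdigest()
          return {
              "object": os.path.basename(path),
              "size_bytes": stat.st_size,
              "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
              "md5": digest,
          }

      print(harvest(__file__))  # harvests metadata for this script itself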

  7. ALE: automated label extraction from GEO metadata.

    PubMed

    Giles, Cory B; Brown, Chase A; Ripperger, Michael; Dennis, Zane; Roopnarinesingh, Xiavan; Porter, Hunter; Perz, Aleksandra; Wren, Jonathan D

    2017-12-28

    NCBI's Gene Expression Omnibus (GEO) is a rich community resource containing millions of gene expression experiments from human, mouse, rat, and other model organisms. However, information about each experiment (metadata) is in the format of an open-ended, non-standardized textual description provided by the depositor. Thus, classification of experiments for meta-analysis by factors such as gender, age of the sample donor, and tissue of origin is not feasible without assigning labels to the experiments. Automated approaches are preferable for this, primarily because of the size and volume of the data to be processed, but also because they ensure standardization and consistency. While some of these labels can be extracted directly from the textual metadata, many of the data available do not contain explicit text informing the researcher about the age and gender of the subjects within the study. To bridge this gap, machine-learning methods can be trained to use the gene expression patterns associated with the text-derived labels to refine label-prediction confidence. Our analysis shows only 26% of metadata text contains information about gender and 21% about age. In order to ameliorate the lack of available labels for these data sets, we first extract labels from the textual metadata for each GEO RNA dataset and evaluate the performance against a gold standard of manually curated labels. We then use machine-learning methods to predict labels based upon gene expression of the samples and compare this to the text-based method. Here we present an automated method to extract labels for age, gender, and tissue from textual metadata and GEO data using both a heuristic approach as well as machine learning. We show that the two methods together improve the accuracy of label assignment to GEO samples.
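
    A minimal sketch of the heuristic idea only (not the published ALE code): age and gender labels are pulled out of free-text sample descriptions with regular expressions, returning None where no pattern matches. The patterns and example text are simplified assumptions.

      import re

      AGE_RE = re.compile(r"\bage[:\s]+(\d{1,3})\b", re.IGNORECASE)
      GENDER_RE = re.compile(r"\b(male|female)\b", re.IGNORECASE)

      def extract_labels(text):
          """Return age and gender labels found in a free-text metadata description."""
          age = AGE_RE.search(text)
          gender = GENDER_RE.search(text)
          return {
              "age": int(age.group(1)) if age else None,
              "gender": gender.group(1).lower() if gender else None,
          }

      print(extract_labels("Sample from a female donor, age: 54, liver tissue"))
      # -> {'age': 54, 'gender': 'female'}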

  8. 40 CFR 282.74 - Mississippi State-Administered Program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... provisions include: (1) Mississippi Groundwater Protection Trust Fund Regulations. Section XX Enforcement... Section XIII Enforcement and Appeals; Section XIV Property Rights. (2) Mississippi Groundwater Protection Trust Fund...

  9. 40 CFR 282.74 - Mississippi State-Administered Program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... provisions include: (1) Mississippi Groundwater Protection Trust Fund Regulations. Section XX Enforcement... Section XIII Enforcement and Appeals; Section XIV Property Rights. (2) Mississippi Groundwater Protection Trust Fund...

  10. 40 CFR 282.74 - Mississippi State-Administered Program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... provisions include: (1) Mississippi Groundwater Protection Trust Fund Regulations. Section XX Enforcement... Section XIII Enforcement and Appeals; Section XIV Property Rights. (2) Mississippi Groundwater Protection Trust Fund...

  11. 40 CFR 282.74 - Mississippi State-Administered Program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... provisions include: (1) Mississippi Groundwater Protection Trust Fund Regulations. Section XX Enforcement... Section XIII Enforcement and Appeals; Section XIV Property Rights. (2) Mississippi Groundwater Protection Trust Fund...

  12. 40 CFR 282.74 - Mississippi State-Administered Program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... provisions include: (1) Mississippi Groundwater Protection Trust Fund Regulations. Section XX Enforcement... Section XIII Enforcement and Appeals; Section XIV Property Rights. (2) Mississippi Groundwater Protection Trust Fund...

  13. A document centric metadata registration tool constructing earth environmental data infrastructure

    NASA Astrophysics Data System (ADS)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the covering geographic scope or the creation date. The metadata standards including these common attributes are published by the geographic information technical committee (TC211) in ISO (the International Organization for Standardization) as specifications ISO 19115:2003 and 19139:2007. Accordingly, DIAS metadata are developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata are useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize additional tasks and the time spent creating metadata. Another is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and there is no extra cost for data providers to generate metadata. The tool is developed as a Web application, so data providers need no additional software beyond a web browser. The interface of the tool

  14. Mississippi River Delta

    NASA Image and Video Library

    2002-06-11

    As the Mississippi River enters the Gulf of Mexico, it loses energy and dumps its load of sediment that it has carried on its journey through the mid continent. This pile of sediment, or mud, accumulates over the years building up the delta front. As one part of the delta becomes clogged with sediment, the delta front will migrate in search of new areas to grow. The area shown on this image is the currently active delta front of the Mississippi. The migratory nature of the delta forms natural traps for oil. Most of the land in the image consists of mud flats and marsh lands. There is little human settlement in this area due to the instability of the sediments. The main shipping channel of the Mississippi River is the broad stripe running northwest to southeast. This image was acquired on May 24, 2001 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image Earth for the next 6 years to map and monitor the changing surface of our planet. http://photojournal.jpl.nasa.gov/catalog/PIA03497

  15. An Induced Infiltration and Groundwater Transfer Project to Enhance Recharge in the Lower Mississippi River Valley Alluvial Aquifer: Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Rigby, J.; Haugh, C. J.; Barlow, J.

    2015-12-01

    The Lower Mississippi River Basin is one of the major agricultural production regions in the United States producing over two-thirds of the rice, nearly half of sugarcane produced in the U.S., as well as significant amounts of soybeans, corn, and cotton. While the region experiences over 50 inches of precipitation annually, reaching yield potential for crops requires irrigation. Approximately 75% of crop acres in the alluvial valley are irrigated, and the expectation is that all acreage will eventually be irrigated. Currently over 90% of water for crop irrigation is derived from the shallow alluvial aquifer outpacing net recharge by several million acre-feet per year. This has resulted in severe groundwater declines in Arkansas and an increasingly threatening situation in northwestern Mississippi. In Mississippi, direct injection has received increasing attention as a means of artificial recharge, though water quality remains a concern both for the integrity of the aquifer and efficiency of injection. This project considers the use of pumping wells near major rivers known to be in connection with the aquifer to induce additional infiltration of surface water by steepening local gradients. The pumped water would be transferred by pipeline to areas within the regional cone of depression where it is then injected to enhance groundwater recharge. Groundwater flow modeling with zone budget analysis is used to evaluate the potential for net supply gains from induced infiltration at potential sites along major rivers in the region. The groundwater model will further evaluate the impact of the transfer and direct injection on regional water tables.

  16. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge

  17. 75 FR 25283 - Mississippi; Major Disaster and Related Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ... counties in the State of Mississippi are eligible to apply for assistance under the Hazard Mitigation Grant... declaration of a major disaster for the State of Mississippi (FEMA-1906-DR), dated April 29, 2010, and related... in certain areas of the State of Mississippi resulting from severe storms, tornadoes, and flooding...

  18. Long-term measurements of agronomic crop irrigation in the Mississippi Delta portion of the Lower Mississippi River Valley

    USDA-ARS?s Scientific Manuscript database

    With over 4 million ha irrigated cropland, the Lower Mississippi River Valley (LMRV) is a highly productive agricultural region where irrigation practices are similar and the Mississippi River Valley alluvial aquifer (MRVA) is a primary source of on-demand irrigation. Owing to agricultural exports, ...

  19. Metadata management for high content screening in OMERO

    PubMed Central

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.

    2016-01-01

    High content screening (HCS) experiments create a classic data management challenge—multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of “final” results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368

  20. Metadata for data rescue and data at risk

    USGS Publications Warehouse

    Anderson, William L.; Faundeen, John L.; Greenberg, Jane; Taylor, Fraser

    2011-01-01

    Scientific data age, become stale, fall into disuse and run tremendous risks of being forgotten and lost. These problems can be addressed by archiving and managing scientific data over time, and establishing practices that facilitate data discovery and reuse. Metadata documentation is integral to this work and essential for measuring and assessing high priority data preservation cases. The International Council for Science: Committee on Data for Science and Technology (CODATA) has a newly appointed Data-at-Risk Task Group (DARTG), participating in the general arena of rescuing data. The DARTG primary objective is building an inventory of scientific data that are at risk of being lost forever. As part of this effort, the DARTG is testing an approach for documenting endangered datasets. The DARTG is developing a minimal and easy to use set of metadata properties for sufficiently describing endangered data, which will aid global data rescue missions. The DARTG metadata framework supports rapid capture, and easy documentation, across an array of scientific domains. This paper reports on the goals and principles supporting the DARTG metadata schema, and provides a description of the preliminary implementation.
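
    To make the idea of a minimal description concrete, here is a hedged sketch; the property names and example values below are invented for illustration and are not the DARTG schema.

      from dataclasses import dataclass, asdict

      @dataclass
      class EndangeredDatasetRecord:
          """A minimal, illustrative description of a dataset at risk of loss."""
          title: str
          custodian: str
          medium: str            # e.g. paper logbooks, magnetic tape
          condition: str         # free-text assessment of physical condition
          scientific_domain: str
          estimated_volume: str

      record = EndangeredDatasetRecord(
          title="Tide-gauge logbooks, 1921-1954",
          custodian="Harbour authority archive",
          medium="paper logbooks",
          condition="water damage on about 10% of pages",
          scientific_domain="oceanography",
          estimated_volume="38 volumes",
      )
      print(asdict(record))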

  1. A novel framework for assessing metadata quality in epidemiological and public health research settings

    PubMed Central

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor quality metadata renders data analyses problematic. In this study, we created and evaluated a novel framework to assess metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions; none of which were specifically aimed at biomedical metadata. 96 individuals completed the survey; of those who submitted data, most only assessed metadata quality sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly. PMID:27570670

  2. A novel framework for assessing metadata quality in epidemiological and public health research settings.

    PubMed

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor quality metadata renders data analyses problematic. In this study, we created and evaluated a novel framework to assess metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions; none of which were specifically aimed at biomedical metadata. 96 individuals completed the survey; of those who submitted data, most only assessed metadata quality sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly.
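
    A toy sketch inspired by the four-section structure described above (not the authors' framework): a checklist is scored per section; the individual questions are invented for the example.

      CHECKLIST = {
          "general information": ["Is a dataset title recorded?", "Is a contact person named?"],
          "tools and technologies": ["Is the metadata format machine-readable?"],
          "usability": ["Are variable names and units documented?"],
          "management and curation": ["Is there a defined update/review schedule?"],
      }

      def score(answers):
          """answers maps each question to True/False; returns per-section completeness."""
          report = {}
          for section, questions in CHECKLIST.items():
              answered_yes = sum(bool(answers.get(q)) for q in questions)
              report[section] = f"{answered_yes}/{len(questions)}"
          return report

      example = {q: True for qs in CHECKLIST.values() for q in qs}
      example["Is there a defined update/review schedule?"] = False
      print(score(example))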

  3. 78 FR 64397 - Mississippi Regulatory Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-29

    ... text of the program amendment available at www.regulations.gov . A. Mississippi Surface Coal Mining... DEPARTMENT OF THE INTERIOR Office of Surface Mining Reclamation and Enforcement 30 CFR Part 924...; S2D2SSS08011000SX066A00033F13XS501520] Mississippi Regulatory Program AGENCY: Office of Surface Mining Reclamation and Enforcement...

  4. Differences in phosphorus and nitrogen delivery to the Gulf of Mexico from the Mississippi River Basin

    USGS Publications Warehouse

    Alexander, R.B.; Smith, R.A.; Schwarz, G.E.; Boyer, E.W.; Nolan, J.V.; Brakebill, J.W.

    2008-01-01

    Seasonal hypoxia in the northern Gulf of Mexico has been linked to increased nitrogen fluxes from the Mississippi and Atchafalaya River Basins, though recent evidence shows that phosphorus also influences productivity in the Gulf. We developed a spatially explicit and structurally detailed SPARROW water-quality model that reveals important differences in the sources and transport processes that control nitrogen (N) and phosphorus (P) delivery to the Gulf. Our model simulations indicate that agricultural sources in the watersheds contribute more than 70% of the delivered N and P. However, corn and soybean cultivation is the largest contributor of N (52%), followed by atmospheric deposition sources (16%); whereas P originates primarily from animal manure on pasture and rangelands (37%), followed by corn and soybeans (25%), other crops (18%), and urban sources (12%). The fraction of in-stream P and N load delivered to the Gulf increases with stream size, but reservoir trapping of P causes large local- and regional-scale differences in delivery. Our results indicate the diversity of management approaches required to achieve efficient control of nutrient loads to the Gulf. These include recognition of important differences in the agricultural sources of N and P, the role of atmospheric N, attention to P sources downstream from reservoirs, and better control of both N and P in close proximity to large rivers. © 2008 American Chemical Society.

  5. Studies of Big Data metadata segmentation between relational and non-relational databases

    NASA Astrophysics Data System (ADS)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies to demonstrate how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
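
    The abstract does not spell out how the schema is split, so the sketch below only illustrates the general hybrid idea under assumed field names: a narrow relational table holds the frequently queried, fixed attributes, while the full, variable-structure metadata document lives in a document store (an in-memory dict stands in for the NoSQL side).

    import json
    import sqlite3

    # Relational side: a narrow table of hot, fixed attributes for fast filtering.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE task_meta (task_id TEXT PRIMARY KEY, status TEXT, created TEXT)")

    # "NoSQL" side: the full, schema-flexible metadata document (dict stands in for a document store).
    doc_store = {}

    def store(task_id, status, created, full_document):
        db.execute("INSERT INTO task_meta VALUES (?, ?, ?)", (task_id, status, created))
        doc_store[task_id] = json.dumps(full_document)

    store("task-001", "done", "2015-11-02",
          {"task_id": "task-001", "status": "done", "created": "2015-11-02",
           "inputs": ["fileA", "fileB"], "site": "SITE-PROD", "log": "..."})

    # A trend query hits only the relational table; the bulky document is fetched on demand.
    for (task_id,) in db.execute("SELECT task_id FROM task_meta WHERE status = 'done'"):
        print(task_id, json.loads(doc_store[task_id])["site"])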

  6. Metadata improvements driving new tools and services at a NASA data center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Hausman, J.; Foti, G.; Armstrong, E. M.

    2011-12-01

    The NASA Physical Oceanography DAAC (PO.DAAC) is responsible for distributing and maintaining satellite-derived oceanographic data from a number of NASA and non-NASA missions for the physical disciplines of ocean winds, sea surface temperature, ocean topography and gravity. Currently its holdings consist of over 600 datasets with a data archive in excess of 200 terabytes. The PO.DAAC has recently embarked on a metadata quality and completeness project to migrate, update and improve metadata records for over 300 public datasets. An interactive database management tool has been developed to allow data scientists to enter, update and maintain metadata records. This tool communicates directly with PO.DAAC's Data Management and Archiving System (DMAS), which serves as the new archival and distribution backbone as well as a permanent repository of dataset and granule-level metadata. Although we will briefly discuss the tool, more important ramifications are the ability to now expose, propagate and leverage the metadata in a number of ways. First, the metadata are exposed through a faceted and free-text search interface directly from Drupal-based PO.DAAC web pages, allowing for quick browsing and data discovery, especially by "drilling" through the various facet levels that organize datasets by time/space resolution, processing level, sensor, measurement type, etc. Furthermore, the metadata can now be exposed through web services to produce metadata records in a number of different formats such as FGDC and ISO 19115, or potentially propagated to visualization and subsetting tools, and other discovery interfaces. The fundamental concept is that the metadata form the essential bridge between the user and the tool or discovery mechanism for a broad range of ocean earth science data records.
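
    As a rough illustration of facet-based drilling over dataset-level metadata (the facet names and values below are assumptions, not PO.DAAC's actual vocabulary), the sketch counts facet values and then filters records facet by facet.

    # Toy dataset-level metadata records; facet values are illustrative only.
    datasets = [
        {"name": "SST L4 daily",   "measurement": "sea surface temperature", "processing_level": "L4", "sensor": "multi-sensor"},
        {"name": "SST L2 swath",   "measurement": "sea surface temperature", "processing_level": "L2", "sensor": "MODIS"},
        {"name": "Ocean winds L3", "measurement": "ocean winds",             "processing_level": "L3", "sensor": "scatterometer"},
    ]

    def facet_counts(records, facet):
        counts = {}
        for r in records:
            counts[r[facet]] = counts.get(r[facet], 0) + 1
        return counts

    def drill(records, **selections):
        return [r for r in records if all(r[k] == v for k, v in selections.items())]

    print(facet_counts(datasets, "measurement"))   # browse available facet values
    print(drill(datasets, measurement="sea surface temperature", processing_level="L4"))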

  7. Inter-University Upper Atmosphere Global Observation Network (IUGONET) Metadata Database and Its Interoperability

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.

    2013-12-01

    The IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes and helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of the above-described metadata. We have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), which enables a search for data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated in IUGONET. Furthermore, metadata of observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the Meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. In the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to get an overview of the metadata database. The number of registered metadata records as of the end of July totalled 8.8 million, including 793 observatories and 878 instruments. It is important to promote interoperability and/or metadata exchange between the database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has similar objectives to IUGONET with regard to a framework for formal collaboration. Furthermore, observations by satellites and the International Space Station are being incorporated with a view for

  8. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Incorporating a semantically enabled metadata repository, LISIRD 3 users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, mission and PI. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also enables the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on demand reformatting of data and timestamps, subsetting and aggregation, and other server side functionality via a RESTful OPeNDAP compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.

  9. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or non-existent, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.
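
    The SQL/MDA grammar itself is not reproduced in the abstract, so the sketch below only mimics the integration idea in plain Python: relational-style metadata rows point at multidimensional arrays, and a single query filters on metadata and then aggregates over a subregion of the linked array, which is roughly what a SQL/MDA statement would express declaratively.

    # Toy illustration of integrated array/metadata analytics (not actual SQL/MDA syntax).
    scenes = [
        {"id": 1, "sensor": "A", "cloud_cover": 0.1, "array": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]},
        {"id": 2, "sensor": "A", "cloud_cover": 0.7, "array": [[9, 9, 9], [9, 9, 9], [9, 9, 9]]},
        {"id": 3, "sensor": "B", "cloud_cover": 0.0, "array": [[2, 2, 2], [2, 2, 2], [2, 2, 2]]},
    ]

    def region_mean(array, rows, cols):
        values = [array[r][c] for r in rows for c in cols]
        return sum(values) / len(values)

    # Pseudo-SQL for the same request (illustrative only, not SQL/MDA grammar):
    # SELECT id, avg(array[0:1, 0:2]) FROM scenes WHERE sensor = 'A' AND cloud_cover < 0.5
    for scene in scenes:
        if scene["sensor"] == "A" and scene["cloud_cover"] < 0.5:
            print(scene["id"], region_mean(scene["array"], rows=range(0, 2), cols=range(0, 3)))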

  10. A case for user-generated sensor metadata

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata which allows determining the fitness for use of a dataset. Syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet the solutions often are quite complex, which is of course due to the complexity of the problem at hand. Still, we see the most generic information for determining fitness for use is a dataset's provenance, because it allows users to make up their own minds independently from existing classification schemes for data quality. In this work we make the case that curated, user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and for set-ups where the understanding about metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand the ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion
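
    The "extremely simplified data model" is not given in the abstract; the sketch below assumes a minimal shape (sensor id, free-text note, author, timestamp) purely to show how user-contributed provenance notes could attach to a sensor's time series and be reviewed by a prospective data user.

    from datetime import datetime, timezone

    # Hypothetical minimal model: user-contributed notes attached to a sensor's time series.
    annotations = []

    def annotate(sensor_id, note, author):
        annotations.append({
            "sensor_id": sensor_id,
            "note": note,          # free-text provenance / fitness-for-use remark
            "author": author,
            "created": datetime.now(timezone.utc).isoformat(),
        })

    annotate("rain-gauge-42", "Gauge relocated 50 m uphill in March; earlier values not comparable.", "citizen_a")
    annotate("rain-gauge-42", "Spikes on 2014-07-01 look like a sensor fault, not rainfall.", "researcher_b")

    # A prospective data user reviews all notes before deciding fitness for use.
    for a in annotations:
        if a["sensor_id"] == "rain-gauge-42":
            print(f'{a["created"]} [{a["author"]}] {a["note"]}')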

  11. Double-crested cormorants along the upper Mississippi River

    USGS Publications Warehouse

    Kirsch, E.M.

    1995-01-01

    The Upper Mississippi River is an important habitat corridor for migratory birds and other wildlife, and it supports an important commercial and sport fishery. A study was initiated by the U.S. Fish and Wildlife Service in 1991 to describe Double-crested cormorant (Phalacrocorax auritus) distribution and abundance on the Upper Mississippi River throughout the year to better understand the possible impacts of cormorants on fish resources and populations of other piscivorous birds. Double-crested Cormorants were common breeders and abundant during migration on the Upper Mississippi River during the 1940s. Numbers of cormorants declined in the 1960s and 1970s along the Upper Mississippi River as they did in other parts of the United States. In 1992, 418 cormorant pairs were estimated to have nested in four colonies on the Upper Mississippi River, and less than 7,000 cormorants were estimated to have migrated along the river during the fall and spring of 1991 and 1992. Recent public concern for fish resources has grown with a perceived growth of the local cormorant population. Migrating cormorants collected on the Upper Mississippi River took Gizzard Shad (Dorosoma cepedianum) primarily, but chicks were fed a wide variety of fish species.

  12. The Metadata Cloud: The Last Piece of a Distributed Data System Model

    NASA Astrophysics Data System (ADS)

    King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.

    2012-12-01

    Distributed data systems have existed ever since systems were networked together. Over the years the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and, finally, to cloud-based systems. Initially, metadata was tightly coupled to the data, either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data multiplied, data volumes increased and services specialized to improve efficiency, a cloud system model has emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data", where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and relate this to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.

  13. The Impact of Colorectal Cancer (CRC) in Mississippi, and the need for Mississippi to Eliminate its CRC Burden.

    PubMed

    Duhé, Roy J

    2016-03-01

    Colorectal cancer (CRC), while highly preventable and highly treatable, is a major public health problem in Mississippi. This article reviews solutions to this problem, beginning with the relationship between modifiable behavioral risk factors and CRC incidence. It then describes the impact of CRC screening on national downward trends in CRC incidence and mortality and summarizes recent data on the burden of CRC in Mississippi. While other states have created Comprehensive Colorectal Cancer Control Programs in an organized effort to manage this public health problem, Mississippi has not. Responding to Mississippi's situation, the 70x2020 Colorectal Cancer Screening Initiative arose as an unconventional approach to increase CRC screening rates throughout the state. This article concludes by considering the current limits of CRC treatment success and proposes that improved clinical outcomes should result from research to translate recently-identified colorectal cancer subtype information into novel clinical paradigms for the treatment of early-stage colorectal cancer.

  14. Discovery of a seventh Rpp soybean rust resistance locus in soybean accession PI 605823

    USDA-ARS?s Scientific Manuscript database

    Soybean rust, caused by the obligate biotrophic fungal pathogen Phakopsora pachyrhizi Syd. & Syd, is a disease threat to soybean production in regions of the world with mild winters. Host plant resistance to P. pachyrhizi conditioned by Rpp genes has been found in numerous soybean accessions, and at...

  15. Characterization and genetics of multiple soybean aphid biotype resistance in five soybean plant introductions

    USDA-ARS?s Scientific Manuscript database

    Soybean aphid (Aphis glycines Matsumura) is the most important soybean [Glycine max (L.) Merr.] insect pest in the USA. The objectives of this study were to characterize the resistance expressed in the five plant introductions (PIs) to four soybean aphid biotypes, determine the mode of resistance in...

  16. Overexpression of a soybean salicylic acid methlyltransferase gene confers resistance to soybean cyst nematode

    USDA-ARS?s Scientific Manuscript database

    Soybean cyst nematode (Heterodera glycines Ichinohe, SCN) is the most pervasive pest of soybean [Glycine max (L.) Merr.] in the USA and worldwide. SCN reduced soybean yields worldwide by an estimated billion dollars annually. These losses remained stable with the use of resistant cultivars but over ...

  17. Creating Access Points to Instrument-Based Atmospheric Data: Perspectives from the ARM Metadata Manager

    NASA Astrophysics Data System (ADS)

    Troyan, D.

    2016-12-01

    The Atmospheric Radiation Measurement (ARM) program has been collecting data from instruments in diverse climate regions for nearly twenty-five years. These data are made available to all interested parties at no cost via specially designed tools found on the ARM website (www.arm.gov). Metadata is created and applied to the various datastreams to facilitate information retrieval using the ARM website, the ARM Data Discovery Tool, and data quality reporting tools. Over the last year, the Metadata Manager - a relatively new position within the ARM program - created two documents that summarize the state of ARM metadata processes: ARM Metadata Workflow and ARM Metadata Standards. These documents serve as guides to the creation and management of ARM metadata. With many of ARM's data functions spread around the Department of Energy national laboratory complex, and with many of the original architects of the metadata structure no longer working for ARM, these documents have become increasingly important for resolving issues ranging from data flow bottlenecks and inaccurate metadata to improving data discovery and organizing web pages. This presentation will provide some examples from the workflow and standards documents. The examples will illustrate the complexity of the ARM metadata processes and the efficiency with which the metadata team works towards achieving the goal of providing access to data collected under the auspices of the ARM program.

  18. Finding Atmospheric Composition (AC) Metadata

    NASA Technical Reports Server (NTRS)

    Strub, Richard F..; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster reports our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefits that the major catalogs provide are their machine-to-machine tools like OpenSearch and Web Services rather than any GUI usability improvements, due to the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. (4) The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). (6) Most (if not all
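
    As a rough illustration of the machine-to-machine harvesting mentioned above, the sketch below issues an OpenSearch keyword query and pulls a few fields from the returned Atom entries; the endpoint URL and query parameters are placeholders, not the actual GCMD, CWIC or GEOSS interfaces.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"
    # Placeholder endpoint: substitute a real OpenSearch URL taken from a catalog's description document.
    ENDPOINT = "https://catalog.example.org/opensearch"

    def harvest(keyword, count=10):
        url = ENDPOINT + "?" + urllib.parse.urlencode({"q": keyword, "count": count})
        with urllib.request.urlopen(url, timeout=30) as resp:
            feed = ET.parse(resp).getroot()
        # Pull minimal fields from each Atom entry returned by the search.
        return [
            {"title": e.findtext(ATOM + "title"), "id": e.findtext(ATOM + "id")}
            for e in feed.findall(ATOM + "entry")
        ]

    if __name__ == "__main__":
        for record in harvest("nitrogen dioxide"):
            print(record["title"], "->", record["id"])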

  19. Reducing runoff and nutrient loss from agricultural land in the Lower Mississippi River Basin

    NASA Astrophysics Data System (ADS)

    Reba, M. L.; Bouldin, J.; Teague, T.; Choate, J.

    2011-12-01

    The Lower Mississippi River Basin (LMRB) yields suspended sediment, total phosphorus, total nitrogen and silicate that are disproportionately high for the area. In addition, groundwater pumping of the alluvial aquifer has been deemed unsustainable under current practices. Much of the LMRB is used for large-scale agricultural production of primarily cotton, soybeans and rice. The incorporation of conservation practices may improve nutrient use efficiency and reduce runoff from agricultural fields. Three paired fields have been instrumented at the edge-of-field to quantify nutrients and runoff. The fields are located in northeastern Arkansas in the Little River Ditches and St. Francis watersheds. Nutrient use efficiency will be gained by utilizing variable rate fertilizer application technology. Reduced runoff will be gained through improved irrigation management. This study quantifies the runoff and nutrient loss from the first year of a 5-year study and will serve as a baseline for a comparative study of conservation practices employed on the paired fields.

  20. Separation of metadata and pixel data to speed DICOM tag morphing.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of those use cases. Tag morphing includes deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
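
    The MSD format itself is not something a short snippet can reproduce, so the sketch below only illustrates what "tag morphing" means in practice, using the third-party pydicom library on a conventional DICOM file: an attribute is rewritten, one is inserted, and one is deleted. The attribute choices and file paths are illustrative, and pydicom must be installed.

    # Tag morphing on a conventional DICOM file (not the MSD format described above).
    from pydicom import dcmread

    def morph(in_path, out_path, new_patient_id, issuer):
        ds = dcmread(in_path)
        ds.PatientID = new_patient_id      # manipulate an existing attribute
        ds.IssuerOfPatientID = issuer      # insert/localize the IPID attribute
        if "OtherPatientIDs" in ds:
            del ds.OtherPatientIDs         # delete an attribute
        ds.save_as(out_path)

    # Example call (paths are placeholders):
    # morph("study/slice001.dcm", "out/slice001.dcm", new_patient_id="LOCAL-0042", issuer="HOSPITAL-B")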

  1. Mercury- Distributed Metadata Management, Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.

  2. 7 CFR 1220.127 - Soybean products.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Soybean products. 1220.127 Section 1220.127... AGREEMENTS AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE SOYBEAN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Soybean Promotion and Research Order Definitions § 1220.127 Soybean products. The term...

  3. Nutritional requirements for soybean cyst nematode

    USDA-ARS?s Scientific Manuscript database

    Soybeans [Glycine max] are the second-largest cash crop in US agriculture, but soybean yield is compromised by infections from Heterodera glycines, also known as the soybean cyst nematode [SCN]. SCN is the most devastating pathogen or plant disease that soybean producers confront. This obligate parasi...

  4. Identification and molecular mapping of two soybean aphid resistance genes in soybean PI 587732

    USDA-ARS?s Scientific Manuscript database

    Soybean [Glycine max (L.) Merr.] continues to be plagued by the soybean aphid (Aphis glycines Matsumura: SA) in North America. New soybean resistance sources are needed to combat the four identified SA biotypes. The objectives of this study were to determine the inheritance of SA resistance in PI 58...

  5. Cytometry metadata in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communication (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data-mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard, promote interoperability and should also result in the textual and numeric data being published using web technology without any change in composition.

  6. 33 CFR 117.1103 - Upper Mississippi River.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Upper Mississippi River. 117.1103 Section 117.1103 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Wisconsin § 117.1103 Upper Mississippi River. See...

  7. 33 CFR 117.1103 - Upper Mississippi River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Upper Mississippi River. 117.1103 Section 117.1103 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Wisconsin § 117.1103 Upper Mississippi River. See...

  8. 33 CFR 117.1103 - Upper Mississippi River.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Upper Mississippi River. 117.1103 Section 117.1103 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Wisconsin § 117.1103 Upper Mississippi River. See...

  9. 33 CFR 117.1103 - Upper Mississippi River.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Upper Mississippi River. 117.1103 Section 117.1103 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Wisconsin § 117.1103 Upper Mississippi River. See...

  10. 33 CFR 117.1103 - Upper Mississippi River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Upper Mississippi River. 117.1103 Section 117.1103 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Wisconsin § 117.1103 Upper Mississippi River. See...

  11. 75 FR 79064 - Mississippi Disaster #MS-00042

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ... SMALL BUSINESS ADMINISTRATION [Disaster Declaration 12409 and 12410] Mississippi Disaster MS-00042 AGENCY: U.S. Small Business Administration. ACTION: Notice. SUMMARY: This is a notice of an Administrative declaration of a disaster for the State of Mississippi dated 12/07/2010. Incident: Severe storms...

  12. 78 FR 3494 - Mississippi Disaster #MS-00063

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... SMALL BUSINESS ADMINISTRATION [Disaster Declaration 13439 and 13440] Mississippi Disaster MS-00063 AGENCY: U.S. Small Business Administration. ACTION: Notice. SUMMARY: This is a notice of an Administrative declaration of a disaster for the State of Mississippi dated 01/04/2013. Incident: Severe storms...

  13. Soybean irrigation management

    USDA-ARS?s Scientific Manuscript database

    Soybean is an important crop and a major component of the agricultural economy in the Missouri Bootheel and throughout Missouri. USDA’s National Agricultural Statistics Service (NASS) reported that in 2012, 960 thousand acres of soybeans were harvested in Southeast Missouri (Butler, Cape Girardeau, ...

  14. Host Adaptation of Soybean Dwarf Virus Following Serial Passages on Pea (Pisum sativum) and Soybean (Glycine max)

    PubMed Central

    Tian, Bin; Gildow, Frederick E.; Stone, Andrew L.; Sherman, Diana J.; Damsteegt, Vernon D.; Schneider, William L.

    2017-01-01

    Soybean Dwarf Virus (SbDV) is an important plant pathogen, causing economic losses in soybean. In North America, indigenous strains of SbDV mainly infect clover, with occasional outbreaks in soybean. To evaluate the risk of a US clover strain of SbDV adapting to other plant hosts, the clover isolate SbDV-MD6 was serially transmitted to pea and soybean by aphid vectors. Sequence analysis of SbDV-MD6 from pea and soybean passages identified 11 non-synonymous mutations in soybean, and six mutations in pea. Increasing virus titers with each sequential transmission indicated that SbDV-MD6 was able to adapt to the plant host. However, aphid transmission efficiency on soybean decreased until the virus was no longer transmissible. Our results clearly demonstrated that the clover strain of SbDV-MD6 is able to adapt to soybean crops. However, mutations that improve replication and/or movement may have trade-off effects resulting in decreased vector transmission. PMID:28635666

  15. Scalable PGAS Metadata Management on Extreme Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
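
    The paper's metadata-management strategies are not detailed in the abstract, so the sketch below only illustrates the underlying space/time tradeoff with a toy block distribution: an explicit ownership directory (more metadata, direct lookup) versus recomputing the owner from the distribution formula (constant metadata, a little arithmetic per lookup).

    # Two ways to answer "which process owns global index i?" for a block-distributed array.
    N, NPROCS = 1_000_000, 4096
    BLOCK = (N + NPROCS - 1) // NPROCS

    # Strategy A: explicit directory of (start, end, owner) triples, O(NPROCS) metadata.
    directory = [(p * BLOCK, min((p + 1) * BLOCK, N), p) for p in range(NPROCS)]

    def owner_from_directory(i):
        for start, end, p in directory:   # could be a binary search; linear kept for clarity
            if start <= i < end:
                return p

    # Strategy B: closed-form owner computation, O(1) metadata (just N and NPROCS).
    def owner_from_formula(i):
        return i // BLOCK

    assert owner_from_directory(123_456) == owner_from_formula(123_456)
    print(owner_from_formula(123_456))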

  16. Metadata management for high content screening in OMERO.

    PubMed

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R

    2016-03-01

    High content screening (HCS) experiments create a classic data management challenge: multiple, large sets of heterogeneous structured and unstructured data that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and, where appropriate, the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. 33 CFR 117.671 - Upper Mississippi River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Upper Mississippi River. 117.671 Section 117.671 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Minnesota § 117.671 Upper Mississippi River. (a) The...

  18. 33 CFR 117.671 - Upper Mississippi River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Upper Mississippi River. 117.671 Section 117.671 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Minnesota § 117.671 Upper Mississippi River. (a) The...

  19. 33 CFR 117.671 - Upper Mississippi River.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Upper Mississippi River. 117.671 Section 117.671 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Minnesota § 117.671 Upper Mississippi River. (a) The...

  20. Data Mining for Forecasting Mississippi Cropland Data Layers

    NASA Astrophysics Data System (ADS)

    Shore, F. L.; Gregory, T. L.

    2011-12-01

    In 1999, Mississippi became an early adopter of the National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL) program. With the support of the NASS Spatial Analysis Research Section (SARS), we have progressed from an annual crop picture to a pixel-by-pixel history of Mississippi farming. Much of our early work for Mississippi agriculture is now easily provided from the web-based application CropScape, released by SARS in 2011. In this study, pixel history data from CDLs have been mined to give forecasts of Mississippi crop acres. Traditionally, such agricultural data mining emphasizes the trends of early adopters driven by factors such as global warming, technology, practices, or the marketplace. These studies provide forecasted CDL products produced using See5 and Imagine, the same software used in Mississippi CDL production since 2006. Mississippi CDL forecasts were made using historical information available as soon as the CDL for the previous year was completed. For example, the CDL forecast for winter wheat, produced at a date when winter wheat was planted but not most crops, gave results of 104.6 +/- 5.4% of the official NASS estimates for winter wheat for the years 2009-2011. In 2012, all of the states of the contiguous US will have the historical CDL data to do this type of study. A CDL forecast is proposed as a useful addition to CropScape.
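
    The actual forecasting uses See5 decision trees over pixel histories, which is beyond a snippet; the sketch below substitutes a deliberately crude stand-in (predict each pixel's most frequent past label) just to show the shape of a pixel-history forecast and how per-pixel predictions roll up into acreage. The labels and the acres-per-pixel figure are illustrative; a 30 m pixel covers roughly 0.22 acres.

    from collections import Counter

    # Toy pixel histories: each pixel's CDL label for a run of past years (labels are illustrative).
    pixel_histories = [
        ["soybeans", "soybeans", "corn", "soybeans"],
        ["winter wheat", "winter wheat", "winter wheat", "winter wheat"],
        ["cotton", "soybeans", "cotton", "soybeans"],
    ]
    ACRES_PER_PIXEL = 0.222  # roughly one 30 m CDL pixel

    def forecast_label(history):
        # Crude stand-in for the See5 classifier: predict the pixel's most frequent past label.
        return Counter(history).most_common(1)[0][0]

    forecast = Counter(forecast_label(h) for h in pixel_histories)
    for crop, n_pixels in forecast.items():
        print(f"{crop}: {n_pixels * ACRES_PER_PIXEL:.2f} forecast acres")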

  1. OSCAR/Surface: Metadata for the WMO Integrated Observing System WIGOS

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Pröscholdt, Timo; Mannes, Jürg; Cappelletti, Lucia; Grüter, Estelle; Calpini, Bertrand; Zhang, Wenjian

    2016-04-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority underpinning all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). It does this by better integrating WMO and co-sponsored observing systems, as well as partner networks. For this, an important aspect is the description of the observational capabilities by way of structured metadata. The 17th Congress of the World Meteorological Organization (Cg-17) has endorsed the semantic WIGOS metadata standard (WMDS) developed by the Task Team on WIGOS Metadata (TT-WMD). The standard comprises a set of metadata classes that are considered to be of critical importance for the interpretation of observations and the evolution of observing systems relevant to WIGOS. The WMDS serves all recognized WMO Application Areas, and its use for all internationally exchanged observational data generated by WMO Members is mandatory. The standard will be introduced in three phases between 2016 and 2020. The Observing Systems Capability Analysis and Review (OSCAR) platform operated by MeteoSwiss on behalf of WMO is the official repository of WIGOS metadata and an implementation of the WMDS. OSCAR/Surface deals with all surface-based observations from land, air and oceans, combining metadata managed by a number of complementary, more domain-specific systems (e.g., GAWSIS for the Global Atmosphere Watch, JCOMMOPS for the marine domain, the WMO Radar database). It is a modern, web-based client-server application with extended information search, filtering and mapping capabilities, including a fully developed management console to add and edit observational metadata. In addition, a powerful application programming interface (API) is being developed to allow machine-to-machine metadata exchange. The API is based on an ISO/OGC-compliant XML schema for the WMDS using the Observations and Measurements (ISO19156) conceptual model. The purpose of the

  2. Soybean Aphid Population Dynamics, Soybean Yield Loss and Development of Stage-Specific Economic Injury Levels

    USDA-ARS?s Scientific Manuscript database

    Stage-specific economic injury levels form the basis of an integrated pest management approach for soybean aphid (Aphis glycines Matsumura) population management in soybeans (Glycine max L.). Experimental objectives were to develop a procedure for calculating economic injury levels of the soybean a...

  3. The Rivers of the Mississippi Watershed

    NASA Image and Video Library

    2017-12-08

    The Mississippi Watershed is the largest drainage basin in North America at 3.2 million square kilometers in area. The USGS has created a database of this area which indicates the direction of waterflow at each point. By assembling these directions into streamflows, it is possible to trace the path of water from every point of the area to the mouth of the Mississippi in the Gulf of Mexico. This animation starts with the points furthest from the Gulf and reveals the streams and rivers as a steady progression towards the mouth of the Mississippi until all the major rivers are revealed. The speed of the reveal of the rivers is not dependent on the actual speed of the water flow. The reveal proceeds at a constant velocity along each river path, timed so that all reveals reach the mouth of the Mississippi at the same time. This animation does not show actual flow rates of the rivers. All rivers are shown with identical rates. The river colors and widths correspond to the relative lengths of river segments. Credit: NASA's Scientific Visualization Studio/Horace Mitchell Go here to download this video: svs.gsfc.nasa.gov/4493

  4. Estimated water use in Mississippi, 1980

    USGS Publications Warehouse

    Callahan, J.A.

    1980-01-01

    Large quantities of good quality ground and surface water are readily available in nearly all parts of Mississippi, and there is also an abundant supply of saline water in the estuaries along the Mississippi Gulf Coast. The total estimated water use in the State in 1980 from groundwater and surface water was 3532 million gallons/day (mgd), including 662 mgd of saline water. Freshwater used from all sources in Mississippi during the period 1975 through 1980 increased from 2510 mgd to > 2870 mgd, a 14% increase. Although modest increases of freshwater use may be expected in public, self-supplied industrial, and thermoelectric supplies, large future increases in the use of freshwater may be expected primarily as a result of growth in irrigation and aquaculture. Management and protection of the quantity and quality of the available freshwater supply are often problems associated with increased use. Water use data, both temporal and spatial, are needed by the State of Mississippi to provide for intelligent, long-term management of the resources; one table gives data on the principal categories of water use, sources, and use by county. (Lantz-PTT)

  5. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.

  6. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies’ metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  7. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.
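
    The XSLT translations themselves are not shown here, so this is only a toy crosswalk with assumed field names, illustrating the pattern of keeping one ISO-style source record and deriving simpler DIF- or Dublin-Core-like views from it.

    # One "source of truth" record with simplified, assumed ISO-19115-style field names.
    iso_record = {
        "identificationInfo.citation.title": "Global Surface Temperature Anomalies",
        "identificationInfo.abstract": "Monthly gridded temperature anomalies.",
        "identificationInfo.pointOfContact": "NCDC",
        "identificationInfo.extent.temporal.begin": "1880-01",
    }

    # Crosswalks: target field -> source field (a stand-in for the XSLT mappings).
    crosswalks = {
        "DIF": {"Entry_Title": "identificationInfo.citation.title",
                "Summary": "identificationInfo.abstract",
                "Data_Center": "identificationInfo.pointOfContact"},
        "DublinCore": {"dc:title": "identificationInfo.citation.title",
                       "dc:description": "identificationInfo.abstract",
                       "dc:publisher": "identificationInfo.pointOfContact"},
    }

    def translate(record, target):
        return {dst: record[src] for dst, src in crosswalks[target].items()}

    print(translate(iso_record, "DIF"))
    print(translate(iso_record, "DublinCore"))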

  8. The Use of Metadata Visualisation to Assist Information Retrieval

    DTIC Science & Technology

    2007-10-01

    album title, the track length and the genre of music. Again, any of these pieces of information can be used to quickly search and locate specific...that person. Music files also have metadata tags, in a format called ID3. This usually contains information such as the artist, the song title, the...tracks, to provide more information about the entire music collection, or to find similar or diverse tracks within the collection. Metadata is

  9. Vegetable soybean tolerance to pyroxasulfone

    USDA-ARS?s Scientific Manuscript database

    If registered for use on vegetable soybean, pyroxasulfone would fill an important gap in weed management systems in the crop. In order to determine the potential crop injury risk of pyroxasulfone on vegetable soybean, the objective of this work was to quantify vegetable soybean tolerance to pyroxasu...

  10. Archive of digital Boomer and Chirp seismic reflection data collected during USGS Cruises 01RCE05 and 02RCE01 in the Lower Atchafalaya River, Mississippi River Delta, and offshore southeastern Louisiana, October 23-30, 2001, and August 18-19, 2002

    USGS Publications Warehouse

    Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Ferina, Nicholas F.; Wiese, Dana S.

    2004-01-01

    In October of 2001 and August of 2002, the U.S. Geological Survey conducted geophysical surveys of the Lower Atchafalaya River, the Mississippi River Delta, Barataria Bay, and the Gulf of Mexico south of East Timbalier Island, Louisiana. This report serves as an archive of unprocessed digital marine seismic reflection data, trackline maps, navigation files, observers' logbooks, GIS information, and formal FGDC metadata. In addition, a filtered and gained GIF image of each seismic profile is provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Examples of SU processing scripts and in-house (USGS) software for viewing SEG-Y files (Zihlman, 1992) are also provided. Processed profile images, trackline maps, navigation files, and formal metadata may be viewed with a web browser. Scanned handwritten logbooks and Field Activity Collection System (FACS) logs may be viewed with Adobe Reader.

  11. Establishing semantic interoperability of biomedical metadata registries using extended semantic relationships.

    PubMed

    Park, Yu Rang; Yoon, Young Jo; Kim, Hye Hyeon; Kim, Ju Han

    2013-01-01

    Achieving semantic interoperability is critical for biomedical data sharing between individuals, organizations and systems. The ISO/IEC 11179 MetaData Registry (MDR) standard has been recognized as one of the solutions for this purpose. The standard model, however, is limited: representing concepts that consist of two or more values, for instance blood pressure with its systolic and diastolic values, is not allowed. We addressed the structural limitations of ISO/IEC 11179 with an integrated metadata object model in our previous research. In the present study, we introduce semantic extensions to the model by defining three new types of semantic relationships: dependency, composite and variable relationships. To evaluate our extensions in a real-world setting, we measured the efficiency of metadata reduction by means of mapping to existing metadata. We extracted metadata from the College of American Pathologists Cancer Protocols and then evaluated our extensions. With no semantic loss, one third of the extracted metadata could be successfully eliminated, suggesting a better strategy for implementing clinical MDRs with improved efficiency and utility.
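
    The paper's extended model is not reproduced in the abstract; the sketch below assumes a simple object layout just to show how a "composite" relationship could let one registered concept (blood pressure) group two component data elements, which a flat ISO/IEC 11179 data element cannot express.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataElement:
        name: str
        datatype: str
        unit: str = ""

    @dataclass
    class CompositeDataElement:
        # Assumed illustration of the "composite" semantic relationship:
        # one concept grouping several component data elements.
        name: str
        components: List[DataElement] = field(default_factory=list)

    blood_pressure = CompositeDataElement(
        name="Blood pressure",
        components=[
            DataElement("Systolic blood pressure", "integer", "mmHg"),
            DataElement("Diastolic blood pressure", "integer", "mmHg"),
        ],
    )

    print(blood_pressure.name, "->", [c.name for c in blood_pressure.components])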

  12. NASA Space Day in Mississippi - House of Representatives

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Astronaut Michael Foale (center) and Stennis Space Center officials met with Mississippi House of Representatives Gulf Coast delegation, including Speaker William 'Billy' McCoy (far right), during NASA Space Day in Mississippi on January 30.

  13. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher-resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault-tolerance and dynamic integration for imaging-based clinical trials.

  14. Recipes for Semantic Web Dog Food — The ESWC and ISWC Metadata Projects

    NASA Astrophysics Data System (ADS)

    Möller, Knud; Heath, Tom; Handschuh, Siegfried; Domingue, John

    Semantic Web conferences such as ESWC and ISWC offer prime opportunities to test and showcase semantic technologies. Conference metadata about people, papers and talks is diverse in nature and neither so small as to be uninteresting nor so large as to be unmanageable. Many metadata-related challenges that may arise in the Semantic Web at large are also present here. Metadata must be generated from sources which are often unstructured and hard to process, and may originate from many different players; therefore, suitable workflows must be established. Moreover, the generated metadata must use appropriate formats and vocabularies, and be served in a way that is consistent with the principles of linked data. This paper reports on the metadata efforts from ESWC and ISWC, identifies specific issues and barriers encountered during the projects, and discusses how these were approached. Recommendations are made as to how these may be addressed in the future, and we discuss how these solutions may generalize to metadata production for the Semantic Web at large.

  15. Metadata to Describe Genomic Information.

    PubMed

    Delgado, Jaime; Naro, Daniel; Llorente, Silvia; Gelpí, Josep Lluís; Royo, Romina

    2018-01-01

    Interoperable metadata is key for the management of genomic information. We propose a flexible approach that we contribute to the standardization by ISO/IEC of a new format for efficient and secure compressed storage and transmission of genomic information.

  16. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  17. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  18. Soybean Breeding in the US

    USDA-ARS?s Scientific Manuscript database

    Soybean provides approximately 71% of the world’s protein meal and about 29% of the world’s vegetable oil. The U.S., Brazil, and Argentina supply approximately 80% of the world’s soybean production, accounting for approximately 88% of world soybean exports. In the U.S., approximately 30 million me...

  19. Sensor metadata blueprints and computer-aided editing for disciplined SensorML

    NASA Astrophysics Data System (ADS)

    Tagliolato, Paolo; Oggioni, Alessandro; Fugazza, Cristiano; Pepe, Monica; Carrara, Paola

    2016-04-01

    The need for continuous, accurate, and comprehensive environmental knowledge has led to an increase in sensor observation systems and networks. The Sensor Web Enablement (SWE) initiative has been promoted by the Open Geospatial Consortium (OGC) to foster interoperability among sensor systems. The provision of metadata according to the prescribed SensorML schema is a key component for achieving this; nevertheless, the availability of correct and exhaustive metadata cannot be taken for granted. On the one hand, it is awkward for users to provide sensor metadata because of the lack of user-oriented, dedicated tools. On the other, the specification of invariant information for a given sensor category or model (e.g., observed properties and units of measurement, manufacturer information, etc.) can be labor- and time-consuming. Moreover, the provision of these details is error-prone and subjective, i.e., it may differ greatly across distinct descriptions of the same system. We provide a user-friendly, template-driven metadata authoring tool composed of a backend web service and an HTML5/JavaScript client. This results in a form-based user interface that conceals the high complexity of the underlying format. The tool also allows for plugging in external data sources that provide authoritative definitions for the aforementioned invariant information. Leveraging these functionalities, we compiled a set of SensorML profiles, that is, sensor metadata blueprints allowing end users to focus only on the metadata items that relate to their specific deployment. The natural extension of this scenario is the involvement of end users and sensor manufacturers in the crowd-sourced evolution of this collection of prototypes. We describe the components and workflow of our framework for computer-aided management of sensor metadata.
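
    The template idea can be pictured with a minimal Python sketch: invariant, model-level fields come from a blueprint and the user supplies only deployment-specific values. The element names and values below are simplified placeholders and do not follow the actual SensorML schema.

        import xml.etree.ElementTree as ET

        # Blueprint: invariant information for a sensor model (illustrative values).
        BLUEPRINT = {
            "manufacturer": "ExampleCorp",          # hypothetical manufacturer
            "observed_property": "sea_water_temperature",
            "unit": "degC",
        }

        def build_sensor_description(serial_number: str, deployment_site: str) -> str:
            """Merge deployment-specific fields into the model blueprint and emit
            a simplified, SensorML-like XML snippet (not schema-valid SensorML)."""
            root = ET.Element("SensorDescription")
            fields = {**BLUEPRINT,
                      "serial_number": serial_number,
                      "deployment_site": deployment_site}
            for key, value in fields.items():
                ET.SubElement(root, key).text = value
            return ET.tostring(root, encoding="unicode")

        print(build_sensor_description("SN-0042", "Lake Maggiore buoy"))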

  20. Quality Assurance for Digital Learning Object Repositories: Issues for the Metadata Creation Process

    ERIC Educational Resources Information Center

    Currier, Sarah; Barton, Jane; O'Beirne, Ronan; Ryan, Ben

    2004-01-01

    Metadata enables users to find the resources they require, therefore it is an important component of any digital learning object repository. Much work has already been done within the learning technology community to assure metadata quality, focused on the development of metadata standards, specifications and vocabularies and their implementation…

  1. Glyphosate-tolerant soybeans remain compositionally equivalent to conventional soybeans (Glycine max L.) during three years of field testing.

    PubMed

    McCann, Melinda C; Liu, Keshun; Trujillo, William A; Dobert, Raymond C

    2005-06-29

    Previous studies have shown that the composition of glyphosate-tolerant soybeans (GTS) and selected processed fractions was substantially equivalent to that of conventional soybeans over a wide range of analytes. This study was designed to determine if the composition of GTS remains substantially equivalent to conventional soybeans over the course of several years and when introduced into multiple genetic backgrounds. Soybean seed samples of both GTS and conventional varieties were harvested during 2000, 2001, and 2002 and analyzed for the levels of proximates, lectin, trypsin inhibitor, and isoflavones. The measured analytes are representative of the basic nutritional and biologically active components in soybeans. Results show a similar range of natural variability for the GTS soybeans as well as conventional soybeans. It was concluded that the composition of commercial GTS over the three years of breeding into multiple varieties remains equivalent to that of conventional soybeans.

  2. Noteworthy collections from the Yazoo-Mississippi Delta Region of Mississippi

    Treesearch

    Daniel A. Skojac, Jr.; Charles T. Bryson; Charles H. Walker II

    2007-01-01

    The flora of the Yazoo-Mississippi Delta Region is the least represented in the checklist of Mississippi plants currently being compiled for the state. This paper reports 20 noteworthy collections from the region and discusses their distributions within the state. Typha angustifolia is reported new to Mississippi and Bowlesia incana,...

  3. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    ERIC Educational Resources Information Center

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  4. A Standard Greenhouse Method for Assessing Soybean Cyst Nematode Resistance in Soybean: SCE08 (Standardized Cyst Evaluation 2008)

    USDA-ARS?s Scientific Manuscript database

    The soybean cyst nematode (SCN), Heterodera glycines Ichinohe, is distributed throughout the soybean (Glycine max [L.] Merr.) production areas of the United States and Canada. SCN remains the most economically important pathogen of soybean in North America; the most recent estimate of soybean yield...

  5. Mississippi's New Forestry Best Management Practices Video

    Treesearch

    Andrew James Londo; John Benkert Auel

    2004-01-01

    Mississippi's latest version of forestry best management practices (BMPs) for water quality was released in 2000. In conjunction with this release, funds were obtained through a Section 319H grant from the Mississippi Department of Environmental Quality to create a new BMPs video. Additional assistance was obtained from Georgia Pacific, PlumCreek, Weyerhaeuser,...

  6. Digitized analog boomer seismic-reflection data collected during U.S. Geological Survey cruises Erda 90-1_HC, Erda 90-1_PBP, and Erda 91-3 in Mississippi Sound, June 1990 and September 1991

    USGS Publications Warehouse

    Bosse, Stephen T.; Flocks, James G.; Forde, Arnell S.

    2017-04-21

    The U.S. Geological Survey (USGS) Coastal and Marine Geology Program has actively collected geophysical and sedimentological data in the northern Gulf of Mexico for several decades, including shallow subsurface data in the form of high-resolution seismic-reflection profiles (HRSP). Prior to the mid-1990s most HRSP data were collected in analog format as paper rolls of continuous profiles up to 25 meters long. A large portion of this data resides in a single repository with minimal metadata. As part of the National Geological and Geophysical Data Preservation Program, scientists at the USGS St. Petersburg Coastal and Marine Science Center are converting the analog paper records to digital format using a large-format continuous scanner. This report, along with the accompanying USGS data release (Bosse and others, 2017), serves as an archive of seismic profiles with headers, converted Society of Exploration Geophysicists Y format (SEG-Y) files, navigation data, and geographic information system data files for digitized boomer seismic-reflection data collected from the Research Vessel (R/V) Erda during two cruises in 1990 and 1991. The Erda 90-1 geophysical cruise was conducted in two legs. The first leg included seismic data collected from the Hancock County region of the Mississippi Sound (Erda 90-1_HC) from June 4 to June 6, 1990. The second leg included seismic data collected from the Petit Bois Pass area of Mississippi Sound (Erda 90-1_PBP) from June 8 to June 9, 1990. The Erda 91-3 cruise occurred between September 12 and September 23, 1991, and surveyed the Mississippi Sound region just west of Horn Island, Mississippi.

  7. Compositional differences in soybeans on the market: glyphosate accumulates in Roundup Ready GM soybeans.

    PubMed

    Bøhn, T; Cuhra, M; Traavik, T; Sanden, M; Fagan, J; Primicerio, R

    2014-06-15

    This article describes the nutrient and elemental composition, including residues of herbicides and pesticides, of 31 soybean batches from Iowa, USA. The soy samples were grouped into three different categories: (i) genetically modified, glyphosate-tolerant soy (GM-soy); (ii) unmodified soy cultivated using a conventional "chemical" cultivation regime; and (iii) unmodified soy cultivated using an organic cultivation regime. Organic soybeans showed the healthiest nutritional profile with more sugars, such as glucose, fructose, sucrose and maltose, significantly more total protein, zinc and less fibre than both conventional and GM-soy. Organic soybeans also contained less total saturated fat and total omega-6 fatty acids than both conventional and GM-soy. GM-soy contained high residues of glyphosate and AMPA (mean 3.3 and 5.7 mg/kg, respectively). Conventional and organic soybean batches contained none of these agrochemicals. Using 35 different nutritional and elemental variables to characterise each soy sample, we were able to discriminate GM, conventional and organic soybeans without exception, demonstrating "substantial non-equivalence" in compositional characteristics for 'ready-to-market' soybeans.

  8. Installation Restoration Program. Preliminary Assessment: 186th Tactical Reconnaissance Group, Mississippi Air National Guard, Meridian Airport, Key Field, Meridian, Mississippi

    DTIC Science & Technology

    1988-10-01

    previous directives and memoranda. Although the DOD IRP and the USEPA Superfund program were essentially the same, differences in the definition of progr... Midway and Wilcox in Mississippi, Bulletin 102, Mississippi Geologic Research Papers, Mississippi Geological, Economic and Topographic Survey, 1964.

  9. A statistical metadata model for clinical trials' data management.

    PubMed

    Vardaki, Maria; Papageorgiou, Haralambos; Pentaris, Fragkiskos

    2009-08-01

    We introduce a statistical, process-oriented metadata model to describe the process of medical research data collection, management, results analysis and dissemination. Our approach explicitly provides a structure for the pieces of information used in Clinical Study Data Management Systems, enabling a more active role for any associated metadata. Using the object-oriented paradigm, we describe the classes of our model that participate during the design of a clinical trial and the subsequent collection and management of the relevant data. The advantage of our approach is that we focus on presenting the structural interrelation of these classes when used during dataset manipulation, by proposing certain transformations that model the simultaneous processing of both data and metadata. Our solution reduces the possibility of human errors and allows for the tracking of all changes made during the dataset lifecycle. The explicit modeling of processing steps improves data quality and assists in the problem of handling data collected in different clinical trials. The case study illustrates the applicability of the proposed framework by demonstrating conceptually the simultaneous handling of datasets collected during two randomized clinical studies. Finally, we provide the main considerations for implementing the proposed framework into a modern metadata-enabled information system.
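
    A minimal Python sketch of the underlying idea is given below: a transformation updates the data, the metadata, and an audit trail in one step, so that the two cannot drift apart. The class and field names are illustrative, not the classes defined in the paper.

        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Dataset:
            values: List[float]
            metadata: dict
            history: List[str] = field(default_factory=list)

        def transform(ds: Dataset, func: Callable[[float], float],
                      description: str, unit: str) -> Dataset:
            """Apply a transformation to the data and update the metadata and
            audit trail in the same step."""
            return Dataset(values=[func(v) for v in ds.values],
                           metadata={**ds.metadata, "unit": unit},
                           history=ds.history + [description])

        weights = Dataset([72.5, 81.0], {"variable": "body_weight", "unit": "kg"})
        weights_lb = transform(weights, lambda kg: kg * 2.20462,
                               "converted kg to lb", "lb")
        print(weights_lb.metadata, weights_lb.history)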

  10. Interaction of soybean and Phakopsora pachyrhizi, the cause of soybean rust

    USDA-ARS?s Scientific Manuscript database

    Soybean rust, caused by Phakopsora pachyrhizi H. Sydow & Sydow, is a major disease limiting soybean [Glycine max (L.) Merr.] production in many areas of the world. Yield losses of up to 80% were reported in experimental plots in Taiwan. Although the disease is not always yield limiting, it has the p...

  11. Mississippi, 2010 forest inventory and analysis factsheet

    Treesearch

    S.N. Oswalt; J. Bentley

    2011-01-01

    This science update provides an overview of forest resources in Mississippi based on an inventory conducted by the U.S. Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station in cooperation with the Mississippi Forestry Commission. This update compares data from the periodic 2006 survey (field dates 2005...

  12. Mississippi, 2012 forest inventory and analysis factsheet

    Treesearch

    Sonja N. Oswalt

    2013-01-01

    This science update provides an overview of forest resources in Mississippi based on an inventory conducted by the U.S. Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station in cooperation with the Mississippi Forestry Commission. Data estimates are based on field data collected using the FIA annualized...

  13. Mississippi, 2011 forest inventory and analysis factsheet

    Treesearch

    Sonja N. Oswalt

    2013-01-01

    This science update provides an overview of forest resources in Mississippi based on an inventory conducted by the U.S. Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) program at the Southern Research Station in cooperation with the Mississippi Forestry Commission. Data estimates are based on field data collected using the FIA annualized...

  14. Mississippi Department of Transportation research peer exchange : 2015.

    DOT National Transportation Integrated Search

    2015-11-19

    From October 20th to 22nd, 2015, the Mississippi Department of Transportation, with the assistance of The University of Southern Mississippi, hosted a peer exchange focusing on best practices. The goal of the peer exchange was to develop actionable r...

  15. Improved Soybean Oil for Biodiesel Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Clemente; Jon Van Gerpen

    2007-11-30

    The goal of this program was to generate information on the utility of soybean germplasm that produces oil high in oleic acid and low in saturated fatty acids for its use as a biodiesel. Moreover, data were ascertained on the quality of the derived soybean meal (protein component) and the agronomic performance of this novel soybean germplasm. Gathering data on these latter two areas is critical: the soybean meal (protein) component is a major driver for commodity soybean, which is utilized as a feed supplement in cattle, swine, poultry and, more recently, aquaculture production. Hence, it is imperative that the resultant modulation in the fatty acid profile of the oil does not compromise the quality of the derived meal, for if it does, the net value of the novel soybean will be drastically reduced. Similarly, if the improved oil trait negatively impacts the agronomics (i.e., yield) of the soybean, this in turn will reduce the value of the trait. Over the course of this program, oil was extruded from approximately 350 bushels of soybean designated 335-13, which produces oil high in oleic acid (>85%) and low in saturated fatty acid (<6%). As predicted, improvements in cold flow parameters were observed as compared to standard commodity soybean oil. Moreover, engine tests revealed that biodiesel derived from this novel oil mitigated NOx emissions. Seed quality of this soybean was not compromised with respect to total oil and protein, nor was the amino acid profile of the derived meal as compared to the respective control soybean cultivar with a conventional fatty acid profile. Importantly, the high oleic acid/low saturated fatty acids oil trait was not impacted by environment and yield was not compromised. Improving the genetic potential of soybean by exploiting the tools of biotechnology to improve upon the lipid quality of the seed for use in industrial applications such as biodiesel will aid in expanding the market for the crop. This in turn

  16. To Teach or Not to Teach: The Ethics of Metadata

    ERIC Educational Resources Information Center

    Barnes, Cynthia; Cavaliere, Frank

    2009-01-01

    Metadata is information about computer-generated documents that is often inadvertently transmitted to others. The problems associated with metadata have become more acute over time as word processing and other popular programs have become more receptive to the concept of collaboration. As more people become involved in the preparation of…

  17. Using URIs to effectively transmit sensor data and metadata

    NASA Astrophysics Data System (ADS)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise; Gardner, Thomas

    2017-04-01

    Autonomous ocean observation is massively increasing the number of sensors in the ocean. Accordingly, the continuing increase in the datasets produced makes selecting sensors that are fit for purpose a growing challenge. Decision making on selecting quality sensor data is based on the sensor's metadata, i.e. manufacturer specifications, history of calibrations, etc. The Open Geospatial Consortium (OGC) has developed the Sensor Web Enablement (SWE) standards to facilitate integration and interoperability of sensor data and metadata. The World Wide Web Consortium (W3C) Semantic Web technologies enable machine comprehensibility, promoting sophisticated linking and processing of data published on the web. Linking a sensor's data and metadata according to the above-mentioned standards can present practical difficulties because of internal hardware bandwidth restrictions and a requirement to constrain data transmission costs. Our approach addresses these practical difficulties by uniquely identifying sensor and platform models and instances through URIs, which resolve via content negotiation to either OGC's sensor meta language, SensorML, or W3C's Linked Data. Data transmitted by a sensor incorporate the sensor's unique URI to refer to its metadata. Sensor and platform model URIs and descriptions are created and hosted by the British Oceanographic Data Centre (BODC) linked systems service. The sensor owner creates the sensor and platform instance URIs prior to and during sensor deployment, through an updatable web form, the Sensor Instance Form (SIF). SIF enables model and instance URI association as well as platform and sensor linking. The use of URIs, which are dynamically generated through the SIF, offers both practical and economic benefits to the implementation of SWE and Linked Data standards in near-real-time systems. Data can be linked to metadata dynamically in situ while saving on the costs associated with the transmission of long metadata descriptions. The transmission

  18. State Education Finance and Governance Profile: Mississippi

    ERIC Educational Resources Information Center

    Poulin, Nicole S.

    2010-01-01

    This article presents the state education finance and governance profile of Mississippi. Mississippians compose 0.95% of the total U.S. population, and the average density of the state is 60.7 people per square mile. In terms of education finance, the property tax is the sole form of local revenue for public education in Mississippi. In 1997, the…

  19. The Mississippi Chinese: Between Black and White.

    ERIC Educational Resources Information Center

    Loewen, James W.

    Society in the Delta region of Mississippi is still rigidly segregated. A vast social and economic gulf yawns between the dominant white and subordinate black. Yet one group in Mississippi, a "third race," the Chinese, has managed to leap that chasm. This book focuses on the causes of their changes in status, the processes by which it…

  20. Bottomland oak afforestation in the lower Mississippi

    Treesearch

    Emile S. Gardiner; Brian Roy Lockhard

    2007-01-01

    The 11 million hectare Lower Mississippi Alluvial Valley (LMAV), which is the geologic floodplain of the lower Mississippi River, is a prominent physiographic region in the southern United States. Seven states (Arkansas, Louisiana, Mississippi, Missouri, Kentucky, Illinois, and Tennessee) border the lower stretch of the river, and have a portion of their land...

  1. ASTER Images Flooding from Mississippi River Levee Breach

    NASA Image and Video Library

    2011-05-10

    NASA's Terra spacecraft shows the resultant flooding of farmland west of the Mississippi River, 20 miles south of the levee breach. The U.S. Army Corps of Engineers detonated explosives at the Birds Point levee near Wyatt, Missouri, on May 2, 2011.

  2. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    NASA Astrophysics Data System (ADS)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

    The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports multiple research communities and individuals in managing the rising tide of scientific data through advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure, which implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First, the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by • iterative and intense exchange with the community representatives, • usage of controlled vocabularies and community-specific ontologies and • formal and semantic validation. Finally, the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing, which enables users to query and search in the catalogue. The homogenization of the community-specific data models and vocabularies enables not
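
    A compact Python sketch of such a three-step ingestion workflow is shown below, using a hypothetical mapping function and CKAN's standard action API for the upload. The target field names and the endpoint are placeholders, not the actual B2FIND schema or service.

        import requests

        def map_record(raw: dict) -> dict:
            """Map a community-specific record onto a unified key-value schema.
            The target fields mimic the idea of a joint catalogue schema; they
            are illustrative, not the actual B2FIND schema."""
            return {
                "name": raw["id"].lower().replace("/", "-"),
                "title": raw.get("dc:title", "untitled"),
                "notes": raw.get("dc:description", ""),
                "tags": [{"name": kw} for kw in raw.get("keywords", [])],
            }

        def upload_to_ckan(dataset: dict, ckan_url: str, api_key: str) -> None:
            """Create the dataset in a CKAN catalogue via the standard action API."""
            resp = requests.post(f"{ckan_url}/api/3/action/package_create",
                                 json=dataset,
                                 headers={"Authorization": api_key},
                                 timeout=30)
            resp.raise_for_status()

        # Example record; the endpoint and API key below are placeholders.
        harvested = {"id": "community/abc123",
                     "dc:title": "Ocean temperature series",
                     "keywords": ["ocean", "temperature"]}
        # upload_to_ckan(map_record(harvested), "https://catalogue.example.org", "API-KEY")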

  3. Mississippi Survey Completed

    Treesearch

    R. L. Johnson

    1958-01-01

    Mississippi forests are now growing more pine but less hardwood than is being cut, according to a new survey of the State's forest resources. But because many timber stands are not fully stocked, growth is only about half the potential.

  4. Inferring Metadata for a Semantic Web Peer-to-Peer Environment

    ERIC Educational Resources Information Center

    Brase, Jan; Painter, Mark

    2004-01-01

    Learning Objects Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allow us to derive additional metadata from existing metadata. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements,…

  5. Identification of Proteins Differentially Regulated in Response to Soybean Aphid Infestation of Soybean Near Isogenic Lines differing in Aphid Resistance

    USDA-ARS?s Scientific Manuscript database

    The soybean aphid, a plant sap sucking insect, has become an important soybean pest in the USA and infestation of soybean by this insect can lead to significant yield losses. The Rag2 gene of soybean, providing resistance to soybean aphid biotypes I (IL) and II (OH), was identified by researchers in...

  6. 7 CFR 1220.313 - Qualified State Soybean Boards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Qualified State Soybean Boards. 1220.313 Section 1220... SERVICE (MARKETING AGREEMENTS AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE SOYBEAN... Soybean Boards. The following State soybean promotion organizations shall be Qualified State Soybean...

  7. Infestation ratings database for soybean aphid on early-maturity wild soybean lines

    USDA-ARS?s Scientific Manuscript database

    Soybean aphid (Aphis glycines Matsumura; SA) is a major invasive pest of soybean [Glycine max (L.) Merr.] in northern production regions of North America. Although insecticides are currently the main method for controlling this pest, SA-resistant cultivars are being developed to sustainably manage ...

  8. Landscape correlates along mourning dove call-count routes in Mississippi

    USGS Publications Warehouse

    Elmore, R.D.; Vilella, F.J.; Gerard, P.D.

    2007-01-01

    Mourning dove (Zenaida macroura) call-count surveys in Mississippi, USA, suggest declining populations. We used available mourning dove call-count data to evaluate long-term mourning dove habitat relationships. Dove routes were located in the Mississippi Alluvial Valley, Deep Loess Province, Mid Coastal Plain, and Hilly Coastal Plain physiographic provinces of Mississippi. We also included routes in the Blackbelt Prairie region of Mississippi and Alabama, USA. We characterized landscape structure and composition within 1.64-km buffers around 10 selected mourning dove call-count routes during 3 time periods. Habitat classes included agriculture, forest, urban, regeneration stands, wetland, and woodlot. We used Akaike's Information Criterion to select the best candidate model. We selected a model containing percent agriculture and edge density that contained approximately 40% of the total variability in the data set. Percent agriculture was positively correlated with relative dove abundance. Interestingly, we found a negative relationship between edge density and dove abundance. Researchers should conduct future research on dove nesting patterns in Mississippi and threshold levels of edge necessary to maximize dove density. During the last 20 years, Mississippi lost more than 800,000 ha of cropland while forest cover represented largely by pine (Pinus taeda) plantations increased by more than 364,000 ha. Our results suggest observed localized declines in mourning dove abundance in Mississippi may be related to the documented conversion of agricultural lands to pine plantations.

  9. Soybean Knowledge Base (SoyKB): a Web Resource for Soybean Translational Genomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Trupti; Patil, Kapil; Fitzpatrick, Michael R.

    2012-01-17

    Background: Soybean Knowledge Base (SoyKB) is a comprehensive all-inclusive web resource for soybean translational genomics. SoyKB is designed to handle the management and integration of soybean genomics, transcriptomics, proteomics and metabolomics data along with annotation of gene function and biological pathway. It contains information on four entities, namely genes, microRNAs, metabolites and single nucleotide polymorphisms (SNPs). Methods: SoyKB has many useful tools such as Affymetrix probe ID search, gene family search, multiple gene/metabolite search supporting co-expression analysis, and protein 3D structure viewer as well as download and upload capacity for experimental data and annotations. It has four tiers of registration, which control different levels of access to public and private data. It allows users of certain levels to share their expertise by adding comments to the data. It has a user-friendly web interface together with genome browser and pathway viewer, which display data in an intuitive manner to the soybean researchers, producers and consumers. Conclusions: SoyKB addresses the increasing need of the soybean research community to have a one-stop-shop functional and translational omics web resource for information retrieval and analysis in a user-friendly way. SoyKB can be publicly accessed at http://soykb.org/.

  10. Ontology-Based Search of Genomic Metadata.

    PubMed

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet search for relevant datasets for knowledge discovery is only weakly supported: the metadata describing ENCODE datasets are quite simple and incomplete, and are not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of the found datasets to the biologists' queries.

  11. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software's reusability across the projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts, but each project will be driven by a set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform
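
    The configuration-driven harvester pattern described above might look roughly like the following Python sketch, where each project supplies its own configuration file to a shared harvesting routine. The configuration layout and endpoint are invented for illustration and are not Mercury's actual formats.

        import json
        from urllib.request import urlopen

        def harvest(config_path: str) -> list:
            """Run the shared harvester with a project-specific configuration file."""
            with open(config_path) as fh:
                config = json.load(fh)
            records = []
            for endpoint in config["metadata_endpoints"]:
                # e.g. an FGDC or Dublin-Core feed exposed by a project server
                with urlopen(endpoint, timeout=30) as resp:
                    records.append(resp.read())
            return records

        # Each funded project supplies its own config, e.g. daac_config.json:
        # {"project": "ORNL-DAAC", "metadata_endpoints": ["https://example.org/fgdc.xml"]}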

  12. Genetic architecture of wild soybean (Glycine soja) response to soybean cyst nematode (Heterodera glycines).

    PubMed

    Zhang, Hengyou; Song, Qijian; Griffin, Joshua D; Song, Bao-Hua

    2017-12-01

    The soybean cyst nematode (SCN) is one of the most destructive pathogens of soybean plants worldwide. Host-plant resistance is an environmentally friendly method to mitigate SCN damage. To date, the resistant soybean cultivars harbor limited genetic variation, and some are losing resistance. Thus, a better understanding of the genetic mechanisms of the SCN resistance, as well as developing diverse resistant soybean cultivars, is urgently needed. In this study, a genome-wide association study (GWAS) was conducted using 1032 wild soybean (Glycine soja) accessions with over 42,000 single-nucleotide polymorphisms (SNPs) to understand the genetic architecture of G. soja resistance to SCN race 1. Ten SNPs were significantly associated with the response to race 1. Three SNPs on chromosome 18 were localized within the previously identified quantitative trait loci (QTLs), and two of which were localized within a strong linkage disequilibrium block encompassing a nucleotide-binding (NB)-ARC disease resistance gene (Glyma.18G102600). Genes encoding methyltransferases, the calcium-dependent signaling protein, the leucine-rich repeat kinase family protein, and the NB-ARC disease resistance protein, were identified as promising candidate genes. The identified SNPs and candidate genes can not only shed light on the molecular mechanisms underlying SCN resistance, but also can facilitate soybean improvement employing wild genetic resources.

  13. Serving Fisheries and Ocean Metadata to Communities Around the World

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie

    2006-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 services descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage. The directory also offers data holders a means to post and search their data through customized portals, i.e. online customized subset metadata directories. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1994. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as a coordinating node of the International Directory Network (IDN), has been active at the international, regional and national level for many years through its involvement with the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and partnerships (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP), sharing experience and knowledge related to metadata and/or data management and interoperability.

  14. Preceding crop affects soybean aphid abundance and predator-prey dynamics in soybean

    USDA-ARS?s Scientific Manuscript database

    Crop rotations alter the soil environment and physiology of the subsequent crop in ways that may affect herbivore abundance. Soybean aphids are a consistent pest of soybean throughout North America, but little work has focused on how preceding crops may affect aphid populations. In a replicated expe...

  15. Using RDF and Git to Realize a Collaborative Metadata Repository.

    PubMed

    Stöhr, Mark R; Majeed, Raphael W; Günther, Andreas

    2018-01-01

    The German Center for Lung Research (DZL) is a research network with the aim of researching respiratory diseases. The participating study sites' register data differs in terms of software and coding system as well as data field coverage. To perform meaningful consortium-wide queries through one single interface, a uniform conceptual structure is required covering the DZL common data elements. No single existing terminology includes all our concepts. Potential candidates such as LOINC and SNOMED only cover specific subject areas or are not granular enough for our needs. To achieve a broadly accepted and complete ontology, we developed a platform for collaborative metadata management. The DZL data management group formulated detailed requirements regarding the metadata repository and the user interfaces for metadata editing. Our solution builds upon existing standard technologies allowing us to meet those requirements. Its key parts are RDF and the distributed version control system Git. We developed a software system to publish updated metadata automatically and immediately after performing validation tests for completeness and consistency.
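
    A minimal Python sketch of the RDF-plus-Git pattern is shown below: a data element is added to an RDF graph with rdflib, checked for completeness, serialized to Turtle, and committed with Git. The namespace, element name, and validation rule are illustrative assumptions, not the DZL repository's actual content or checks.

        import subprocess
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDFS

        DZL = Namespace("https://example.org/dzl/")   # placeholder namespace

        def add_element(graph: Graph, element_id: str, label: str) -> None:
            graph.add((DZL[element_id], RDFS.label, Literal(label, lang="en")))

        def validate(graph: Graph) -> None:
            """Minimal completeness check: every subject must carry a label."""
            for subject in set(graph.subjects()):
                if not list(graph.objects(subject, RDFS.label)):
                    raise ValueError(f"missing label for {subject}")

        graph = Graph()
        add_element(graph, "fev1", "Forced expiratory volume in 1 second")
        validate(graph)
        graph.serialize(destination="metadata.ttl", format="turtle")

        # Version and publish the change through Git (assumes a local repository).
        subprocess.run(["git", "add", "metadata.ttl"], check=True)
        subprocess.run(["git", "commit", "-m", "Add FEV1 data element"], check=True)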

  16. OpenFlow arbitrated programmable network channels for managing quantum metadata

    DOE PAGES

    Dasari, Venkat R.; Humble, Travis S.

    2016-10-10

    Quantum networks must classically exchange complex metadata between devices in order to carry out information for protocols such as teleportation, super-dense coding, and quantum key distribution. Demonstrating the integration of these new communication methods with existing network protocols, channels, and data forwarding mechanisms remains an open challenge. Software-defined networking (SDN) offers robust and flexible strategies for managing diverse network devices and uses. We adapt the principles of SDN to the deployment of quantum networks, which are composed from unique devices that operate according to the laws of quantum mechanics. We show how quantum metadata can be managed within a software-defined network using the OpenFlow protocol, and we describe how OpenFlow management of classical optical channels is compatible with emerging quantum communication protocols. We next give an example specification of the metadata needed to manage and control quantum physical layer (QPHY) behavior and we extend the OpenFlow interface to accommodate this quantum metadata. We conclude by discussing near-term experimental efforts that can realize SDN's principles for quantum communication.

  17. OpenFlow arbitrated programmable network channels for managing quantum metadata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasari, Venkat R.; Humble, Travis S.

    Quantum networks must classically exchange complex metadata between devices in order to carry out information for protocols such as teleportation, super-dense coding, and quantum key distribution. Demonstrating the integration of these new communication methods with existing network protocols, channels, and data forwarding mechanisms remains an open challenge. Software-defined networking (SDN) offers robust and flexible strategies for managing diverse network devices and uses. We adapt the principles of SDN to the deployment of quantum networks, which are composed from unique devices that operate according to the laws of quantum mechanics. We show how quantum metadata can be managed within a software-defined network using the OpenFlow protocol, and we describe how OpenFlow management of classical optical channels is compatible with emerging quantum communication protocols. We next give an example specification of the metadata needed to manage and control quantum physical layer (QPHY) behavior and we extend the OpenFlow interface to accommodate this quantum metadata. We conclude by discussing near-term experimental efforts that can realize SDN's principles for quantum communication.

  18. Assessing the Need for Community College Baccalaureate Degree Programs in Mississippi

    ERIC Educational Resources Information Center

    Williams, Johannah Bell

    2010-01-01

    This study involved assessing the professional and personal opinions of Mississippi community college students, faculty and administrators regarding the need for community college baccalaureate degree programs in Mississippi. The goal of this study was to determine if students, faculty and administrators at Mississippi community colleges believed…

  19. Describing environmental public health data: implementing a descriptive metadata standard on the environmental public health tracking network.

    PubMed

    Patridge, Jeff; Namulanda, Gonza

    2008-01-01

    The Environmental Public Health Tracking (EPHT) Network provides an opportunity to bring together diverse environmental and health effects data by integrating local, state, and national databases of environmental hazards, environmental exposures, and health effects. To help users locate data on the EPHT Network, the network will utilize descriptive metadata that provide critical information as to the purpose, location, content, and source of these data. Since 2003, the Centers for Disease Control and Prevention's EPHT Metadata Subgroup has been working to initiate the creation and use of descriptive metadata. Efforts undertaken by the group include the adoption of a metadata standard, creation of an EPHT-specific metadata profile, development of an open-source metadata creation tool, and promotion of the creation of descriptive metadata by changing the perception of metadata in the public health culture.

  20. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract, from the structured elements in the DICOM metadata of these files, the information relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
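
    In the same spirit, though not the authors' Matlab implementation, a short Python sketch using the pydicom library could read exposure-related fields directly from a CT DICOM header; the selection of tags and the file path are illustrative.

        import pydicom

        def extract_exposure_info(path: str) -> dict:
            """Read exposure-related fields from a CT DICOM header (illustrative
            tag selection; a dose structured report would need deeper parsing)."""
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            return {
                "kvp": getattr(ds, "KVP", None),
                "tube_current_mA": getattr(ds, "XRayTubeCurrent", None),
                "exposure_mAs": getattr(ds, "Exposure", None),
                "ctdi_vol_mGy": getattr(ds, "CTDIvol", None),
            }

        # print(extract_exposure_info("ct_slice.dcm"))  # the path is a placeholder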

  1. Soybean Aphid Infestation Induces Changes in Fatty Acid Metabolism in Soybean

    PubMed Central

    Kanobe, Charles; McCarville, Michael T.; O’Neal, Matthew E.; Tylka, Gregory L.; MacIntosh, Gustavo C.

    2015-01-01

    The soybean aphid (Aphis glycines Matsumura) is one of the most important insect pests of soybeans in the North-central region of the US. It has been hypothesized that aphids avoid effective defenses by inhibition of jasmonate-regulated plant responses. Given the role fatty acids play in jasmonate-induced plant defenses, we analyzed the fatty acid profile of soybean leaves and seeds from aphid-infested plants. Aphid infestation reduced levels of polyunsaturated fatty acids in leaves with a concomitant increase in palmitic acid. In seeds, a reduction in polyunsaturated fatty acids was associated with an increase in stearic acid and oleic acid. Soybean plants challenged with the brown stem rot fungus or with soybean cyst nematodes did not present changes in fatty acid levels in leaves or seeds, indicating that the changes induced by aphids are not a general response to pests. One of the polyunsaturated fatty acids, linolenic acid, is the precursor of jasmonate; thus, these changes in fatty acid metabolism may be examples of “metabolic hijacking” by the aphid to avoid the induction of effective defenses. Based on the changes in fatty acid levels observed in seeds and leaves, we hypothesize that aphids potentially induce interference in the fatty acid desaturation pathway, likely reducing FAD2 and FAD6 activity that leads to a reduction in polyunsaturated fatty acids. Our data support the idea that aphids block jasmonate-dependent defenses by reduction of the hormone precursor. PMID:26684003

  2. Discovering Physical Samples Through Identifiers, Metadata, and Brokering

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Hills, D. J.; Jenkyns, R.

    2015-12-01

    Physical samples, particularly in the geosciences, are key to understanding the Earth system, its history, and its evolution. Our record of the Earth as captured by physical samples is difficult to explain and mine for understanding, due to incomplete, disconnected, and evolving metadata content. This is further complicated by differing ways of classifying, cataloguing, publishing, and searching the metadata, especially when specimens do not fit neatly into a single domain—for example, fossils cross disciplinary boundaries (mineral and biological). Sometimes even the fundamental classification systems evolve, such as the geological time scale, triggering daunting processes to update existing specimen databases. Increasingly, we need to consider ways of leveraging permanent, unique identifiers, as well as advancements in metadata publishing that link digital records with physical samples in a robust, adaptive way. An NSF EarthCube Research Coordination Network (RCN) called the Internet of Samples (iSamples) is now working to bridge the metadata schemas for biological and geological domains. We are leveraging the International Geo Sample Number (IGSN) that provides a versatile system of registering physical samples, and working to harmonize this with the DataCite schema for Digital Object Identifiers (DOI). A brokering approach for linking disparate catalogues and classification systems could help scale discovery and access to the many large collections now being managed (sometimes millions of specimens per collection). This presentation is about our community building efforts, research directions, and insights to date.

  3. The Value of Data and Metadata Standardization for Interoperability in Giovanni Or: Why Your Product's Metadata Causes Us Headaches!

    NASA Technical Reports Server (NTRS)

    Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym

    2017-01-01

    Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.
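
    The gap between what CF provides and what a tool needs can be sketched in a few lines of Python with the netCDF4 library: CF attributes such as units come from the variable, while provenance details such as the source product would have to be carried as extra global attributes. The attribute names source_product and product_version below are hypothetical, not Giovanni's internal standard.

        from netCDF4 import Dataset

        def describe(path: str, varname: str) -> dict:
            """Collect CF attributes plus provenance attributes a tool might add
            internally (the provenance names are illustrative assumptions)."""
            with Dataset(path) as nc:
                var = nc.variables[varname]
                return {
                    "units": getattr(var, "units", None),
                    "long_name": getattr(var, "long_name", None),
                    "dimensions": var.dimensions,
                    "source_product": getattr(nc, "source_product", None),
                    "product_version": getattr(nc, "product_version", None),
                }

        # print(describe("temperature.nc", "air_temperature"))  # placeholder file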

  4. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    NASA Astrophysics Data System (ADS)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. For 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, over the years multiple organizations have supported the MMI mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  5. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    NASA Astrophysics Data System (ADS)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata which are critical for data discovery and users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining the OME tool within these centers. The ARM OME is a standards-based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques like Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration using new search software, such as Apache Solr 6.0, for serving up data/metadata to scientific communities. Our presentation will highlight

  6. Pathogen metadata platform: software for accessing and analyzing pathogen strain information.

    PubMed

    Chang, Wenling E; Peterson, Matthew W; Garay, Christopher D; Korves, Tonia

    2016-09-15

    Pathogen metadata includes information about where and when a pathogen was collected and the type of environment it came from. Along with genomic nucleotide sequence data, this metadata is growing rapidly and becoming a valuable resource not only for research but for biosurveillance and public health. However, current freely available tools for analyzing this data are geared towards bioinformaticians and/or do not provide summaries and visualizations needed to readily interpret results. We designed a platform to easily access and summarize data about pathogen samples. The software includes a PostgreSQL database that captures metadata useful for disease outbreak investigations, and scripts for downloading and parsing data from NCBI BioSample and BioProject into the database. The software provides a user interface to query metadata and obtain standardized results in an exportable, tab-delimited format. To visually summarize results, the user interface provides a 2D histogram for user-selected metadata types and mapping of geolocated entries. The software is built on the LabKey data platform, an open-source data management platform, which enables developers to add functionalities. We demonstrate the use of the software in querying for a pathogen serovar and for genome sequence identifiers. This software enables users to create a local database for pathogen metadata, populate it with data from NCBI, easily query the data, and obtain visual summaries. Some of the components, such as the database, are modular and can be incorporated into other data platforms. The source code is freely available for download at https://github.com/wchangmitre/bioattribution .
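
    As a rough sketch of the kind of query the platform supports, the following Python example uses an in-memory SQLite table as a stand-in for the PostgreSQL database and returns a tab-delimited summary for one serovar. The schema, column names, and sample rows are all invented for illustration.

        import sqlite3

        # SQLite stand-in for the platform's PostgreSQL schema; columns and rows
        # below are illustrative dummy data, not real BioSample records.
        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE biosample (
            accession TEXT, organism TEXT, serovar TEXT,
            collection_year INTEGER, country TEXT)""")
        conn.executemany("INSERT INTO biosample VALUES (?, ?, ?, ?, ?)", [
            ("SAMPLE-001", "Salmonella enterica", "Typhimurium", 2014, "USA"),
            ("SAMPLE-002", "Salmonella enterica", "Enteritidis", 2015, "USA"),
            ("SAMPLE-003", "Salmonella enterica", "Typhimurium", 2015, "Mexico"),
        ])

        # Query one serovar and summarize entries by year, mimicking the kind of
        # exportable, tab-delimited result described above.
        rows = conn.execute("""SELECT collection_year, COUNT(*) FROM biosample
                               WHERE serovar = ? GROUP BY collection_year""",
                            ("Typhimurium",)).fetchall()
        for year, count in rows:
            print(f"{year}\t{count}")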

  7. Concentrations and transport of suspended sediment, nutrients, and pesticides in the lower Mississippi-Atchafalaya River subbasin during the 2011 Mississippi River flood, April through July

    USGS Publications Warehouse

    Welch, Heather L.; Coupe, Richard H.; Aulenbach, Brent T.

    2014-01-01

    High streamflow associated with the April–July 2011 Mississippi River flood forced the simultaneous opening of the three major flood-control structures in the lower Mississippi-Atchafalaya River subbasin for the first time in history in order to manage the amount of water moving through the system. The U.S. Geological Survey (USGS) collected samples for analysis of field properties, suspended-sediment concentration, particle-size, total nitrogen, nitrate plus nitrite, total phosphorus, orthophosphate, and up to 136 pesticides at 11 water-quality stations and 2 flood-control structures in the lower Mississippi-Atchafalaya River subbasin from just above the confluence of the upper Mississippi and Ohio Rivers downstream from April through July 2011. Monthly fluxes of suspended sediment, suspended sand, total nitrogen, nitrate plus nitrite, total phosphorus, orthophosphate, atrazine, simazine, metolachlor, and acetochlor were estimated at 9 stations and 2 flood-control structures during the flood period. Although concentrations during the 2011 flood were within the range of what has been observed historically, concentrations decreased during peak streamflow on the lower Mississippi River. Prior to the 2011 flood, high concentrations of suspended sediment and nitrate were observed in March 2011 at stations downstream of the confluence of the upper Mississippi and Ohio Rivers, which probably resulted in a loss of available material for movement during the flood. In addition, the major contributor of streamflow to the lower Mississippi-Atchafalaya River subbasin during April and May was the Ohio River, whose water contained lower concentrations of suspended sediment, pesticides, and nutrients than water from the upper Mississippi River. Estimated fluxes for the 4-month flood period were still quite high and contributed approximately 50 percent of the estimated annual suspended sediment, nitrate, and total phosphorus fluxes in 2011; the largest fluxes were estimated at
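
    The monthly constituent fluxes mentioned above are, at their simplest, concentration multiplied by streamflow and accumulated over time. The sketch below applies the conventional conversion for concentration in mg/L and streamflow in cubic feet per second to obtain a daily load in short tons, then sums daily loads over a window; the numbers are invented for illustration and are not data from this study.

```python
"""Illustrative load (flux) calculation: daily load in short tons/day from
concentration (mg/L) and streamflow (ft^3/s), summed over a window.
Values are synthetic, not data from the 2011 flood study."""

TONS_PER_DAY_FACTOR = 0.0027  # mg/L * ft3/s -> short tons/day (approximate)

def daily_load(concentration_mg_per_l, streamflow_cfs):
    """Instantaneous load assuming the daily mean concentration and flow."""
    return concentration_mg_per_l * streamflow_cfs * TONS_PER_DAY_FACTOR

# Hypothetical daily nitrate concentrations and flows for a 3-day window.
daily_values = [(1.8, 1_200_000), (1.6, 1_450_000), (1.5, 1_600_000)]
window_flux = sum(daily_load(c, q) for c, q in daily_values)
print(f"Estimated flux over the window: {window_flux:,.0f} short tons")
```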

  8. Social tagging in the life sciences: characterizing a new metadata resource for bioinformatics.

    PubMed

    Good, Benjamin M; Tennis, Joseph T; Wilkinson, Mark D

    2009-09-25

    Academic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products. We assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags) per document, rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and the agreement between the aggregated social tagging metadata and MeSH indexing was low though the latter could be increased through voting. The most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search) to benefit substantially from socially generated metadata in
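
    The metrics used above (per-document tag density and inter-annotator agreement) can be illustrated with a toy tag table. The agreement measure below is a simple pairwise Jaccard overlap, which is one common choice and not necessarily the exact statistic used in the study; the documents and tags are invented.

```python
"""Toy illustration of per-document tag density and pairwise inter-annotator
agreement (Jaccard overlap) for social-tagging metadata. Data are invented."""
from itertools import combinations

# document -> {user: set of tags}
tags = {
    "pmid:111": {"alice": {"genomics", "soybean"}, "bob": {"genomics", "qtl"}},
    "pmid:222": {"alice": {"metadata"}, "carol": {"metadata", "ontology"}},
}

def tag_density(doc_tags):
    """Average number of distinct tags per document."""
    return sum(len(set().union(*users.values()))
               for users in doc_tags.values()) / len(doc_tags)

def mean_jaccard(doc_tags):
    """Mean pairwise Jaccard agreement over documents with >= 2 taggers."""
    scores = []
    for users in doc_tags.values():
        for (_, a), (_, b) in combinations(users.items(), 2):
            scores.append(len(a & b) / len(a | b))
    return sum(scores) / len(scores) if scores else float("nan")

print("density:", tag_density(tags), "agreement:", mean_jaccard(tags))
```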

  9. Soybean aphids making their summer appearance early

    USDA-ARS?s Scientific Manuscript database

    Two small, soft-bodied insects have begun showing up in South Dakota soybean. One is the soybean aphid, and the other is a mealybug. Soybean aphids are yellow to yellow/green and are usually found feeding on the underside of leaves. Incidence of soybean aphid has been a bit higher than typical fo...

  10. 7 CFR 810.1601 - Definition of soybeans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Definition of soybeans. 810.1601 Section 810.1601... GRAIN United States Standards for Soybeans Terms Defined § 810.1601 Definition of soybeans. Grain that consists of 50 percent or more of whole or broken soybeans (Glycine max (L.) Merr.) that will not pass...

  11. The flora of Oktibbeha County, Mississippi

    USGS Publications Warehouse

    Leidolf, A.; McDaniel, S.; Nuttle, T.

    2002-01-01

    We surveyed the flora of Oktibbeha County, Mississippi, U.S.A., from February 1994 to 1996. Occupying 118 square kilometers in east-central Mississippi, Oktibbeha County lies among 3 physiographic regions that include, from west to east, Interior Flatwoods, Pontotoc Ridge, and Black Prairie. Accordingly, the county harbors a diverse flora. Based on field work, as well as an extensive review of published literature and herbarium records at IBE and MISSA, we recorded a total of 1,148 taxa (1,125 species, 7 hybrids, 16 infraspecific taxa) belonging to 514 genera in 160 families; over 85% of all taxa documented were native. Compared to 3 other counties in east-central Mississippi, Oktibbeha County has the second largest recorded flora. The number of state-listed (endangered, threatened, or of special concern) taxa (67) documented in this survey far exceeds that reported from any other county in the region. Three introduced species, Ilex cornuta Lindl. & Paxton, Mahonia bealei (Fortune) Carrière, and Nandina domestica Thunb., are reported in a naturalized state for the first time from Mississippi. We also describe 16 different plant communities belonging to 5 broad habitat categories: bottomland forests, upland forests and prairies, aquatic habitats, seepage areas, and human-influenced habitats. A detailed description of the vegetation associated with each of these communities is provided.

  12. Seasonal soybean crop reflectance

    NASA Technical Reports Server (NTRS)

    Lemaster, E. W. (Principal Investigator); Chance, J. E.

    1983-01-01

    Data are presented from field measurements of 1980 including 5 acquisitions of handheld radiometer reflectance measurements, 7 complete sets of parameters for implementing the Suits model, and other biophysical parameters to characterize the soybean canopy. LANDSAT calculations on the simulated Brazilian soybean reflectance are included along with data collected during the summer and fall of 1981 on soybean single leaf optical parameters for three irrigation treatments. Tests of the Suits vegetative canopy reflectance model for the full hemisphere of observer directions as well as the nadir direction show moderate agreement for the visible channels of the MSS and poor agreement in the near infrared channel. Temporal changes in the spectral characteristics of the single leaves were seen to occur as a function of maturity, which demonstrates that the absorptance of a soybean single leaf is more a function of the transmittance characteristics than the seasonally consistent single leaf reflectance.

  13. Towards a semantic medical Web: HealthCyberMap's tool for building an RDF metadata base of health information resources based on the Qualified Dublin Core Metadata Set.

    PubMed

    Boulos, Maged N; Roudsari, Abdul V; Carson, Ewart R

    2002-07-01

    HealthCyberMap (http://healthcybermap.semanticweb.org/) aims at mapping Internet health information resources in novel ways for enhanced retrieval and navigation. This is achieved by collecting appropriate resource metadata in an unambiguous form that preserves semantics. We modelled a qualified Dublin Core (DC) metadata set ontology with extra elements for resource quality and geographical provenance in Protégé-2000. A metadata collection form helps in acquiring resource instance data within Protégé. The DC subject field is populated with UMLS terms directly imported from the UMLS Knowledge Source Server using the UMLS tab, a Protégé-2000 plug-in. The project is saved in RDFS/RDF. The ontology and associated form serve as a free tool for building and maintaining an RDF medical resource metadata base. The UMLS tab enables browsing and searching for concepts that best describe a resource, and importing them to DC subject fields. The resultant metadata base can be used with a search and inference engine, and have textual and/or visual navigation interface(s) applied to it, to ultimately build a medical Semantic Web portal. Different ways of exploiting Protégé-2000 RDF output are discussed. By making the context and semantics of resources, not merely their raw text and formatting, amenable to computer 'understanding,' we can build a Semantic Web that is more useful to humans than the current Web. This requires proper use of metadata and ontologies. Clinical codes can reliably describe the subjects of medical resources, establish the semantic relationships (as defined by the underlying coding scheme) between related resources, and automate their topical categorisation.
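
    As a minimal sketch of describing one health resource with Dublin Core terms in RDF, in the spirit of the metadata base described above, the snippet below uses the rdflib library. Only a simplified subset of elements is shown; the example URI and values are invented, and this is not the HealthCyberMap ontology itself.

```python
"""Minimal Dublin Core RDF description of one health resource (illustrative)."""
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, DCTERMS

g = Graph()
resource = URIRef("http://example.org/resources/asthma-guideline")  # hypothetical URI

g.add((resource, DC.title, Literal("Asthma management guideline")))
g.add((resource, DC.subject, Literal("Asthma")))                # e.g., a UMLS concept label
g.add((resource, DC.language, Literal("en")))
g.add((resource, DCTERMS.spatial, Literal("United Kingdom")))   # geographical provenance

print(g.serialize(format="pretty-xml"))
```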

  14. Serious Games for Health: The Potential of Metadata.

    PubMed

    Göbel, Stefan; Maddison, Ralph

    2017-02-01

    Numerous serious games and health games exist, either as commercial products (typically with a focus on entertaining a broad user group) or smaller games and game prototypes, often resulting from research projects (typically tailored to a smaller user group with a specific health characteristic). A major drawback of existing health games is that they are not very well described and attributed with (machine-readable, quantitative, and qualitative) metadata such as the characterizing goal of the game, the target user group, or expected health effects well proven in scientific studies. This makes it difficult or even impossible for end users to find and select the most appropriate game for a specific situation (e.g., health needs). Therefore, the aim of this article was to motivate the need for, and the potential benefit of, metadata for the description and retrieval of health games and to describe a descriptive model for the qualitative description of games for health. It was not the aim of the article to describe a stable, running system (portal) for health games. This will be addressed in future work. Building on previous work toward a metadata format for serious games, a descriptive model for the formal description of games for health is introduced. For the conceptualization of this model, classification schemata of different existing health game repositories are considered. The classification schema consists of three levels: a core set of mandatory descriptive fields relevant for all games for health application areas, a detailed level with more comprehensive, optional information about the games, and so-called extensions at level three, with specific descriptive elements relevant for dedicated health games application areas, for example, cardio training. A metadata format provides a technical framework to describe, find, and select appropriate health games matching the needs of the end user. Future steps to improve, apply, and promote the metadata format in the health games
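
    The three-level idea (mandatory core, optional detail, area-specific extension) can be sketched as a simple data structure. The field names below are illustrative stand-ins, not the schema actually proposed in the article.

```python
"""Sketch of a three-level metadata record for a health game.
Field names are illustrative, not the article's schema."""
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CoreMetadata:            # level 1: mandatory for all games for health
    title: str
    health_goal: str
    target_user_group: str

@dataclass
class DetailedMetadata:        # level 2: optional, more comprehensive
    platform: Optional[str] = None
    evidence_reference: Optional[str] = None

@dataclass
class CardioExtension:         # level 3: area-specific extension (e.g., cardio training)
    target_heart_rate_zone: Optional[str] = None

@dataclass
class HealthGameRecord:
    core: CoreMetadata
    detail: DetailedMetadata = field(default_factory=DetailedMetadata)
    extension: Optional[CardioExtension] = None

record = HealthGameRecord(
    core=CoreMetadata("Example cycling game", "increase physical activity", "older adults"),
    detail=DetailedMetadata(platform="exergaming bike"),
    extension=CardioExtension(target_heart_rate_zone="moderate"),
)
print(record)
```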

  15. Presence of Fungicides Used to Control Asian Soybean Rust in Streams in Agricultural Areas in the United States

    NASA Astrophysics Data System (ADS)

    Sandstrom, M. W.; Battaglin, W. A.

    2007-05-01

    Concentrations of 11 fungicides were measured in stream samples during 2 years in agricultural areas in the United States that grow predominantly corn and soybean. The fungicides are registered for control of Asian Soybean Rust (ASR), which entered the United States in 2004. Many of these fungicides were registered under an emergency exemption because evaluation of environmental risks related to their widespread use on soybeans had not been completed. Some of these fungicides are considered moderately to highly toxic to fish and aquatic invertebrates. We developed a solid-phase extraction and gas chromatography/mass spectrometry method for determining the fungicides at low concentrations (ng/L). Stream samples were collected 2 to 4 times at study areas during the late spring through fall season when fungicides are applied. Six fungicides registered for control of ASR (Phakopsora pachyrhizi) in 2005 were measured in streams in Alabama, Georgia, North Carolina, South Carolina, and Mississippi during August-November, 2005. One or more fungicides were detected in 8 of the 12 streams sampled. Azoxystrobin, pyraclostrobin, propiconazole, tebuconazole, and myclobutanil were found in at least one of the 40 samples collected, while chlorothalonil was not found. Azoxystrobin was detected most frequently, in 35 percent of the samples. In 2006, five additional fungicides registered for use in control of ASR were included in the analytical method. One or more of the fungicides (azoxystrobin, pyraclostrobin, trifloxystrobin, metconazole, propiconazole, tebuconazole, tetraconazole, myclobutanil) were detected in 12 of the 16 streams sampled from areas in the South and Midwest during May-September, 2006. Azoxystrobin was detected most frequently (40 percent of the samples) and the highest concentration was 1.1 μg/L in a small predominantly cotton and soybean watershed. The highest concentrations of azoxystrobin were measured prior to the spread of ASR in 2006, and the detections

  16. Stability of isoflavone isomers in steamed black soybeans and black soybean koji stored under different conditions.

    PubMed

    Huang, Ru-Yue; Chou, Cheng-Chun

    2009-03-11

    Steamed black soybeans and black soybean koji, a potentially functional food additive, were stored at 4 or 25 degrees C with or without deoxidant and desiccant for 120 days. After storage, steamed black soybeans and koji showed various extents of reduction in isoflavone contents dependent on storage temperature, packaging condition, and the kind of isoflavone isomer. Generally, black soybeans and koji showed the highest residual of isoflavone when they were stored at 4 degrees C with deoxidant and desiccant. Under this storage condition, beta-glucosides (daidzin, glycitin, and genistin), acetyl glucosides (acetyldaidzin, acetylglycitin, and acetylgenistin), malonyl glucosides (malonyldaidzin, malonylglycitin, and malonylgenistin), and aglycones (daidzein, glycitein, and genistein) in steamed black soybeans exhibited residuals of 100.1-100.9, 92.0-99.4, 90.0-94.0, and 77.2-78.8%, respectively, of their original contents after 120 days of storage. Meanwhile, the residuals found in black soybean koji were 77.8-90.0, 13.1-88.9, 66.7-85.5, and 76.4-80.6%, respectively.

  17. Production of Aflatoxin on Soybeans

    PubMed Central

    Gupta, S. K.; Venkitasubramanian, T. A.

    1975-01-01

    Probable factors influencing resistance to aflatoxin synthesis in soybeans have been investigated by using cultures of Aspergillus parasiticus NRRL 3240. Soybeans contain a small amount of zinc (0.01 μg/g) bound to phytic acid. Autoclaving soybeans at 15 pounds (psi) for 15 min increases the aflatoxin production, probably by making zinc available. Addition of zinc to both autoclaved and nonautoclaved soybeans promotes aflatoxin production. However, addition of varying levels of phytic acid at a constant concentration of zinc depresses aflatoxin synthesis with an increase in the added phytic acid. In a synthetic medium known to give good yields of aflatoxin, the addition of phytic acid (10 mM) decreases aflatoxin synthesis. PMID:1171654

  18. Improving Earth Science Metadata: Modernizing ncISO

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently ncISO has fallen behind in terms of supporting updates to conventions such as updates to the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
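
    The rubric idea can be illustrated with a simplified completeness check of ACDD-style global attributes on a NetCDF file. The attribute list below is a small, illustrative subset of ACDD and the scoring is not ncISO's actual algorithm.

```python
"""Simplified sketch of an ACDD-style completeness check on a NetCDF file,
in the spirit of the ncISO rubric score (not the actual ncISO algorithm)."""
from netCDF4 import Dataset

ACDD_SUBSET = ["title", "summary", "keywords", "creator_name", "license",
               "time_coverage_start", "time_coverage_end"]

def acdd_report(path):
    """Return a fractional score plus lists of found and missing attributes."""
    with Dataset(path) as nc:
        present = set(nc.ncattrs())
    found = [a for a in ACDD_SUBSET if a in present]
    missing = [a for a in ACDD_SUBSET if a not in present]
    return len(found) / len(ACDD_SUBSET), found, missing

# score, found, missing = acdd_report("example.nc")  # requires a NetCDF file
```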

  19. Genome-wide identification of soybean microRNA responsive to soybean cyst nematodes infection by deep sequencing.

    PubMed

    Tian, Bin; Wang, Shichen; Todd, Timothy C; Johnson, Charles D; Tang, Guiliang; Trick, Harold N

    2017-08-02

    The soybean cyst nematode (SCN), Heterodera glycines, is one of the most devastating diseases limiting soybean production worldwide. It is known that small RNAs, including microRNAs (miRNAs) and small interfering RNAs (siRNAs), play important roles in regulating plant growth and development, defense against pathogens, and responses to environmental changes. In order to understand the role of soybean miRNAs during SCN infection, we analyzed 24 small RNA libraries including three biological replicates from two soybean cultivars (SCN susceptible KS4607, and SCN HG Type 7 resistant KS4313N) that were grown under SCN-infested and -noninfested soil at two different time points (SCN feeding establishment and egg production). In total, 537 known and 70 putative novel miRNAs in soybean were identified from a total of 0.3 billion reads (average about 13.5 million reads for each sample) with the programs of Bowtie and miRDeep2 mapper. Differential expression analyses were carried out using edgeR to identify miRNAs involved in the soybean-SCN interaction. Comparative analysis of miRNA profiling indicated a total of 60 miRNAs belonging to 25 families that might be specifically related to cultivar responses to SCN. Quantitative RT-PCR validated similar miRNA interaction patterns as sequencing results. These findings suggest that miRNAs are likely to play key roles in soybean response to SCN. The present work could provide a framework for miRNA functional identification and the development of novel approaches for improving soybean SCN resistance in future studies.

  20. Solutions for extracting file level spatial metadata from airborne mission data

    NASA Astrophysics Data System (ADS)

    Schwab, M. J.; Stanley, M.; Pals, J.; Brodzik, M.; Fowler, C.; Icebridge Engineering/Spatial Metadata

    2011-12-01

    Authors: Michael Stanley, Mark Schwab, Jon Pals, Mary J. Brodzik, Cathy Fowler. Collaboration: Raytheon EED (5700 Rivertech Court, Riverdale, MD 20737) and NSIDC (University of Colorado, UCB 449, Boulder, CO 80309-0449). Data sets acquired from satellites and aircraft may differ in many ways. We will focus on the differences in spatial coverage between the two platforms. Satellite data sets over a given period typically cover large geographic regions. These data are collected in a consistent, predictable and well understood manner due to the uniformity of satellite orbits. Since satellite data collection paths are typically smooth and uniform the data from satellite instruments can usually be described with simple spatial metadata. Subsequently, these spatial metadata can be stored and searched easily and efficiently. Conversely, aircraft have significantly more freedom to change paths, circle, overlap, and vary altitude all of which add complexity to the spatial metadata. Aircraft are also subject to wind and other elements that result in even more complicated and unpredictable spatial coverage areas. This unpredictability and complexity makes it more difficult to extract usable spatial metadata from data sets collected on aircraft missions. It is not feasible to use all of the location data from aircraft mission data sets for use as spatial metadata. The number of data points in typical data sets poses serious performance problems for spatial searching. In order to provide efficient spatial searching of the large number of files cataloged in our systems, we need to extract approximate spatial descriptions as geo-polygons from a small number of vertices (fewer than two hundred). We present some of the challenges and solutions for creating airborne mission-derived spatial metadata. We are implementing these methods to create the spatial metadata for insertion of IceBridge mission data into ECS for public access through NSIDC and ECHO, but they are
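
    One generic way to reduce a dense flight track to a compact footprint polygon is sketched below using the shapely library: buffer the track, then simplify the outline until it has few vertices. This is not the production IceBridge/ECS algorithm, only an illustration under assumed buffer and tolerance values; the track itself is synthetic.

```python
"""Generic sketch of reducing a dense aircraft flight track to a small spatial
footprint polygon (well under 200 vertices). Not the production algorithm."""
from shapely.geometry import LineString

def track_footprint(lon_lat_points, buffer_deg=0.05, tolerance_deg=0.01):
    """Buffer the track, then simplify the outline to few vertices."""
    track = LineString(lon_lat_points)
    footprint = track.buffer(buffer_deg).simplify(tolerance_deg, preserve_topology=True)
    return list(footprint.exterior.coords)

# Synthetic zig-zag track with many points.
points = [(-50.0 + 0.001 * i, 70.0 + 0.02 * ((i // 50) % 2)) for i in range(5000)]
vertices = track_footprint(points)
print(f"footprint described by {len(vertices)} vertices")
```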

  1. CruiseViewer: SIOExplorer Graphical Interface to Metadata and Archives.

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J. J.; Miller, S. P.; Chase, A.; Clark, D.

    2002-12-01

    We are introducing "CruiseViewer" as a prototype graphical interface for the SIOExplorer digital library project, part of the overall NSF National Science Digital Library (NSDL) effort. When complete, CruiseViewer will provide access to nearly 800 cruises, as well as 100 years of documents and images from the archives of the Scripps Institution of Oceanography (SIO). The project emphasizes data object accessibility, a rich metadata format, efficient uploading methods and interoperability with other digital libraries. The primary function of CruiseViewer is to provide a human interface to the metadata database and to storage systems filled with archival data. The system schema is based on the concept of an "arbitrary digital object" (ADO). Arbitrary in that if the object can be stored on a computer system then SIOExplorer can manage it. Common examples are a multibeam swath bathymetry file, a .pdf cruise report, or a tar file containing all the processing scripts used on a cruise. We require a metadata file for every ADO in an ASCII "metadata interchange format" (MIF), which has proven to be highly useful for operability and extensibility. Bulk ADO storage is managed using the Storage Resource Broker, SRB, data handling middleware developed at the San Diego Supercomputer Center that centralizes management and access to distributed storage devices. MIF metadata are harvested from several sources and housed in a relational (Oracle) database. For CruiseViewer, CGI scripts resident on an Apache server are the primary communication and service request handling tools. Along with the CruiseViewer Java application, users can query, access and download objects via a separate method that operates through standard web browsers, http://sioexplorer.ucsd.edu. Both provide the functionality to query and view object metadata, and select and download ADOs. For the CruiseViewer application Java 2D is used to add a geo-referencing feature that allows users to select basemap images

  2. Mississippi Library Commission.

    ERIC Educational Resources Information Center

    Mississippi Library Commission, Jackson.

    This document presents funding and expenditure statistics for the Mississippi Library Commission for fiscal year 1995, as well as an overview of developments in the state's public libraries. These developments include budget increases; increased circulation and use of electronic reference sources; additional staffing; and developments in state…

  3. International Metadata Standards and Enterprise Data Quality Metadata Systems

    NASA Technical Reports Server (NTRS)

    Habermann, Ted

    2016-01-01

    Well-documented data quality is critical in situations where scientists and decision-makers need to combine multiple datasets from different disciplines and collection systems to address scientific questions or difficult decisions. Standardized data quality metadata could be very helpful in these situations. Many efforts at developing data quality standards falter because of the diversity of approaches to measuring and reporting data quality. The one size fits all paradigm does not generally work well in this situation. I will describe these and other capabilities of ISO 19157 with examples of how they are being used to describe data quality across the NASA EOS Enterprise and also compare these approaches with other standards.

  4. Metadata from data: identifying holidays from anesthesia data.

    PubMed

    Starnes, Joseph R; Wanderer, Jonathan P; Ehrenfeld, Jesse M

    2015-05-01

    The increasingly large databases available to researchers necessitate high-quality metadata that is not always available. We describe a method for generating this metadata independently. Cluster analysis and expectation-maximization were used to separate days into holidays/weekends and regular workdays using anesthesia data from Vanderbilt University Medical Center from 2004 to 2014. This classification was then used to describe differences between the two sets of days over time. We evaluated 3802 days and correctly categorized 3797 based on anesthesia case time (representing an error rate of 0.13%). Use of other metrics for categorization, such as billed anesthesia hours and number of anesthesia cases per day, led to similar results. Analysis of the two categories showed that surgical volume increased more quickly with time for non-holidays than holidays (p < 0.001). We were able to successfully generate metadata from data by distinguishing holidays based on anesthesia data. This data can then be used for economic analysis and scheduling purposes. It is possible that the method can be expanded to similar bimodal and multimodal variables.
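
    The abstract's expectation-maximization step can be illustrated with a two-component Gaussian mixture fitted to a single daily workload metric, separating low-activity (holiday-like) days from regular workdays. The data below are synthetic, not anesthesia records, and the metric name is a stand-in for the study's case-time measure.

```python
"""Sketch: separate days into two groups from one daily workload metric using a
two-component Gaussian mixture fitted by expectation-maximization (synthetic data)."""
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
regular = rng.normal(loc=900, scale=120, size=300)   # e.g., daily case-minutes
holidays = rng.normal(loc=150, scale=60, size=40)
daily_minutes = np.concatenate([regular, holidays]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(daily_minutes)
labels = gmm.predict(daily_minutes)

# The component with the lower mean is treated as the holiday/weekend-like group.
low_component = int(np.argmin(gmm.means_.ravel()))
n_holiday_like = int((labels == low_component).sum())
print(f"{n_holiday_like} days classified as holiday/weekend-like")
```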

  5. Soybean Production Lesson Plan.

    ERIC Educational Resources Information Center

    Carlson, Keith R.

    These lesson plans for teaching soybean production in a secondary or postsecondary vocational agriculture class are organized in nine units and cover the following topics: raising soybeans, optimum tillage, fertilizer and lime, seed selection, pest management, planting, troubleshooting, double cropping, and harvesting. Each lesson plan contains…

  6. SOYBEAN.APHID.LH.2009

    USDA-ARS?s Scientific Manuscript database

    Expression of soybean aphid (SA) resistance was characterized among 496 soybean lines in a twice-replicated field-plot test at the Eastern South Dakota Soil and Water Research Farm near Brookings, SD, in 2009. Natural infestations of SA occurred but were supplemented by placing individual stems of ...

  7. Identification of client involvement in sex trafficking in Mississippi.

    PubMed

    Williams, Patricia R; Wyatt, Wendyann; Gaddis, Angela

    2018-01-01

    Sex trafficking is an unrelenting problem in Mississippi. No quantitative data currently exist on the prevalence of sex trafficking or the identification of victims in the state. This study used the Trafficking in Victims Identification Tool (TVIT) (Short Version) to identify the extent to which a sample of clients (n = 28) receiving services at a non-profit social services agency in Jackson, Mississippi, were also victims of sex trafficking. The TVIT interview tool was completed during the intake phase at one social services agency in Mississippi. Over a 90-day period, 54% (n = 15) of participants were likely to have been trafficked for sex at some point. The researcher focused on three questions identified as predictors of sex trafficking. This research study provides a snapshot of the potential for identifying sex trafficking victims in Mississippi.

  8. Turning Data into Information: Assessing and Reporting GIS Metadata Integrity Using Integrated Computing Technologies

    ERIC Educational Resources Information Center

    Mulrooney, Timothy J.

    2009-01-01

    A Geographic Information System (GIS) serves as the tangible and intangible means by which spatially related phenomena can be created, analyzed and rendered. GIS metadata serves as the formal framework to catalog information about a GIS data set. Metadata is independent of the encoded spatial and attribute information. GIS metadata is a subset of…

  9. Dynamics of soybean rust epidemics in sequential plantings of soybean cultivars in Nigeria

    USDA-ARS?s Scientific Manuscript database

    Soybean rust, caused by the fungus Phakopsora pachyrhizi, is an important foliar disease of soybean. The disease intensity is dependent on environmental factors, although the precise conditions of most of these factors is not known. To help understand what environmental factors favor disease develop...

  10. Analyzing handwriting biometrics in metadata context

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Wolf, Franziska; Vielhauer, Claus

    2006-02-01

    In this article, methods for user recognition by online handwriting are experimentally analyzed using a combination of demographic data of users in relation to their handwriting habits. Online handwriting as a biometric method is characterized by having high variations of characteristics that influence the reliability and security of this method. These variations have not been researched in detail so far. Especially in cross-cultural applications it is urgent to reveal the impact of personal background on security aspects in biometrics. Metadata represent the background of writers by introducing cultural, biological, and conditional (changing) aspects such as first language, country of origin, gender, handedness, and experiences that influence handwriting and language skills. The goal is the revelation of intercultural impacts on handwriting in order to achieve higher security in biometric systems. In our experiments, in order to achieve relatively high coverage, 48 different handwriting tasks accomplished by 47 users from three countries (Germany, India, and Italy) have been investigated with respect to the relations between metadata and biometric recognition performance. For this purpose, hypotheses have been formulated and evaluated using the measurement of well-known recognition error rates from biometrics. The evaluation addressed both system reliability and security threats posed by skilled forgeries. For the latter purpose, a novel forgery type is introduced, which applies the personal metadata to security aspects and includes new methods of security tests. Finally, in our paper, we formulate recommendations for specific user groups and handwriting samples.

  11. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    NASA Astrophysics Data System (ADS)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. OlyMPUS will support scientists' services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  12. Predicting age groups of Twitter users based on language and metadata features.

    PubMed

    Morgan-Lopez, Antonio A; Kim, Annice E; Chew, Robert F; Ruddle, Paul

    2017-01-01

    Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and user handles' metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was conducted for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both Tweet language features and metadata features performed the best (74% precision, 74% recall, 74% F1), while the model containing only Twitter metadata features was least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may be helpful for
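
    A toy sketch of the modeling idea, combining text features with account metadata in an L1-regularized logistic regression, is shown below. The tweets, metadata columns, and labels are invented and far simpler than the study's features; only the general technique is illustrated.

```python
"""Toy sketch: combine tweet-text features with account metadata and fit an
L1-regularized logistic regression for age-group classification (invented data)."""
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["school starts tomorrow", "college finals week", "my grandkids visited",
         "homework is due", "dorm life is great", "retirement party today"]
metadata = [[120, 0], [800, 1], [300, 1], [90, 0], [650, 1], [200, 1]]  # e.g., followers, has_bio
labels = ["youth", "young_adult", "adult", "youth", "young_adult", "adult"]

text_features = TfidfVectorizer().fit_transform(texts)
features = hstack([text_features, csr_matrix(metadata)]).tocsr()

model = LogisticRegression(penalty="l1", solver="liblinear")
model.fit(features, labels)
print(model.predict(features[:2]))
```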

  13. Mississippi National River and Recreation Area Water Trail Plan.

    DOT National Transportation Integrated Search

    2017-05-05

    The Water Trail Plan describes the current conditions of and future plans for the Mississippi National River and Recreation Area (NRRA), a 72-mile stretch of the Mississippi River running through the Twin Cities region of Minnesota. In 2012, the NRRA...

  14. [Radiological dose and metadata management].

    PubMed

    Walz, M; Kolodziej, M; Madsack, B

    2016-12-01

    This article describes the features of management systems currently available in Germany for extraction, registration and evaluation of metadata from radiological examinations, particularly in the digital imaging and communications in medicine (DICOM) environment. In addition, the probable relevant developments in this area concerning radiation protection legislation, terminology, standardization and information technology are presented.
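
    A minimal sketch of extracting a few exposure-related DICOM attributes with the pydicom library is shown below. Production dose-management systems also parse Radiation Dose Structured Reports and vendor-specific objects; the tags listed here are only a small illustration, and the file path is hypothetical.

```python
"""Minimal sketch: read a few exposure-related DICOM attributes with pydicom."""
import pydicom

def exposure_metadata(path):
    """Return selected acquisition attributes from one DICOM image file."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {
        "Modality": ds.get("Modality"),
        "StudyDate": ds.get("StudyDate"),
        "KVP": ds.get("KVP"),
        "XRayTubeCurrent": ds.get("XRayTubeCurrent"),
        "Exposure": ds.get("Exposure"),
    }

# print(exposure_metadata("example.dcm"))  # requires a DICOM file
```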

  15. Document Classification in Support of Automated Metadata Extraction from Heterogeneous Collections

    ERIC Educational Resources Information Center

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  16. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    ERIC Educational Resources Information Center

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In the last years, the development of different Repositories of Learning Objects has been increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. The importance of high quality metadata is key for a successful retrieval. Learning Objects are described with metadata usually in the standard…

  17. A metadata schema for data objects in clinical research.

    PubMed

    Canham, Steve; Ohmann, Christian

    2016-11-24

    A large number of stakeholders have accepted the need for greater transparency in clinical research and, in the context of various initiatives and systems, have developed a diverse and expanding number of repositories for storing the data and documents created by clinical studies (collectively known as data objects). To make the best use of such resources, we assert that it is also necessary for stakeholders to agree and deploy a simple, consistent metadata scheme. The relevant data objects and their likely storage are described, and the requirements for metadata to support data sharing in clinical research are identified. Issues concerning persistent identifiers, for both studies and data objects, are explored. A scheme is proposed that is based on the DataCite standard, with extensions to cover the needs of clinical researchers, specifically to provide (a) study identification data, including links to clinical trial registries; (b) data object characteristics and identifiers; and (c) data covering location, ownership and access to the data object. The components of the metadata scheme are described. The metadata schema is proposed as a natural extension of a widely agreed standard to fill a gap not tackled by other standards related to clinical research (e.g., Clinical Data Interchange Standards Consortium, Biomedical Research Integrated Domain Group). The proposal could be integrated with, but is not dependent on, other moves to better structure data in clinical research.
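
    The three groups described above (study identification, data object characteristics, and location/ownership/access) can be sketched as a single metadata record. The field names and identifiers below are invented stand-ins, not the proposed schema or the DataCite standard verbatim.

```python
"""Illustrative metadata record for a clinical-study data object, loosely
mirroring the three groups described in the abstract. Field names are invented."""
import json

data_object_record = {
    "study": {                      # (a) study identification
        "title": "Example hypertension trial",
        "registry_id": "ISRCTN00000000",        # placeholder identifier
    },
    "object": {                     # (b) data object characteristics
        "doi": "10.0000/example.doi",           # placeholder DOI
        "type": "statistical analysis plan",
        "version": "1.0",
    },
    "access": {                     # (c) location, ownership, access
        "repository": "example repository",
        "access_type": "restricted",
        "contact": "data-access@example.org",
    },
}
print(json.dumps(data_object_record, indent=2))
```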

  18. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.

    2015-12-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.

  19. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata are also commonly regarded as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one is interested in communicating, using an approach commonly called "model-based integration". This paper presents the results of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
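
    The core idea of metadata-driven ETL can be sketched locally: a metadata description of a source feed drives extraction and transformation generically, so a new feed only needs new metadata rather than new code. The sketch below is a toy, local stand-in for the Web-Services-based process described above, not its implementation; field and feed names are invented.

```python
"""Toy sketch of a metadata-driven ETL step: metadata describing the source feed
drives the transformation of each record (illustrative only)."""

feed_metadata = {
    "fields": [
        {"source": "cust_id", "target": "customer_id", "transform": int},
        {"source": "amt", "target": "amount_eur", "transform": float},
        {"source": "ts", "target": "loaded_from", "transform": str},
    ]
}

def transform_row(raw_row, metadata):
    """Apply the metadata-described mapping to one source record."""
    return {f["target"]: f["transform"](raw_row[f["source"]])
            for f in metadata["fields"]}

source_rows = [{"cust_id": "17", "amt": "129.90", "ts": "feed-A"}]
warehouse_rows = [transform_row(r, feed_metadata) for r in source_rows]
print(warehouse_rows)
```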

  20. The ground truth about metadata and community detection in networks

    PubMed Central

    Peel, Leto; Larremore, Daniel B.; Clauset, Aaron

    2017-01-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures. PMID:28508065

  1. The ground truth about metadata and community detection in networks.

    PubMed

    Peel, Leto; Larremore, Daniel B; Clauset, Aaron

    2017-05-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.
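
    A basic, hands-on version of the comparison the paper discusses is to treat node metadata as one partition among many and measure how strongly it aligns with detected communities, for example via normalized mutual information on a standard example graph. The sketch below is not either of the statistical techniques introduced in the paper, only a simple illustration of why metadata should not be equated with ground truth.

```python
"""Compare detected communities with node metadata on the karate club graph
using normalized mutual information (illustration, not the paper's methods)."""
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
metadata = [G.nodes[n]["club"] for n in G.nodes]          # observed node attribute

communities = greedy_modularity_communities(G)
detected = [None] * G.number_of_nodes()
for label, members in enumerate(communities):
    for node in members:
        detected[node] = label

print("NMI(metadata, detected) =",
      round(normalized_mutual_info_score(metadata, detected), 3))
```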

  2. Comparison of soybean cultivars for enhancement of the polyamine contents in the fermented soybean natto using Bacillus subtilis (natto).

    PubMed

    Kobayashi, Kazuya; Horii, Yuichiro; Watanabe, Satoshi; Kubo, Yuji; Koguchi, Kumiko; Hoshi, Yoshihiro; Matsumoto, Ken-Ichi; Soda, Kuniyasu

    2017-03-01

    Polyamines have beneficial properties to prevent aging-associated diseases. Raw soybean has relatively high polyamine contents; and the fermented soybean natto is a good source of polyamines. However, detailed information of diversity of polyamine content in raw soybean is lacking. The objectives of this study were to evaluate differences of polyamines among raw soybeans and select the high polyamine-containing cultivar for natto production. Polyamine contents were measured chromatographically in 16 samples of soybean, which showed high variation among soybeans as follows: 93-861 nmol/g putrescine, 1055-2306 nmol/g spermidine, and 177-578 nmol/g spermine. We then confirmed the high correlations of polyamine contents between raw soybean and natto (r = 0.96, 0.95, and 0.94 for putrescine, spermidine, and spermine, respectively). Furthermore, comparison of the polyamine contents among 9 Japanese cultivars showed that 'Nakasen-nari' has the highest polyamine contents, suggesting its suitability for enhancement of polyamine contents of natto.

  3. Soybean-Enriched Snacks Based on African Rice

    PubMed Central

    Marengo, Mauro; Akoto, Hannah F.; Zanoletti, Miriam; Carpen, Aristodemo; Buratti, Simona; Benedetti, Simona; Barbiroli, Alberto; Johnson, Paa-Nii T.; Sakyi-Dawson, Esther O.; Saalia, Firibu K.; Bonomi, Francesco; Pagani, Maria Ambrogina; Manful, John; Iametti, Stefania

    2016-01-01

    Snacks were produced by extruding blends of partially-defatted soybean flour with flours from milled or parboiled African-grown rice. The interplay between composition and processing in producing snacks with a satisfactory sensory profile was addressed by e-sensing, and by molecular and rheological approaches. Soybean proteins play a main role in defining the properties of the protein network in the products. At the same content in soybean flour, use of parboiled rice flour increases the snack’s hardness. Electronic nose and electronic tongue discriminated samples containing a higher amount of soybean flour from those with a lower soybean flour content. PMID:28231133

  4. Soybean-Enriched Snacks Based on African Rice.

    PubMed

    Marengo, Mauro; Akoto, Hannah F; Zanoletti, Miriam; Carpen, Aristodemo; Buratti, Simona; Benedetti, Simona; Barbiroli, Alberto; Johnson, Paa-Nii T; Sakyi-Dawson, Esther O; Saalia, Firibu K; Bonomi, Francesco; Pagani, Maria Ambrogina; Manful, John; Iametti, Stefania

    2016-05-20

    Snacks were produced by extruding blends of partially-defatted soybean flour with flours from milled or parboiled African-grown rice. The interplay between composition and processing in producing snacks with a satisfactory sensory profile was addressed by e-sensing, and by molecular and rheological approaches. Soybean proteins play a main role in defining the properties of the protein network in the products. At the same content in soybean flour, use of parboiled rice flour increases the snack's hardness. Electronic nose and electronic tongue discriminated samples containing a higher amount of soybean flour from those with a lower soybean flour content.

  5. A Digital Broadcast Item (DBI) enabling metadata repository for digital, interactive television (digiTV) feedback channel networks

    NASA Astrophysics Data System (ADS)

    Lugmayr, Artur R.; Mailaparampil, Anurag; Tico, Florina; Kalli, Seppo; Creutzburg, Reiner

    2003-01-01

    Digital television (digiTV) is an additional multimedia environment, where metadata is one key element for the description of arbitrary content. This implies adequate structures for content description, which is provided by XML metadata schemes (e.g. MPEG-7, MPEG-21). Content and metadata management is the task of a multimedia repository, from which digiTV clients - equipped with an Internet connection - can access rich additional multimedia types over an "All-HTTP" protocol layer. Within this research work, we focus on conceptual design issues of a metadata repository for the storage of metadata, accessible from the feedback channel of a local set-top box. Our concept describes the whole heterogeneous life-cycle chain of XML metadata from the service provider to the digiTV equipment, device independent representation of content, accessing and querying the metadata repository, management of metadata related to digiTV, and interconnection of basic system components (http front-end, relational database system, and servlet container). We present our conceptual test configuration of a metadata repository that is aimed at a real-world deployment, done within the scope of the future interaction (fiTV) project at the Digital Media Institute (DMI) Tampere (www.futureinteraction.tv).

  6. Incorporating clinical metadata with digital image features for automated identification of cutaneous melanoma.

    PubMed

    Liu, Z; Sun, J; Smith, M; Smith, L; Warr, R

    2013-11-01

    Computer-assisted diagnosis (CAD) of malignant melanoma (MM) has been advocated to help clinicians to achieve a more objective and reliable assessment. However, conventional CAD systems examine only the features extracted from digital photographs of lesions. Failure to incorporate patients' personal information constrains the applicability in clinical settings. To develop a new CAD system to improve the performance of automatic diagnosis of melanoma, which, for the first time, incorporates digital features of lesions with important patient metadata into a learning process. Thirty-two features were extracted from digital photographs to characterize skin lesions. Patients' personal information, such as age, gender, and lesion site, and their combinations, was quantified as metadata. The integration of digital features and metadata was realized through an extended Laplacian eigenmap, a dimensionality-reduction method grouping lesions with similar digital features and metadata into the same classes. The diagnosis reached 82.1% sensitivity and 86.1% specificity when only multidimensional digital features were used, but improved to 95.2% sensitivity and 91.0% specificity after metadata were incorporated appropriately. The proposed system achieves a level of sensitivity comparable with experienced dermatologists aided by conventional dermoscopes. This demonstrates the potential of our method for assisting clinicians in diagnosing melanoma, and the benefit it could provide to patients and hospitals by greatly reducing unnecessary excisions of benign naevi. This paper proposes an enhanced CAD system incorporating clinical metadata into the learning process for automatic classification of melanoma. Results demonstrate that the additional metadata and the mechanism to incorporate them are useful for improving CAD of melanoma. © 2013 British Association of Dermatologists.
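
    A generic sketch of the overall idea, embedding combined image features and quantified metadata with a Laplacian-eigenmap-style method before classification, is shown below. The data are synthetic, the feature counts are arbitrary, and this is not the extended Laplacian eigenmap developed in the paper.

```python
"""Generic sketch: spectral (Laplacian-eigenmap-style) embedding of combined
image features and patient metadata, then a nearest-neighbour classifier.
Synthetic data; not the paper's extended method."""
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
image_features = rng.normal(size=(n, 32))                 # stand-in lesion descriptors
metadata = np.column_stack([rng.integers(20, 90, n),      # age
                            rng.integers(0, 2, n)])       # sex encoded as 0/1
labels = rng.integers(0, 2, n)                            # benign vs. malignant (synthetic)

combined = StandardScaler().fit_transform(np.hstack([image_features, metadata]))
embedded = SpectralEmbedding(n_components=5, random_state=0).fit_transform(combined)

clf = KNeighborsClassifier(n_neighbors=5).fit(embedded[:150], labels[:150])
print("held-out accuracy:", clf.score(embedded[150:], labels[150:]))
```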

  7. Ultrasound Assisted Synthesis of Hydroxylated Soybean Lecithin from Crude Soybean Lecithin as an Emulsifier.

    PubMed

    Chiplunkar, Pranali P; Pratap, Amit P

    2017-10-01

    Soybean lecithin is a by-product obtained during the degumming step of crude soybean oil refining. Crude soybean lecithin (CSL) contains a major amount of phospholipids (PLs) along with minor amounts of acylglycerols, bioactive components, etc. Due to the presence of PLs, CSL can be used as an emulsifier. CSL was utilized to synthesize hydroxylated soybean lecithin (HSL) by hydroxylation using hydrogen peroxide and a catalytic amount of lactic acid to enhance the hydrophilicity and emulsifying properties of CSL. To reduce the reaction time and increase the rate of reaction, HSL was synthesized under ultrasound irradiation. The effects of different operating parameters, such as lactic acid, hydrogen peroxide, temperature, ultrasonic power and duty cycle, on the synthesis of HSL were studied and optimized. The surface tension (SFT), interfacial tension (IFT) and critical micelle concentration (CMC) of the HSL (26.11 mN/m, 2.67 mN/m, 112 mg/L) were compared to those of CSL (37.53 mN/m, 6.22 mN/m, 291 mg/L), respectively. The HSL has better emulsion stability and lower foaming characteristics compared to CSL. Therefore, the product can be used as an effective emulsifier in food, pharmacy, lubricant, cosmetics, etc.

  8. Overexpression of Soybean Isoflavone Reductase (GmIFR) Enhances Resistance to Phytophthora sojae in Soybean

    PubMed Central

    Cheng, Qun; Li, Ninghui; Dong, Lidong; Zhang, Dayong; Fan, Sujie; Jiang, Liangyu; Wang, Xin; Xu, Pengfei; Zhang, Shuzhen

    2015-01-01

    Isoflavone reductase (IFR) is an enzyme involved in the biosynthetic pathway of isoflavonoid phytoalexin in plants. IFRs are unique to the plant kingdom and are considered to have crucial roles in plant response to various biotic and abiotic environmental stresses. Here, we report the characterization of a novel member of the soybean isoflavone reductase gene family GmIFR. Overexpression of GmIFR transgenic soybean exhibited enhanced resistance to Phytophthora sojae. Following stress treatments, GmIFR was significantly induced by P. sojae, ethephon (ET), abscisic acid (ABA), salicylic acid (SA). It is located in the cytoplasm when transiently expressed in soybean protoplasts. The daidzein levels reduced greatly for the seeds of transgenic plants, while the relative content of glyceollins in transgenic plants was significantly higher than that of non-transgenic plants. Furthermore, we found that the relative expression levels of reactive oxygen species (ROS) of transgenic soybean plants were significantly lower than those of non-transgenic plants after incubation with P. sojae, suggesting an important role of GmIFR might function as an antioxidant to reduce ROS in soybean. The enzyme activity assay suggested that GmIFR has isoflavone reductase activity. PMID:26635848

  9. Overexpression of Soybean Isoflavone Reductase (GmIFR) Enhances Resistance to Phytophthora sojae in Soybean.

    PubMed

    Cheng, Qun; Li, Ninghui; Dong, Lidong; Zhang, Dayong; Fan, Sujie; Jiang, Liangyu; Wang, Xin; Xu, Pengfei; Zhang, Shuzhen

    2015-01-01

    Isoflavone reductase (IFR) is an enzyme involved in the biosynthetic pathway of isoflavonoid phytoalexins in plants. IFRs are unique to the plant kingdom and are considered to have crucial roles in plant responses to various biotic and abiotic environmental stresses. Here, we report the characterization of a novel member of the soybean isoflavone reductase gene family, GmIFR. Transgenic soybean overexpressing GmIFR exhibited enhanced resistance to Phytophthora sojae. Following stress treatments, GmIFR was significantly induced by P. sojae, ethephon (ET), abscisic acid (ABA) and salicylic acid (SA). It is located in the cytoplasm when transiently expressed in soybean protoplasts. Daidzein levels were greatly reduced in the seeds of transgenic plants, while the relative content of glyceollins in transgenic plants was significantly higher than that of non-transgenic plants. Furthermore, we found that the relative levels of reactive oxygen species (ROS) in transgenic soybean plants were significantly lower than those in non-transgenic plants after incubation with P. sojae, suggesting that GmIFR might function as an antioxidant to reduce ROS in soybean. An enzyme activity assay confirmed that GmIFR has isoflavone reductase activity.

  10. Effect of Sinorhizobium fredii strain Sneb183 on the biological control of soybean cyst nematode in soybean.

    PubMed

    Tian, Feng; Wang, Yuanyuan; Zhu, Xiaofeng; Chen, Lijie; Duan, Yuxi

    2014-11-01

    The soybean cyst nematode (SCN; Heterodera glycines) is a major detriment to soybean production. The endophytic bacterium Sinorhizobium fredii strain Sneb183 is known to inhibit the activity of SCN. In the present study, soybean seedlings were inoculated with Sneb183 to study the penetration of juveniles and their development inside the roots. The number of cysts in the soybean roots was also examined, and induced systemic resistance in soybean was assessed using a split-root system. Our results revealed that the numbers of juveniles and cysts significantly decreased as a result of Sneb183 inoculation. Sneb183 also prolonged the developmental period of SCN in the root to 30 days, compared to 27 days in the control. Furthermore, the number of nematodes at each stage was lower in Sneb183-treated plants than in control plants. The split-root experiments showed that the S. fredii strain Sneb183 induced a systemic resistance to SCN infection in soybean, with a 38.75% repression of SCN penetration. Our study shows that Sneb183 can be an effective biocontrol agent for managing SCN infestation in soybean. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Resistance to Soybean Aphid Among Soybean Lines, Growth-chamber Tests, 2006 Through 2008

    USDA-ARS?s Scientific Manuscript database

    We tested for resistance to the soybean aphid (SBA, Aphis glycines) among several soybean lines, and rated lines as resistant or susceptible in seven tests. The ratings of plants with respect to SBA infestation differed among lines in all tests. Kosamame (PI 171451, test II), Bhart (PI 165989, tes...

  12. Principles of metadata organization at the ENCODE data coordination center

    PubMed Central

    Hong, Eurie L.; Sloan, Cricket A.; Chan, Esther T.; Davidson, Jean M.; Malladi, Venkat S.; Strattan, J. Seth; Hitz, Benjamin C.; Gabdank, Idan; Narayanan, Aditi K.; Ho, Marcus; Lee, Brian T.; Rowe, Laurence D.; Dreszer, Timothy R.; Roe, Greg R.; Podduturi, Nikhil R.; Tanaka, Forrest; Hilton, Jason A.; Cherry, J. Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org PMID:26980513

  13. Mississippi and SREB

    ERIC Educational Resources Information Center

    Southern Regional Education Board (SREB), 2009

    2009-01-01

    The Southern Regional Education Board (SREB) is a nonprofit organization that works collaboratively with Mississippi and 15 other member states to improve education at every level--from pre-K to postdoctoral study--through many effective programs and initiatives. SREB's "Challenge to Lead" Goals for Education, which call for the region…

  14. Cleaning by clustering: methodology for addressing data quality issues in biomedical metadata.

    PubMed

    Hu, Wei; Zaveri, Amrapali; Qiu, Honglei; Dumontier, Michel

    2017-09-18

    The ability to efficiently search and filter datasets depends on access to high quality metadata. While most biomedical repositories require data submitters to provide a minimal set of metadata, some, such as the Gene Expression Omnibus (GEO), allow users to specify additional metadata in the form of textual key-value pairs (e.g. sex: female). However, because there is no structured vocabulary to guide submitters regarding the metadata terms to use, the 44,000,000+ key-value pairs in GEO suffer from numerous quality issues, including redundancy, heterogeneity, inconsistency, and incompleteness. Such issues hinder the ability of scientists to hone in on datasets that meet their requirements and point to a need for accurate, structured and complete description of the data. In this study, we propose a clustering-based approach to address data quality issues in biomedical, specifically gene expression, metadata. First, we present three different kinds of similarity measures to compare metadata keys. Second, we design a scalable agglomerative clustering algorithm to cluster similar keys together. Our agglomerative clustering algorithm identified metadata keys that were similar to each other, based on (i) name, (ii) core concept and (iii) value similarities, and grouped them together. We evaluated our method using a manually created gold standard in which 359 keys were grouped into 27 clusters based on six types of characteristics: (i) age, (ii) cell line, (iii) disease, (iv) strain, (v) tissue and (vi) treatment. The algorithm generated 18 clusters containing 355 keys (four clusters with only one key were excluded). Most keys in the 18 clusters were correctly assigned, but 13 keys were not related to their clusters. We compared our approach with four other published methods. Our approach significantly outperformed them for most metadata keys and achieved the best average F-Score (0
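
    The clustering idea can be sketched with standard tools. The example below clusters GEO-style metadata keys by plain name similarity using average-linkage hierarchical clustering; the paper's core-concept and value similarity measures are not reproduced, and the keys and cut-off threshold are illustrative.

```python
# Sketch: cluster similar biomedical metadata keys by name similarity.
# Uses plain string similarity and average-linkage hierarchical clustering;
# the paper's core-concept and value similarities are not reproduced here.
from difflib import SequenceMatcher
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

keys = ["age", "age (years)", "patient age", "cell line", "cellline",
        "tissue", "tissue type", "disease state", "disease"]

n = len(keys)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        sim = SequenceMatcher(None, keys[i].lower(), keys[j].lower()).ratio()
        dist[i, j] = dist[j, i] = 1.0 - sim          # distance = 1 - similarity

# Average-linkage agglomerative clustering on the precomputed distances;
# the 0.5 cut-off is an illustrative threshold, not the paper's.
tree = linkage(squareform(dist), method="average")
cluster_ids = fcluster(tree, t=0.5, criterion="distance")

for cid in sorted(set(cluster_ids)):
    print(cid, [k for k, c in zip(keys, cluster_ids) if c == cid])
```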

  15. Effects of Soybean Seed Size on Weed Competition

    USDA-ARS?s Scientific Manuscript database

    Organic soybean producers must rely on various, nonherbicidal tactics for weed management. Increased soybean seed size may be one method to increase the competitiveness of the soybean canopy. Soybean varieties Hutcheson, NC-Roy, and NC-Raleigh were separated into four or five seed size classes. Seed...

  16. Mississippi Technology Transfer Center

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The Mississippi Technology Transfer Center at the John C. Stennis Space Center in Hancock County, Miss., was officially dedicated in 1987. The center is home to several state agencies as well as the Center For Higher Learning.

  17. Soybean (2010 JGI User Meeting)

    ScienceCinema

    Stacey, Gary

    2018-02-13

    Gary Stacey, associate director of the National Center for Soybean Biotechnology at the University of Missouri, gives a talk simply titled "Soybean" on March 24, 2010 at the 5th Annual DOE JGI User Meeting.

  18. Twiddlenet: Metadata Tagging and Data Dissemination in Mobile Device Networks

    DTIC Science & Technology

    2007-09-01

    hosting a distributed data dissemination application. Stated simply, there are a multitude of handheld devices on the market that can communicate in...content (UGC) across a network of distributed devices. This sharing is accomplished through the use of descriptive metadata tags that are assigned to a...file once it has been shared. These metadata files are uploaded to a centralized portal and arranged for efficient UGC location and searching

  19. 78 FR 20888 - Foreign-Trade Zone (FTZ) 158-Vicksburg/Jackson, Mississippi; Notification of Proposed Production...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-08

    ...--Vicksburg/Jackson, Mississippi; Notification of Proposed Production Activity; Lane Furniture Industries, Inc., (Upholstered Furniture), Belden, Saltillo, and Verona, Mississippi The Greater Mississippi Foreign-Trade Zone... Furniture Industries, Inc. (Lane), located in Belden, Saltillo, and Verona, Mississippi. The notification...

  20. phosphorus retention data and metadata

    EPA Pesticide Factsheets

    Phosphorus retention in wetlands data and metadata. This dataset is associated with the following publication: Lane, C., and B. Autrey. Phosphorus retention of forested and emergent marsh depressional wetlands in differing land uses in Florida, USA. Wetlands Ecology and Management (Springer Science and Business Media B.V.; formerly Kluwer Academic Publishers B.V., Germany), 24(1): 45-60, (2016).

  1. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    NASA Astrophysics Data System (ADS)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes creating, and providing tools to create, metadata about the models and processes used to generate its derived data products. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema, which structures metadata documents, and a project- or community-specific (XML) Controlled Vocabulary (CV), which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments, and is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will bring several benefits: easy access to the existing CIM Documents describing CMIP5 models and simulations that are being downscaled; access to software tools that have been developed to search, manipulate, and visualize CIM metadata; and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions that include the full provenance of derived data products will contribute to making that data (and the models and processes which generated it) more open and transparent to the user community.

  2. 77 FR 40529 - Soybean Promotion and Research: Amend the Order To Adjust Representation on the United Soybean Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... research designed to strengthen the soybean industry's position in the marketplace, and to maintain and... Service 7 CFR Part 1220 [Doc. No. AMS-LS-12-0022] Soybean Promotion and Research: Amend the Order To... in 2009. As required by the Soybean Promotion, Research, and Consumer Information Act (Act...

  3. Mississippi CaP HBCU Undergraduate Research Training Program

    DTIC Science & Technology

    2016-09-01

    activities. This activity occurred once a week (between weeks 4-6) and included tours of Urology, Hematology-Oncology, and Radiation Oncology facilities...Director of UMMC-Cancer Institute, Professor and Chairman, Department of Radiation Oncology, University of Mississippi Medical Center, "Precision...Jackson, MS, 4Vanderbilt University, Nashville, TN, 5Department of Pathology and Radiation Oncology, Mississippi Medical Center, Jackson, MS Tumor hypoxia

  4. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
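
    The cataloguing and search ideas described above (an index over metadata records, keyword search, autocomplete) can be sketched in plain Python. The real tools are backed by Apache Solr; the record fields below are illustrative and do not follow the NGEE Arctic or ARM schemas.

```python
# Sketch of the cataloguing ideas described above (index, keyword search,
# autocomplete) in plain Python; the production tools use Apache Solr, and
# the record fields below are illustrative, not the NGEE Arctic/ARM schema.
from collections import defaultdict

records = [
    {"id": "ds-001", "title": "Soil temperature, Barrow site", "keywords": ["soil", "temperature", "arctic"]},
    {"id": "ds-002", "title": "Radiation measurements, SGP", "keywords": ["radiation", "aerosol"]},
    {"id": "ds-003", "title": "Soil moisture, Council plot", "keywords": ["soil", "moisture", "arctic"]},
]

# Build a simple inverted index from keyword -> record ids.
index = defaultdict(set)
for rec in records:
    for kw in rec["keywords"]:
        index[kw].add(rec["id"])

def search(term):
    """Return records whose keywords contain the exact term."""
    ids = index.get(term, set())
    return [r for r in records if r["id"] in ids]

def autocomplete(prefix):
    """Suggest indexed keywords starting with the typed prefix."""
    return sorted(kw for kw in index if kw.startswith(prefix))

print(autocomplete("so"))                   # ['soil']
print([r["id"] for r in search("soil")])    # ['ds-001', 'ds-003']
```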

  5. Identification of the soybean HyPRP family and specific gene response to Asian soybean rust disease.

    PubMed

    Neto, Lauro Bücker; de Oliveira, Rafael Rodrigues; Wiebke-Strohm, Beatriz; Bencke, Marta; Weber, Ricardo Luís Mayer; Cabreira, Caroline; Abdelnoor, Ricardo Vilela; Marcelino, Francismar Correa; Zanettini, Maria Helena Bodanese; Passaglia, Luciane Maria Pereira

    2013-07-01

    Soybean [Glycine max (L.) Merrill], one of the most important crop species in the world, is very susceptible to abiotic and biotic stress. Soybean plants have developed a variety of molecular mechanisms that help them survive stressful conditions. Hybrid proline-rich proteins (HyPRPs) constitute a family of cell-wall proteins with a variable N-terminal domain and a conserved C-terminal domain that is phylogenetically related to non-specific lipid transfer proteins. Members of the HyPRP family are involved in basic cellular processes, and their expression and activity are modulated by environmental factors. In this study, microarray analysis and real-time RT-qPCR were used to identify putative HyPRP genes in the soybean genome and to assess their expression in different plant tissues. Some of the genes were also analyzed by time-course real-time RT-qPCR in response to infection by Phakopsora pachyrhizi, the causal agent of Asian soybean rust disease. Our findings indicate that the time of induction of a defense pathway is crucial in triggering the soybean resistance response to P. pachyrhizi. This is the first study to identify the soybean HyPRP group B family and to analyze disease-responsive GmHyPRP during infection by P. pachyrhizi.

  6. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    NASA Astrophysics Data System (ADS)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format, and validation rules are executed in real time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible.
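
    The grid-style validation described above can be sketched as a set of row checks over an uploaded template. The column names and rules below are illustrative assumptions, not SESAR's actual template definition or the IGSN syntax.

```python
# Sketch of grid-style validation rules for uploaded sample metadata rows.
# Column names and rules are illustrative assumptions, not SESAR's actual
# template definition or IGSN syntax.
import csv
import io

REQUIRED = ["sample_name", "material", "latitude", "longitude"]

def validate_row(row, line_no):
    """Return a list of human-readable errors for one template row."""
    errors = []
    for field in REQUIRED:
        if not row.get(field, "").strip():
            errors.append(f"line {line_no}: missing required field '{field}'")
    try:
        lat, lon = float(row["latitude"]), float(row["longitude"])
        if not (-90 <= lat <= 90 and -180 <= lon <= 180):
            errors.append(f"line {line_no}: coordinates out of range")
    except (KeyError, ValueError):
        errors.append(f"line {line_no}: coordinates are not numeric")
    return errors

# A tiny in-memory "uploaded template" with one valid and one invalid row.
template = io.StringIO(
    "sample_name,material,latitude,longitude\n"
    "CORE-01,Rock,61.5,-149.1\n"
    ",Sediment,95.0,10.0\n"
)
all_errors = []
for i, row in enumerate(csv.DictReader(template), start=2):
    all_errors.extend(validate_row(row, i))
print("\n".join(all_errors) or "template is valid")
```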

  7. Role of Soybean mosaic virus-encoded proteins in seed and aphid transmission in soybean

    USDA-ARS?s Scientific Manuscript database

    Soybean mosaic virus (SMV) is seed and aphid transmitted and can cause significant reductions in yield and seed quality in soybean, Glycine max. The roles in seed and aphid transmission of selected SMV-encoded proteins were investigated by constructing chimeric recombinants between SMV 413 (efficien...

  8. Principles of metadata organization at the ENCODE data coordination center.

    PubMed

    Hong, Eurie L; Sloan, Cricket A; Chan, Esther T; Davidson, Jean M; Malladi, Venkat S; Strattan, J Seth; Hitz, Benjamin C; Gabdank, Idan; Narayanan, Aditi K; Ho, Marcus; Lee, Brian T; Rowe, Laurence D; Dreszer, Timothy R; Roe, Greg R; Podduturi, Nikhil R; Tanaka, Forrest; Hilton, Jason A; Cherry, J Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org. © The Author(s) 2016. Published by Oxford University Press.

  9. Final Report on the Mississippi Project CLEAR Voice Teacher Working Conditions Survey

    ERIC Educational Resources Information Center

    Berry, Barnett; Fuller, Ed

    2008-01-01

    In 2007, the state of Mississippi conducted a web-based survey of all school-based licensed educators in which they were asked to share their perceptions of the state of teacher working conditions in Mississippi. This report of the Mississippi Teacher Working Conditions Survey, Project CLEAR Voice (Cultivate Learning Environments to Accelerate…

  10. Leveraging Metadata to Create Interactive Images... Today!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard, creating a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates, and make it accessible in a user-friendly form on the website, but we also embed the same metadata within the image files themselves. Thus, images downloaded from the site carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications, and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
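
    Because AVM tags travel inside the image file as an XMP packet, they can be recovered from a downloaded image without any web service. The sketch below scans a file for the xmpmeta markers; the file name is hypothetical, and production code would normally use a dedicated XMP library instead of this byte-level scan.

```python
# Sketch: pull the raw XMP packet (which carries AVM tags such as sky
# coordinates) out of a tagged image by scanning for the xmpmeta markers.
# The file name is hypothetical; real pipelines typically use a dedicated
# XMP library instead of this byte-level scan.
def extract_xmp(path):
    """Return the raw XMP packet embedded in an image file, if any."""
    data = open(path, "rb").read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None                       # no embedded XMP/AVM packet
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

try:
    xmp = extract_xmp("spitzer_image.jpg")    # hypothetical tagged image
except FileNotFoundError:
    xmp = None
if xmp and "avm:" in xmp:                     # "avm" prefix assumed for AVM tags
    print("AVM-tagged image; packet length:", len(xmp))
else:
    print("no AVM metadata found")
```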

  11. NetCDF4/HDF5 and Linked Data in the Real World - Enriching Geoscientific Metadata without Bloat

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Car, Nicholas; Druken, Kelsey; Poudjom-Djomani, Yvette; Butcher, Stirling; Evans, Ben; Wyborn, Lesley

    2017-04-01

    NetCDF4 has become the dominant generic format for many forms of geoscientific data, leveraging (and constraining) the versatile HDF5 container format, while providing metadata conventions for interoperability. However, the encapsulation of detailed metadata within each file can lead to metadata "bloat", and difficulty in maintaining consistency where metadata is replicated to multiple locations. Complex conceptual relationships are also difficult to represent in simple key-value netCDF metadata. Linked Data provides a practical mechanism to address these issues by associating the netCDF files and their internal variables with complex metadata stored in Semantic Web vocabularies and ontologies, while complying with and complementing existing metadata conventions. One of the stated objectives of the netCDF4/HDF5 formats is that they should be self-describing: containing metadata sufficient for cataloguing and using the data. However, this objective can be regarded as only partially-met where details of conventions and definitions are maintained externally to the data files. For example, one of the most widely used netCDF community standards, the Climate and Forecasting (CF) Metadata Convention, maintains standard vocabularies for a broad range of disciplines across the geosciences, but this metadata is currently neither readily discoverable nor machine-readable. We have previously implemented useful Linked Data and netCDF tooling (ncskos) that associates netCDF files, and individual variables within those files, with concepts in vocabularies formulated using the Simple Knowledge Organization System (SKOS) ontology. NetCDF files contain Uniform Resource Identifier (URI) links to terms represented as SKOS Concepts, rather than plain-text representations of those terms, so we can use simple, standardised web queries to collect and use rich metadata for the terms from any Linked Data-presented SKOS vocabulary. Geoscience Australia (GA) manages a large volume of diverse
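
    The Linked Data pattern described above amounts to storing a resolvable URI, rather than a plain-text term, as a variable attribute. The sketch below shows the general idea with the netCDF4 Python library; the attribute name and vocabulary URI are illustrative and are not necessarily the exact convention used by the ncskos tooling.

```python
# Sketch of the Linked Data pattern described above: store a URI pointing
# at a SKOS concept as a variable attribute instead of a plain-text term.
# The attribute name and vocabulary URI are illustrative; they are not
# necessarily the exact convention used by the ncskos tooling.
import numpy as np
from netCDF4 import Dataset

with Dataset("gravity_grid.nc", "w") as nc:          # hypothetical file name
    nc.createDimension("x", 10)
    nc.createDimension("y", 10)
    var = nc.createVariable("bouguer_anomaly", "f4", ("x", "y"))
    var[:] = np.zeros((10, 10), dtype="f4")
    var.units = "um/s2"
    # Link the variable to a vocabulary concept rather than free text.
    var.skos_concept_uri = "http://vocabulary.example.org/geophysics/bouguer_anomaly"
    nc.Conventions = "CF-1.6"

with Dataset("gravity_grid.nc") as nc:
    print(nc["bouguer_anomaly"].skos_concept_uri)
```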

  12. Cephalosporium Wilt of Elm in the Lower Mississippi Valley

    Treesearch

    T. H. Filer; F. I. McCracken; E. R. Toole

    1968-01-01

    Dead and dying American elms (Ulmus americana) and cedar elms (U. crassifolia) were observed on the Delta Experimental Forest, Stoneville, Mississippi, and in Desha County, Arkansas, near the Mississippi River (about 30 miles northwest of the experimental forest) during August 1967. The only fungus consistently isolated from these...

  13. An Investigation into Burnout among Mississippi High School Principals.

    ERIC Educational Resources Information Center

    Smith-Stevenson, Ruthie; Saul, Charles E.

    This paper presents findings of a study that analyzed the extent of burnout among Mississippi high school principals. Specifically, it identified the level of burnout among Mississippi high school principals, the relationship between certain demographic variables and burnout, and the relationship between burnout and personality type. The level of…

  14. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    DTIC Science & Technology

    2007-07-01

    the data created, the length of play time, album name, and the genre. Without resource metadata, portable digital music players would not be so...notion of a catalog card in a library. An example of metadata is the description of a music file specifying the creator, the artist that performed the song...describe structure and formatting which are critical to interoperability and the management of databases. Going back to the portable music player example

  15. Approaches for Increasing Soybean Use by Low-Income Brazilian Families.

    ERIC Educational Resources Information Center

    Wright, Maria da Gloria Miotto; And Others

    1982-01-01

    Describes an educational/distributional campaign to increase use of soybeans by low-income Brazilian families. Initially, no families surveyed used soybeans but, after participating in a program on nutrition and soybeans, and free distribution of soybeans for one month, soybean usage by participants increased even when free soybeans were replaced…

  16. Floods of April 1979, Mississippi, Alabama, and Georgia

    USGS Publications Warehouse

    Edelen, G.W.; Wilson, K.V.; Harkins, J.R.; Miller, J.F.; Chin, E.H.

    1986-01-01

    A major storm April 11-13, 1979, following a series of storms in March and April, brought large amounts of rainfall over the southeastern United States. The heaviest rain fell over north-central Mississippi and Alabama; a maximum of 21.5 inches was observed at Louisville 14 SE, Mississippi. Floods in Mississippi and Alabama were the maximum of record at 60 streamflow gaging stations in the Coosa, Alabama, Tombigbee, Chickasawhay, Pearl, and Big Black River basins. On the Pearl River, peak discharges at main-stem gaging stations generally approached or exceeded those of the great flood of 1874, and recurrence intervals generally were greater than 100 years. Nine lives were reported lost. Estimated damages totaled nearly $400 million, and seventeen thousand people were driven from their homes in Jackson, Mississippi. This report presents analyses of the meteorological settings of the storms; summaries of flood stages and discharges at 221 streamflow gaging stations; stages and contents of 10 reservoirs; flood-crest stages and hydrograph data consisting of gage height, discharge, and accumulated runoff at selected times at 46 gaging stations; groundwater fluctuations in 11 observation wells; and water salinity and temperature at 22 sites along the Intracoastal Waterway in Mobile Bay. (USGS)

  17. Increasing the international visibility of research data by a joint metadata schema

    NASA Astrophysics Data System (ADS)

    Svoboda, Nikolai; Zoarder, Muquit; Gärtner, Philipp; Hoffmann, Carsten; Heinrich, Uwe

    2017-04-01

    The BonaRes Project ("Soil as a sustainable resource for the bioeconomy") was launched in 2015 to promote sustainable soil management and to avoid fragmentation of efforts (Wollschläger et al., 2016). For this purpose, an IT infrastructure is being developed to upload, manage, store, and provide research data and its associated metadata. The research data provided by the BonaRes data centre are, in principle, not subject to any restrictions on reuse. For all research data, comprehensive standardized metadata are the key enabler for effective use of these data. Providing proper metadata is often viewed as an extra burden that consumes additional work and resources. In our lecture we underline the benefits of structured and interoperable metadata, such as accessibility, discovery, interpretation and linking of data, and weigh these advantages against the costs in time, personnel and other resources. Building on this, we describe the metadata framework of BonaRes, which combines the OGC standards for description, visualization, exchange and discovery of geodata with the DataCite schema for the publication and citation of research data. This enables the generation of a DOI, a unique identifier that provides a permanent link to the citable research data. Using OGC standards makes data and metadata interoperable with the numerous research data provided via INSPIRE and enables further services such as CSW for harvesting, WMS for visualization and WFS for downloading. We explain the mandatory fields that result from our approach and give a general overview of our metadata architecture implementation. Literature: Wollschläger, U.; Helming, K.; Heinrich, U.; Bartke, S.; Kögel-Knabner, I.; Russell, D.; Eberhardt, E. & Vogel, H.-J.: The BonaRes Centre - A virtual institute for soil research in the context of a sustainable bio-economy. Geophysical Research Abstracts, Vol. 18, EGU2016-9087, 2016.

  18. Predicting age groups of Twitter users based on language and metadata features

    PubMed Central

    Morgan-Lopez, Antonio A.; Chew, Robert F.; Ruddle, Paul

    2017-01-01

    Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and the handle's metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features combined, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was fitted for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both tweet language features and metadata features performed best (74% precision, 74% recall, 74% F1), while the model containing only Twitter metadata features was least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may be
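
    The modelling setup described above (tweet language features plus account metadata feeding an L1-regularized logistic regression) can be sketched as follows. The tweets, metadata fields and labels are toy placeholders, not the study's dataset or feature definitions.

```python
# Sketch of the modelling setup described above: tweet language features
# plus account metadata features feeding an L1-regularized logistic
# regression. The tweets, metadata fields, and labels below are toy
# placeholders, not the study's dataset.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["last day of school today!!", "college finals week, send coffee",
          "meeting my grandkids this weekend", "prom pictures are up"]
metadata = np.array([[120, 0], [340, 1], [90, 0], [150, 0]])  # e.g. followers, has_url
labels = ["youth", "young_adult", "adult", "youth"]

# Combine sparse text features with dense account-level metadata features.
text_features = TfidfVectorizer().fit_transform(tweets)
features = hstack([text_features, csr_matrix(metadata)])

# liblinear supports the L1 penalty used for the per-age-group models.
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(features, labels)
print(model.predict(features))
```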

  19. Systemic properties of myclobutanil in soybean plants, affecting control of Asian soybean rust (Phakopsora pachyrhizi).

    PubMed

    Kemmitt, Gregory M; DeBoer, Gerrit; Ouimette, David; Iamauti, Marilene

    2008-12-01

    The demethylation inhibitor (DMI) fungicide myclobutanil can be an effective component of spray programmes designed to control the highly destructive plant pathogen Phakopsora pachyrhizi Syd. & P. Syd., causal agent of Asian soybean rust. Myclobutanil is known from previous studies in grapevines to be xylem mobile. This study investigates the mobility profile of myclobutanil in soybean as an important component of its effective field performance. Over a 12 day period under greenhouse conditions, a constant uptake of myclobutanil from leaflet surfaces into the leaflet tissue was observed. Once in the leaflet, myclobutanil was seen to redistribute throughout the tissue, although no movement out of leaflets occurred owing to a lack of phloem mobility. The ability of myclobutanil to redistribute over distance within the soybean plant was revealed when visualizing movement of the compound to foliage above the point of application on the plant stem. An efficacy bioassay demonstrated that the systemic properties of myclobutanil allow control of disease at a point remote from the initial site of compound application. It is suggested that the high degree of xylem systemicity displayed by myclobutanil in soybean foliage is a contributory factor towards its commercial effectiveness for control of Asian soybean rust.

  20. Resistance to Phomopsis Seed Decay in soybean

    USDA-ARS?s Scientific Manuscript database

    Phomopsis seed decay (PSD) of soybean is caused primarily by the fungal pathogen, Phomopsis longicolla T.W. Hobbs along with other Phomopsis and Diaporthe spp. This disease causes poor seed quality and suppresses yield in most soybean-growing countries. Infected soybean seeds can be symptomless, but...

  1. A Common Metadata System for Marine Data Portals

    NASA Astrophysics Data System (ADS)

    Wosniok, C.; Breitbach, G.; Lehfeldt, R.

    2012-04-01

    ), Web Feature Service (WFS) and Sensor Observation Service (SOS), which ensures interoperability and extensibility. In addition, metadata, as a crucial component for searching and finding information in large data infrastructures, are provided via the Catalogue Web Service (CS-W). MDI-DE and COSYNA rely on the metadata information system for marine metadata NOKIS, which reflects a metadata profile tailored for marine data according to the specifications of German coastal authorities. In spite of this common software base, interoperability between the two data collections requires constant alignment of the diverse data processed by the two portals. While monitoring data in the MDI-DE are currently rather campaign-based, COSYNA has to fit constantly evolving time series into metadata sets. With all data following the same metadata profile, we now reach full interoperability between the different data collections. The distributed marine information system provides options to search, find and visualise the harmonised results from continuous monitoring, field campaigns, numerical modeling and other data in one web client.

  2. Miles to Go: Mississippi. Pre-Kindergarten--Time to Begin

    ERIC Educational Resources Information Center

    Suitts, Steve

    2010-01-01

    This report by the Southern Education Foundation (SEF) finds that high-quality pre-kindergarten (Pre-K) is the "first, essential step towards building the educated workforce that will enable a better economic future for Mississippi." The report calls on Mississippi leaders to establish a blue-ribbon, bipartisan commission to develop a…

  3. Manifestations of Metadata: From Alexandria to the Web--Old is New Again

    ERIC Educational Resources Information Center

    Kennedy, Patricia

    2008-01-01

    This paper is a discussion of the use of metadata, in its various manifestations, to access information. Information management standards are discussed. The connection between the ancient world and the modern world is highlighted. Individual perspectives are paramount in fulfilling information seeking. Metadata is interpreted and reflected upon in…

  4. Linking the historic 2011 Mississippi River flood to coastal wetland sedimentation

    USGS Publications Warehouse

    Falcini, Federico; Khan, Nicole S.; Macelloni, Leonardo; Horton, Benjamin P.; Lutken, Carol B.; McKee, Karen L.; Santoleri, Rosalia; Colella, Simone; Li, Chunyan; Volpe, Gianluca; D’Emidio, Marco; Salusti, Alessandro; Jerolmack, Douglas J.

    2012-01-01

    Wetlands in the Mississippi River deltaic plain are deteriorating in part because levees and control structures starve them of sediment. In the spring of 2011 a record-breaking flood brought discharge on the lower Mississippi River to dangerous levels, forcing managers to divert up to 3,500 m3/s of water to the Atchafalaya River Basin. Here we quantify differences between the Mississippi and Atchafalaya River inundation and sediment-plume patterns using field-calibrated satellite data, and assess the impact these outflows had on wetland sedimentation. We characterize the hydrodynamics and suspended sediment patterns of the Mississippi River plume using in-situ data collected during the historic flood. We show that the focused, high-momentum jet from the leveed Mississippi delivered sediment far offshore. In contrast, the plume from the Atchafalaya was more diffuse; diverted water inundated a large area; and sediment was trapped within the coastal current. Maximum sedimentation (up to several centimetres) occurred in the Atchafalaya Basin despite the larger sediment load carried by the Mississippi. Minimum accumulation occurred along the shoreline between these river sources. Our findings provide a mechanistic link between river-mouth dynamics and wetland sedimentation patterns that is relevant for plans to restore deltaic wetlands using artificial diversions.

  5. A Metadata Standard for Hydroinformatic Data Conforming to International Standards

    NASA Astrophysics Data System (ADS)

    Notay, Vikram; Carstens, Georg; Lehfeldt, Rainer

    2017-04-01

    The affordable availability of computing power and digital storage has been a boon for the scientific community. The hydroinformatics community has also benefitted from this so-called digital revolution, which has enabled the tackling of more and more complex physical phenomena using hydroinformatic models, instruments, sensors, etc. With models becoming more complex, computational domains getting larger and the resolution of computational grids and measurement data getting finer, a large amount of data is generated and consumed in any hydroinformatics-related project. The ubiquitous availability of the internet also contributes to this phenomenon, with data being collected through sensor networks connected to telecommunications networks and the internet long before the term Internet of Things existed. Although generally positive, this exponential increase in the number of available datasets gives rise to the need to describe the data in a standardised way, both to provide a quick overview of the data and to facilitate interoperability of data from different sources. The Federal Waterways Engineering and Research Institute (BAW) is a federal authority of the German Federal Ministry of Transport and Digital Infrastructure. BAW acts as a consultant for the safe and efficient operation of the German waterways. As part of its consultation role, BAW operates a number of physical and numerical models for sections of inland and marine waterways. In order to uniformly describe the data produced and consumed by these models throughout BAW, and to ensure interoperability with other federal and state institutes on the one hand and with EU countries on the other, a metadata profile for hydroinformatic data has been developed at BAW. The metadata profile is composed in its entirety using the ISO 19115 international standard for metadata related to geographic information. Due to the widespread use of the ISO 19115 standard in the existing geodata infrastructure

  6. Transgenic soybean overexpressing GmSamT1 exhibits resistance to multiple-HG types of soybean cyst nematode Heterodera glycines

    USDA-ARS?s Scientific Manuscript database

    Soybean (Glycine max (L.) Merr.) salicylic acid methyl transferase (GmSAMT1) catalyzes the conversion of salicylic acid to methyl salicylate. Prior results showed that when GmSAMT1 was overexpressed in transgenic soybean hairy roots, resistance is conferred against soybean cyst nematode (SCN), Heter...

  7. Integrating Microarray Analysis and the Soybean Genome to Understand the Soybean's Iron Deficiency Response

    USDA-ARS?s Scientific Manuscript database

    Transcriptional profiles of soybean (Glycine max, L. Merr) near isogenic lines Clark (PI548553, iron efficient) and IsoClark (PI547430, iron inefficient) were analyzed and compared using the Affymetrix® GeneChip® Soybean Genome Array. A comparison of plants grown under Fe-sufficient and Fe-limited ...

  8. Nonnative Fishes in the Upper Mississippi River System

    USGS Publications Warehouse

    Irons, Kevin S.; DeLain, Steven A.; Gittinger, Eric; Ickes, Brian S.; Kolar, Cindy S.; Ostendort, David; Ratcliff, Eric N.; Benson, Amy J.; Irons, Kevin S.

    2009-01-01

    The introduction, spread, and establishment of nonnative species is widely regarded as a leading threat to aquatic biodiversity and consequently is ranked among the most serious environmental problems facing the United States today. This report presents information on nonnative fish species observed by the Long Term Resource Monitoring Program on the Upper Mississippi River System, a nexus of North American freshwater fish diversity for the Nation. The Long Term Resource Monitoring Program, as part of the U.S. Army Corps of Engineers' Environmental Management Plan, is the Nation's largest river monitoring program and stands as the primary source of standardized ecological information on the Upper Mississippi River System. The Long Term Resource Monitoring Program has been monitoring fish communities in six study areas on the Upper Mississippi River System since 1989. During this period, more than 3.5 million individual fish, representing 139 species, have been collected. Although fish monitoring activities of the Long Term Resource Monitoring Program focus principally on entire fish communities, data collected by the Program are useful for detecting and monitoring the establishment and spread of nonnative fish species within the Upper Mississippi River System Basin. Sixteen taxa of nonnative fishes, or hybrids thereof, have been observed by the Long Term Resource Monitoring Program since 1989, and several species are presently expanding their distribution and increasing in abundance. For example, in one of the six study areas monitored by the Long Term Resource Monitoring Program, the number of established nonnative species has increased from two to eight species in less than 10 years. Furthermore, contributions of those eight species can account for up to 60 percent of the total annual catch and greater than 80 percent of the observed biomass. These observations are critical because the Upper Mississippi River System stands as a nationally significant pathway for

  9. Raising orphans from a metadata morass: A researcher's guide to re-use of public 'omics data.

    PubMed

    Bhandary, Priyanka; Seetharam, Arun S; Arendsee, Zebulun W; Hur, Manhoi; Wurtele, Eve Syrkin

    2018-02-01

    More than 15 petabases of raw RNAseq data is now accessible through public repositories. Acquisition of other 'omics data types is expanding, though most lack a centralized archival repository. Data-reuse provides tremendous opportunity to extract new knowledge from existing experiments, and offers a unique opportunity for robust, multi-'omics analyses by merging metadata (information about experimental design, biological samples, protocols) and data from multiple experiments. We illustrate how predictive research can be accelerated by meta-analysis with a study of orphan (species-specific) genes. Computational predictions are critical to infer orphan function because their coding sequences provide very few clues. The metadata in public databases is often confusing; a test case with Zea mays mRNA seq data reveals a high proportion of missing, misleading or incomplete metadata. This metadata morass significantly diminishes the insight that can be extracted from these data. We provide tips for data submitters and users, including specific recommendations to improve metadata quality by more use of controlled vocabulary and by metadata reviews. Finally, we advocate for a unified, straightforward metadata submission and retrieval system. Copyright © 2017 Elsevier B.V. All rights reserved.
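
    The submission-side review the authors recommend can be approximated by normalising key-value metadata against a controlled vocabulary and flagging nonstandard or missing entries. The vocabulary and synonym map below are illustrative, not an existing community standard.

```python
# Sketch of the kind of submission-side review the authors advocate:
# normalise key-value metadata against a small controlled vocabulary and
# flag nonstandard or missing entries. The vocabulary and synonym map
# below are illustrative, not an existing community standard.
CONTROLLED_KEYS = {"organism", "tissue", "genotype", "treatment", "replicate"}
SYNONYMS = {"species": "organism", "tissue type": "tissue", "cultivar": "genotype"}

def review(sample_metadata):
    """Return (cleaned metadata, list of quality issues) for one sample."""
    cleaned, issues = {}, []
    for key, value in sample_metadata.items():
        key = SYNONYMS.get(key.strip().lower(), key.strip().lower())
        if key not in CONTROLLED_KEYS:
            issues.append(f"nonstandard key: '{key}'")
        if not str(value).strip():
            issues.append(f"empty value for '{key}'")
        cleaned[key] = value
    for required in ("organism", "tissue"):
        if required not in cleaned:
            issues.append(f"missing required key: '{required}'")
    return cleaned, issues

record = {"Species": "Zea mays", "tissue type": "leaf", "growth stage": "V3", "treatment": ""}
cleaned, issues = review(record)
print(issues)   # flags 'growth stage' as nonstandard and the empty treatment value
```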

  10. Mississippi forest industry

    Treesearch

    Dwane D. Van Hooser

    1968-01-01

    Nearly 402 million cubic feet of industrial roundwood were harvested in Mississippi in 1966. This was the largest harvest in a decade. Nearly three-fifths was softwood-mainly pine. Altogether, 305 million cubic feet were processed by the State's forest industries. Some 123 million cubic feet were shipped to surrounding States, while 26 million cubic feet were...

  11. Effective use of metadata in the integration and analysis of multi-dimensional optical data

    NASA Astrophysics Data System (ADS)

    Pastorello, G. Z.; Gamon, J. A.

    2012-12-01

    Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. The
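
    Treating acquisition metadata as extra, filterable dimensions can be sketched very simply: keep protocol, sky-condition and processing metadata alongside each data array and select subsets on those fields early in the analysis. The field names and values below are illustrative.

```python
# Sketch of the "metadata as extra dimensions" idea: acquisition metadata
# (sampling protocol, sky conditions, processing choices) is kept alongside
# the spectra and used to select subsets early in an analysis. Field names
# and values are illustrative.
import numpy as np

datasets = [
    {"site": "plot_A", "protocol": "transect_v2", "sky": "clear",
     "reflectance": np.random.rand(100)},
    {"site": "plot_B", "protocol": "transect_v1", "sky": "overcast",
     "reflectance": np.random.rand(100)},
    {"site": "plot_C", "protocol": "transect_v2", "sky": "overcast",
     "reflectance": np.random.rand(100)},
]

def select(collection, **criteria):
    """Filter datasets on any metadata 'dimension' (protocol, sky, ...)."""
    return [d for d in collection if all(d.get(k) == v for k, v in criteria.items())]

# Only integrate data collected under a specific field protocol...
clear_v2 = select(datasets, protocol="transect_v2", sky="clear")
# ...or branch the processing by sky condition.
for d in select(datasets, sky="overcast"):
    d["reflectance_smoothed"] = np.convolve(d["reflectance"], np.ones(5) / 5, mode="same")

print([d["site"] for d in clear_v2])   # ['plot_A']
```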

  12. An asynchronous traversal engine for graph-based rich metadata management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Carns, Philip; Ross, Robert B.

    Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal

  13. An asynchronous traversal engine for graph-based rich metadata management

    DOE PAGES

    Dai, Dong; Carns, Philip; Ross, Robert B.; ...

    2016-06-23

    Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal
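
    The contrast between the two traversal styles discussed above can be illustrated on a toy in-memory property graph: a level-synchronous BFS waits for each frontier to finish before starting the next, while a worklist-style traversal processes vertices as soon as they become available. This single-process sketch only illustrates the idea; it is not the distributed engine described in the paper.

```python
# Toy contrast between the two traversal styles discussed above on a small
# in-memory property graph: a level-synchronous BFS with a barrier after
# each frontier, and a worklist-style traversal with no per-level barrier.
from collections import deque

graph = {  # vertex -> neighbours (e.g. job -> files -> users provenance edges)
    "job42": ["fileA", "fileB"], "fileA": ["alice"], "fileB": ["alice", "bob"],
    "alice": [], "bob": [],
}

def level_synchronous_bfs(start):
    visited, frontier, levels = {start}, [start], []
    while frontier:
        levels.append(frontier)                       # barrier after each level
        nxt = [n for v in frontier for n in graph[v] if n not in visited]
        visited.update(nxt)
        frontier = list(dict.fromkeys(nxt))           # deduplicate, keep order
    return levels

def asynchronous_traversal(start):
    visited, worklist, order = {start}, deque([start]), []
    while worklist:                                   # no level barrier: process
        v = worklist.popleft()                        # vertices as they arrive
        order.append(v)
        for n in graph[v]:
            if n not in visited:
                visited.add(n)
                worklist.append(n)
    return order

print(level_synchronous_bfs("job42"))   # [['job42'], ['fileA', 'fileB'], ['alice', 'bob']]
print(asynchronous_traversal("job42"))  # ['job42', 'fileA', 'fileB', 'alice', 'bob']
```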

  14. Phomopsis seed decay of soybean

    USDA-ARS?s Scientific Manuscript database

    Soybean Phomopsis seed decay (PSD) causes poor seed quality and suppresses yield in most soybean-growing countries. The disease is caused primarily by the fungal pathogen Phomopsis longicolla along with other Phomopsis and Diaporthe spp. Infected seed range from symptomless to shriveled, elongated, ...

  15. Metadata Repository for Improved Data Sharing and Reuse Based on HL7 FHIR.

    PubMed

    Ulrich, Hannes; Kock, Ann-Kristin; Duhm-Harbeck, Petra; Habermann, Jens K; Ingenerf, Josef

    2016-01-01

    Unreconciled data structures and formats are a common obstacle to the urgently required sharing and reuse of data within healthcare and medical research. Within the North German Tumor Bank of Colorectal Cancer, clinical and sample data, based on a harmonized data set, are collected and can be pooled by using a hospital-integrated Research Data Management System supporting biobank and study management. Adding further partners who are not using the core data set requires manual adaptation and mapping of data elements. To address this manual intervention and to support the reuse of heterogeneous healthcare instance data (value level) and data elements (metadata level), a metadata repository has been developed. The metadata repository is an ISO 11179-3 conformant server application built for annotating and mediating data elements. The implemented architecture includes the translation of metadata information about data elements into the FHIR standard, using the FHIR Data Element resource with the ISO 11179 Data Element Extensions. The FHIR-based processing allows the exchange of data elements with clinical and research IT systems as well as with other metadata systems. As more data elements are annotated and harmonized, data quality and integration improve, enabling data analytics and decision support.
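
    The exchange format described above can be pictured as a FHIR-style JSON resource per harmonized data element. The sketch below follows the general JSON layout of the (STU3) FHIR DataElement resource, but the extension URL and registry value are placeholders rather than the actual ISO 11179 Data Element Extensions.

```python
# Sketch: serialise a harmonised data element as a FHIR-style JSON resource
# so it can be exchanged with clinical and research systems. The layout
# follows the general FHIR JSON pattern for the (STU3) DataElement resource,
# but the extension URL and coding below are placeholders, not the exact
# ISO 11179 extension definitions.
import json

data_element = {
    "resourceType": "DataElement",
    "status": "active",
    "name": "TumorStageUICC",
    "element": [{
        "path": "TumorStageUICC",
        "definition": "UICC stage of the colorectal tumour at diagnosis",
        "type": [{"code": "CodeableConcept"}],
        "extension": [{
            "url": "http://example.org/fhir/StructureDefinition/iso11179-registry",  # placeholder
            "valueString": "north-german-tumor-bank",
        }],
    }],
}

print(json.dumps(data_element, indent=2))
```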

  16. Transgenic soybean overexpressing GmSAMT1 exhibits resistance to multiple-HG types of soybean cyst nematode Heterodera glycines

    DOE PAGES

    Lin, Jingyu; Mazarei, Mitra; Zhao, Nan; ...

    2016-05-23

    Soybean (Glycine max (L.) Merr.) salicylic acid methyl transferase (GmSAMT1) catalyses the conversion of salicylic acid to methyl salicylate. Prior results showed that when GmSAMT1 was overexpressed in transgenic soybean hairy roots, resistance is conferred against soybean cyst nematode (SCN), Heterodera glycines Ichinohe. In this study, we produced transgenic soybean overexpressing GmSAMT1 and characterized their response to various SCN races. Transgenic plants conferred a significant reduction in the development of SCN HG type 1.2.5.7 (race 2), HG type 0 (race 3) and HG type 2.5.7 (race 5). Among transgenic lines, GmSAMT1 expression in roots was positively associated with SCN resistance. In some transgenic lines, there was a significant decrease in salicylic acid titer relative to control plants. No significant seed yield differences were observed between transgenic and control soybean plants grown in one greenhouse with a 22 °C day/night temperature, whereas transgenic soybean had higher yield than controls grown in a warmer greenhouse (27 °C day/23 °C night). In a 1-year field experiment in Knoxville, TN, there was no significant difference in seed yield between the transgenic and nontransgenic soybean under conditions with negligible SCN infection. We hypothesize that GmSAMT1 expression affects salicylic acid biosynthesis, which, in turn, attenuates SCN development, without negative consequences to soybean yield or other morphological traits. Furthermore, we conclude that GmSAMT1 overexpression confers broad resistance to multiple SCN races, which would be potentially applicable to commercial production.

  17. Transgenic soybean overexpressing GmSAMT1 exhibits resistance to multiple-HG types of soybean cyst nematode Heterodera glycines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jingyu; Mazarei, Mitra; Zhao, Nan

    Soybean (Glycine max (L.) Merr.) salicylic acid methyl transferase (GmSAMT1) catalyses the conversion of salicylic acid to methyl salicylate. Prior results showed that when GmSAMT1 was overexpressed in transgenic soybean hairy roots, resistance was conferred against soybean cyst nematode (SCN), Heterodera glycines Ichinohe. In this study, we produced transgenic soybean overexpressing GmSAMT1 and characterized their response to various SCN races. Transgenic plants exhibited a significant reduction in the development of SCN HG type 1.2.5.7 (race 2), HG type 0 (race 3) and HG type 2.5.7 (race 5). Among transgenic lines, GmSAMT1 expression in roots was positively associated with SCN resistance. In some transgenic lines, there was a significant decrease in salicylic acid titer relative to control plants. No significant seed yield differences were observed between transgenic and control soybean plants grown in one greenhouse at a 22 °C day/night temperature, whereas transgenic soybean had higher yield than controls grown in a warmer greenhouse (27 °C day/23 °C night temperature). In a 1-year field experiment in Knoxville, TN, there was no significant difference in seed yield between the transgenic and nontransgenic soybean under conditions with negligible SCN infection. We hypothesize that GmSAMT1 expression affects salicylic acid biosynthesis, which, in turn, attenuates SCN development, without negative consequences to soybean yield or other morphological traits. Furthermore, we conclude that GmSAMT1 overexpression confers broad resistance to multiple SCN races, which would be potentially applicable to commercial production.

  18. Impacts of Insect Defoliation in Cottonwood Plantations in Mississippi

    Treesearch

    Theodor D. Leininger; Nathan M. Schiff; Jackie Henne-Kerr

    2004-01-01

    In spring 2001, a notodontid moth, Gluphisia septentrionis Wlkr., defoliated about 2,000 acres of 9- and 10-year-old eastern cottonwood (Populus deltoides Bartr.) trees in west central Mississippi. The farm manager had never seen cottonwood defoliated by that species of moth, because it was not considered a pest in Mississippi...

  19. Tracking contaminants down the Mississippi

    USGS Publications Warehouse

    Swarzenski, P.; Campbell, P.

    2004-01-01

    The Mississippi River and its last major downstream distributary, the Atchafalaya River, provide approximately 90 percent of the freshwater input to the Gulf of Mexico. Analyses of sediment cores using organic and inorganic tracers as well as benthic foraminifera appear to provide a reliable record of the historic variability of hypoxia in the northern Gulf of Mexico over the past few centuries. Natural variability in hypoxic events may have been driven largely by flooding cycles of El Niño/La Niña prior to recent increases in nutrient loading. Specifically, large floods in 1979, 1983, 1993 and 1998, compounded with the widespread use of fertilizers, also appear at least partially responsible for the recent (post-1980) dramatic increase of hypoxic events in the Mississippi Bight.

  20. Invasion of the Upper Mississippi River System by Saltwater Amphipods

    EPA Science Inventory

    Zoobenthos surveys of the Great Rivers of the Upper Mississippi River basin (Missouri, Mississippi, and Ohio Rivers) provided an opportunity for documenting a series of invasions by euryhaline amphipods. The corophiid amphipod Apocorophium lacustre was first found in the Ohio Ri...

  1. 77 FR 64348 - Land Acquisitions: Mississippi Band of Choctaw Indians

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-19

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Land Acquisitions: Mississippi Band of Choctaw Indians AGENCY: Bureau of Indian Affairs, Interior. ACTION: Notice of transfer of federally owned lands... of Choctaw Indians, Choctaw Reservation, Mississippi (Tribe). This notice announces that the...

  2. Serving Fisheries and Ocean Metadata to Communities Around the World

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie F.

    2007-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 services descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches, or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage, and data resolution for selected datasets. The directory also offers data holders a means to advertise and search their data through customized portals, which are subset views of the directory. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1988. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as the International Directory Network (IDN), has been providing its services and sharing its experience and knowledge of metadata at the international, national, regional, and local levels for many years. Active partners include the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA), and organizations (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP).
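
    As a rough illustration of the DIF discovery format mentioned above, the Python sketch below assembles a skeletal record; it uses only a small, assumed subset of DIF fields (Entry_ID, Entry_Title, Summary, Keyword) and is not a complete or validated DIF document:

      import xml.etree.ElementTree as ET

      def make_dif_record(entry_id, title, summary, keywords):
          """Build a skeletal DIF-style discovery record; real DIF records carry
          many more fields, including controlled science keywords."""
          dif = ET.Element("DIF")
          ET.SubElement(dif, "Entry_ID").text = entry_id
          ET.SubElement(dif, "Entry_Title").text = title
          ET.SubElement(dif, "Summary").text = summary
          for kw in keywords:
              ET.SubElement(dif, "Keyword").text = kw
          return ET.tostring(dif, encoding="unicode")

      print(make_dif_record(
          "EXAMPLE_FISHERIES_001",
          "Example fisheries survey, Gulf of Mexico",
          "Placeholder summary for a hypothetical ocean data set.",
          ["Oceans", "Fisheries"]))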

  3. Automatic meta-data collection of STP observation data

    NASA Astrophysics Data System (ADS)

    Ishikura, S.; Kimura, E.; Murata, K.; Kubo, T.; Shinohara, I.

    2006-12-01

    For geoscience and STP (Solar-Terrestrial Physics) studies, various observations have been made by satellites and ground-based observatories. These data are saved and managed at many organizations, but there is no common procedure or rule for providing and sharing the data files, so researchers have had difficulty searching and analyzing such different types of data distributed over the Internet. To support such cross-over analyses of observation data, we have developed the STARS (Solar-Terrestrial data Analysis and Reference System). The STARS consists of a client application (STARS-app), the meta-database (STARS-DB), the portal Web service (STARS-WS) and the download agent Web service (STARS DLAgent-WS). The STARS-DB includes directory information, access permissions, protocol information to retrieve data files, hierarchy information of mission/team/data, and user information. Users of the STARS are able to download observation data files without knowing the locations of the files by using the STARS-DB. We have implemented the Portal-WS to retrieve meta-data from the meta-database. One reason we use a Web service is to overcome firewall restrictions, which have become stricter in recent years: it is difficult for the STARS client application to access the STARS-DB directly by sending SQL queries. Using the Web service, we succeeded in placing the STARS-DB behind the Portal-WS, preventing it from being exposed on the Internet. The STARS accesses the Portal-WS by sending a SOAP (Simple Object Access Protocol) request over HTTP, and meta-data are received as a SOAP response. The STARS DLAgent-WS provides clients with data files downloaded from data sites. The data files are provided with a variety of protocols (e.g., FTP, HTTP, FTPS and SFTP), individually selected at each site. The clients send a SOAP request with download request messages and receive observation data files as a SOAP response with ...
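
    The SOAP-over-HTTP exchange described above can be sketched roughly as follows; the endpoint URL, operation name, and message fields are hypothetical placeholders, since the abstract does not give the actual STARS service interface:

      import urllib.request

      # Hypothetical Portal-WS endpoint and operation; not the real STARS address.
      PORTAL_WS = "https://stars.example.jp/portal-ws"

      SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Body>
          <getMetadata xmlns="urn:example:stars:portal">
            <mission>GEOTAIL</mission>
            <startTime>2005-01-01T00:00:00Z</startTime>
            <endTime>2005-01-02T00:00:00Z</endTime>
          </getMetadata>
        </soap:Body>
      </soap:Envelope>"""

      def query_portal():
          """Send the SOAP request over HTTP and return the raw SOAP response."""
          req = urllib.request.Request(
              PORTAL_WS,
              data=SOAP_ENVELOPE.encode("utf-8"),
              headers={"Content-Type": "text/xml; charset=utf-8",
                       "SOAPAction": "urn:example:stars:portal#getMetadata"},
              method="POST",
          )
          with urllib.request.urlopen(req) as resp:
              return resp.read().decode("utf-8")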

  4. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    PubMed

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  5. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata

    PubMed Central

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-01

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included. PMID:29337877

  6. SOYBEAN.DEFOLIATION.1.SD.2011

    USDA-ARS?s Scientific Manuscript database

    Various chewing insects feed upon soybean plants, and their infestations may be economically significant in some years in the north-central United States. Soybean lines that are resistant to defoliation may be useful for management of chewing insect pests. Levels of defoliation from chewing insec...

  7. Bathymetry and acoustic backscatter data collected in 2010 from Cat Island, Mississippi

    USGS Publications Warehouse

    Buster, Noreen A.; Pfeiffer, William R.; Miselis, Jennifer L.; Kindinger, Jack G.; Wiese, Dana S.; Reynolds, B.J.

    2012-01-01

    Scientists from the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center (SPCMSC), in collaboration with the U.S. Army Corps of Engineers (USACE), conducted geophysical and sedimentological surveys around Cat Island, the westernmost island in the Mississippi-Alabama barrier island chain (fig. 1). The objectives of the study were to understand the geologic evolution of Cat Island relative to other barrier islands in the northern Gulf of Mexico and to identify relationships between the geologic history, present-day morphology, and sediment distribution. This report contains data from the bathymetry and side-scan sonar portion of the study collected during two geophysical cruises. Interferometric swath bathymetry and side-scan sonar data were collected aboard the RV G.K. Gilbert September 7-15, 2010. Single-beam bathymetry was collected in shallow water around the island (<2 meters (m)) from the RV Streeterville from September 28 to October 2, 2010, to cover the data gap between the landward limit of the previous cruise and the shoreline. This report serves as an archive of processed interferometric swath and single-beam bathymetry and side-scan sonar data. GIS data products include a 50-m cell size interpolated gridded bathymetry surface, trackline maps, and an acoustic side-scan sonar image. Additional files include error analysis maps, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata.

  8. Soybean aphid feeding on resistant soybean leads to induction of xenobiotic stress response and suppression of salivary effector genes

    USDA-ARS?s Scientific Manuscript database

    The soybean aphid, Aphis glycines, poses serious challenges to soybean production in Asia, where it is native, and North America, where it is invasive. To date, 6 major soybean genes for host plant resistance (HPR) to A. glycines have been identified, including Rag1, which is available in commercial...

  9. Assessment of the effects of Hirsutella minnesotensis on Soybean Cyst Nematode and growth of soybean

    USDA-ARS?s Scientific Manuscript database

    Hirsutella minnesotensis is a fungal endoparasite of nematode juveniles and parasitizes soybean cyst nematodes (SCN) with high frequency. In this study, the effects of two H. minnesotensis isolates on the population and distribution of SCN and on the growth of soybean were evaluated. Experiments were conducted...

  10. In-field Access to Geoscientific Metadata through GPS-enabled Mobile Phones

    NASA Astrophysics Data System (ADS)

    Hobona, Gobe; Jackson, Mike; Jordan, Colm; Butchart, Ben

    2010-05-01

    Fieldwork is an integral part of much geosciences research. But whilst geoscientists have physical or online access to data collections in the laboratory or at base stations, equivalent in-field access is not standard or straightforward. The increasing availability of mobile internet and GPS-supported mobile phones, however, now provides the basis for addressing this issue. The SPACER project was commissioned by the Rapid Innovation initiative of the UK Joint Information Systems Committee (JISC) to explore the potential for GPS-enabled mobile phones to access geoscientific metadata collections. Metadata collections within the geosciences and the wider geospatial domain can be disseminated through web services based on the Catalogue Service for the Web (CSW) standard of the Open Geospatial Consortium (OGC), a global grouping of over 380 private, public and academic organisations aiming to improve interoperability between geospatial technologies. CSW offers an XML-over-HTTP interface for querying and retrieval of geospatial metadata. By default, the metadata returned by CSW is based on the ISO 19115 standard and encoded in XML conformant to ISO 19139. The SPACER project has created a prototype application that enables mobile phones to send queries to CSW containing user-defined keywords and coordinates acquired from GPS devices built into the phones. The prototype has been developed using the free and open source Google Android platform. The mobile application offers views for listing titles, presenting multiple metadata elements and a Google Map with an overlay of the bounding coordinates of datasets. The presentation will describe the architecture and approach applied in the development of the prototype.
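
    A hedged sketch of the kind of CSW GetRecords request such a prototype might issue, combining a free-text keyword with a bounding box derived from a GPS fix; the catalogue URL is a placeholder and CQL constraint spellings vary between CSW servers:

      from urllib.parse import urlencode

      # Placeholder catalogue endpoint; any OGC CSW 2.0.2 service could stand in.
      CSW_URL = "https://catalogue.example.ac.uk/csw"

      def build_getrecords_url(keyword, lat, lon, radius_deg=0.1):
          """Build a KVP GetRecords URL filtering on a keyword and a small
          bounding box centred on the supplied GPS coordinates."""
          bbox = (lon - radius_deg, lat - radius_deg, lon + radius_deg, lat + radius_deg)
          constraint = (f"AnyText LIKE '%{keyword}%' AND "
                        f"BBOX(ows:BoundingBox, {bbox[0]}, {bbox[1]}, {bbox[2]}, {bbox[3]})")
          params = {
              "service": "CSW", "version": "2.0.2", "request": "GetRecords",
              "typeNames": "csw:Record", "resultType": "results",
              "elementSetName": "summary",
              "constraintLanguage": "CQL_TEXT",
              "constraint_language_version": "1.1.0",
              "constraint": constraint,
          }
          return f"{CSW_URL}?{urlencode(params)}"

      print(build_getrecords_url("borehole", 52.95, -1.15))  # example GPS fix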

  11. Pathogenic Variation of Phakopsora pachyrhizi Infecting Soybean in Nigeria

    USDA-ARS?s Scientific Manuscript database

    Soybean rust is an important disease in Nigeria and many other soybean-producing countries world-wide. To determine the geographical distribution of soybean rust in Nigeria, soybean fields were surveyed in the Derived Savanna, Northern Guinea Savanna, and Southern Guinea Savanna agroecological zones...

  12. Mississippi CaP HBCU Undergraduate Research Training Program

    DTIC Science & Technology

    2017-11-01

    Final report for award W81XWH-14-1-0151, Mississippi CaP HBCU Undergraduate Research Training Program; principal investigator: Christian Gomez. Prepared for the U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012.

  13. Mississippi forest industries, 1972

    Treesearch

    Daniel F. Bertelson

    1973-01-01

    Mississippi forests supplied more than 559 million cubic feet of roundwood to forest industries in 1972. Pulpwood and saw logs were the major products, accounting for 85 percent of the harvest. A total of 315 primary wood-using plants were in operation in 1972.

  14. Mississippi State Briefing Book for low-level radioactive waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Mississippi State Briefing Book is one of a series of state briefing books on low-level radioactive waste management practices. It has been prepared to assist state and federal agency officials in planning for safe low-level radioactive waste disposal. The report contains a profile of low-level radioactive waste generators in Mississippi. The profile is the result of a survey of NRC licensees in Mississippi. The briefing book also contains a comprehensive assessment of low-level radioactive waste management issues and concerns as defined by all major interested parties, including industry, government, the media, and interest groups. The assessment was developed through personal communications with representatives of interested parties and through a review of media sources. Lastly, the briefing book provides demographic and socioeconomic data and a discussion of relevant government agencies and activities, all of which may impact waste management practices in Mississippi.

  15. SOYBEAN.DEFOLIATION.2.SD.2011

    USDA-ARS?s Scientific Manuscript database

    Several types of chewing insects feed upon soybean plants, and their infestations may be economically significant in some years in the north-central United States. Soybean lines that are resistant to defoliation may be useful in the management of chewing insect pests. Levels of defoliation from c...

  16. Archive of Digitized Analog Boomer and Minisparker Seismic Reflection Data Collected from the Alabama-Mississippi-Louisiana Shelf During Cruises Onboard the R/V Carancahua and R/V Gyre, April and July, 1981

    USGS Publications Warehouse

    Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.

    2009-01-01

    In April and July of 1981, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework of the Alabama-Mississippi-Louisiana Shelf in the northern Gulf of Mexico. Work was conducted onboard the Texas A&M University R/V Carancahua and the R/V Gyre to develop a geologic understanding of the study area and to locate potential hazards related to offshore oil and gas production. While the R/V Carancahua only collected boomer data, the R/V Gyre used a 400-Joule minisparker, 3.5-kilohertz (kHz) subbottom profiler, 12-kHz precision depth recorder, and two air guns. The authors selected the minisparker data set because, unlike with the boomer data, it provided the most complete record. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). This report serves as an archive of high-resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer and minisparker paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata.

  17. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
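
    The toy example below illustrates the general idea of indexing and searching file metadata; it is not Spyglass and does not reproduce its storage-aware design, only the kind of query such a system answers:

      import os
      import sqlite3
      import time

      def build_index(root, db_path=":memory:"):
          """Walk a directory tree and record basic per-file metadata in SQLite."""
          con = sqlite3.connect(db_path)
          con.execute("CREATE TABLE IF NOT EXISTS files "
                      "(path TEXT PRIMARY KEY, size INTEGER, mtime REAL, owner INTEGER)")
          for dirpath, _dirs, names in os.walk(root):
              for name in names:
                  full = os.path.join(dirpath, name)
                  try:
                      st = os.stat(full)
                  except OSError:
                      continue  # skip files that vanish or are unreadable
                  con.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                              (full, st.st_size, st.st_mtime, st.st_uid))
          con.commit()
          return con

      def large_recent_files(con, min_bytes, days):
          """Example administrator query: big files modified in the last N days."""
          cutoff = time.time() - days * 86400
          return con.execute("SELECT path, size FROM files "
                             "WHERE size >= ? AND mtime >= ? ORDER BY size DESC",
                             (min_bytes, cutoff)).fetchall()

      con = build_index(".")
      print(large_recent_files(con, 1024 * 1024, 30))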

  18. Utilizing Linked Open Data Sources for Automatic Generation of Semantic Metadata

    NASA Astrophysics Data System (ADS)

    Nummiaho, Antti; Vainikainen, Sari; Melin, Magnus

    In this paper we present an application that can be used to automatically generate semantic metadata for tags given as simple keywords. The application, which we have implemented in the Java programming language, creates the semantic metadata by linking the tags to concepts in different semantic knowledge bases (CrunchBase, DBpedia, Freebase, KOKO, Opencyc, Umbel and/or WordNet). The steps that our application takes in doing so include detecting possible languages, finding spelling suggestions, and finding meanings amongst proper nouns and common nouns separately. Currently, our application supports English, Finnish and Swedish words, but other languages could be included easily if the required lexical tools (spellcheckers, etc.) are available. The created semantic metadata can be of great use in, e.g., finding and combining similar contents, creating recommendations and targeting advertisements.
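
    A simplified sketch of the tag-linking pipeline described above; the knowledge-base lookups are stubbed with a small in-memory mapping standing in for clients of DBpedia, WordNet, and the other sources named in the abstract:

      # Stub concept index; a real implementation would query the knowledge bases.
      KNOWN_CONCEPTS = {
          "jaguar": ["http://dbpedia.org/resource/Jaguar",
                     "http://dbpedia.org/resource/Jaguar_Cars"],
          "helsinki": ["http://dbpedia.org/resource/Helsinki"],
      }

      def detect_language(tag):
          """Stub: a real system would run a language detector over the tag."""
          return "en"

      def suggest_spelling(tag):
          """Stub: a real system would consult a spellchecker for the language."""
          return tag.strip().lower()

      def link_tag(tag):
          """Return candidate concept URIs for a free-text tag."""
          language = detect_language(tag)
          normalized = suggest_spelling(tag)
          return {"tag": tag, "language": language,
                  "concepts": KNOWN_CONCEPTS.get(normalized, [])}

      print(link_tag("Jaguar"))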

  19. Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Hook, Leslie A; Killeffer, Terri S

    The Online Metadata Editor (OME) is a web-based tool to help document scientific data in a well-structured, popular scientific metadata format. In this paper, we will discuss the newest tool that Oak Ridge National Laboratory (ORNL) has developed to generate, edit, and manage metadata and how it is helping data-intensive science centers and projects, such as the U.S. Department of Energy's Next Generation Ecosystem Experiments (NGEE) in the Arctic, to prepare metadata and make their big data produce big science and lead to new discoveries.

  20. Perplexing Federal Cases from Mississippi: Lessons for School Administrators

    ERIC Educational Resources Information Center

    Ratliff, Lindon J.

    2010-01-01

    Federal court cases are examined in an effort to view recent First Amendment rights infringements which have occurred in Mississippi. Case law reinforces students' rights to wear same-sex outfits to school functions as well as to bring same-sex dates. Connection to a recent civil rights investigation by the NAACP into a north Mississippi middle…