Sample records for large spatial databases

  1. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement, owing to the contradictions between large-scale spatial data and limited network bandwidth and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). Besides, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  2. Architectural Implications for Spatial Object Association Algorithms*

    PubMed Central

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
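
    The abstract does not spell out the crossmatch algorithms themselves, so the following is only a minimal, self-contained sketch of the core operation being benchmarked: matching two catalogs by position using a declination-zone pre-filter, a simplification of the zone-style pruning such database joins rely on. The 1-arcsecond tolerance, the toy catalogs, and all names are illustrative, not taken from the paper.

```python
import numpy as np
from collections import defaultdict

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine form)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    a = (np.sin((dec2 - dec1) / 2) ** 2
         + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2) ** 2)
    return np.degrees(2 * np.arcsin(np.sqrt(a)))

def crossmatch(cat_a, cat_b, radius_deg=1.0 / 3600):
    """Match each (id, ra, dec) in cat_a to cat_b objects within radius_deg.

    cat_b is bucketed into declination zones of height radius_deg so that only
    objects in the same or adjacent zones need to be compared, mimicking the
    zone-style pruning used in database crossmatch joins.
    (RA wrap-around at 0/360 deg is ignored for brevity.)
    """
    zones = defaultdict(list)
    for oid, ra, dec in cat_b:
        zones[int(dec // radius_deg)].append((oid, ra, dec))

    matches = []
    for aid, ra, dec in cat_a:
        z = int(dec // radius_deg)
        for zz in (z - 1, z, z + 1):                 # same and adjacent zones
            for bid, rb, db in zones.get(zz, ()):
                if angular_sep_deg(ra, dec, rb, db) <= radius_deg:
                    matches.append((aid, bid))
    return matches

# Tiny example: two catalogs with one pair roughly 0.5 arcsec apart.
cat_a = [(1, 10.0000, -5.0000), (2, 40.0, 20.0)]
cat_b = [(7, 10.0001, -5.0001), (8, 300.0, 60.0)]
print(crossmatch(cat_a, cat_b))   # -> [(1, 7)]
```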

  3. Architectural Implications for Spatial Object Association Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, V S; Kurc, T; Saltz, J

    2009-01-29

    Spatial object association, also referred to as cross-match of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST).

  4. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological database in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
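
    As a rough illustration of the two-step method (not the authors' code), the pandas sketch below builds a mapping table from hypothetical transition tables and then validates it by comparing a covariate (birth counts) aggregated to the final spatial units; all unit codes, column names, and figures are invented.

```python
import pandas as pd

# Hypothetical transition tables: original spatial units -> final spatial units.
med_to_final = pd.DataFrame({"med_unit":  ["M1", "M2", "M3"],
                             "final_unit": ["F1", "F1", "F2"]})
eco_to_final = pd.DataFrame({"eco_unit":  ["E1", "E2", "E3", "E4"],
                             "final_unit": ["F1", "F1", "F2", "F2"]})

# Step 1: the mapping table is the cross-reference of the two transitions.
mapping = med_to_final.merge(eco_to_final, on="final_unit")

# Step 2: validate by comparing a covariate present in both source databases,
# e.g. number of births, aggregated to the final spatial units.
births_med = pd.DataFrame({"med_unit": ["M1", "M2", "M3"], "births": [120, 80, 60]})
births_eco = pd.DataFrame({"eco_unit": ["E1", "E2", "E3", "E4"], "births": [70, 128, 30, 29]})

med_f = births_med.merge(med_to_final, on="med_unit").groupby("final_unit")["births"].sum()
eco_f = births_eco.merge(eco_to_final, on="eco_unit").groupby("final_unit")["births"].sum()

rel_diff = ((med_f - eco_f).abs() / med_f * 100).rename("relative_difference_%")
print(pd.concat([med_f.rename("births_medical"),
                 eco_f.rename("births_ecological"), rel_diff], axis=1))
```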

  5. Development of a GIService based on spatial data mining for location choice of convenience stores in Taipei City

    NASA Astrophysics Data System (ADS)

    Jung, Chinte; Sun, Chih-Hong

    2006-10-01

    Motivated by the increasing accessibility of technology, more and more spatial data are being made digitally available. How to extract valuable knowledge from these large (spatial) databases is becoming increasingly important to businesses as well. It is essential to be able to analyze and utilize these large datasets, convert them into useful knowledge, and transmit them through GIS-enabled instruments and the Internet, conveying the key information to business decision-makers effectively and benefiting business entities. In this research, we combine the techniques of GIS, spatial decision support system (SDSS), spatial data mining (SDM), and ArcGIS Server to achieve the following goals: (1) integrate databases from spatial and non-spatial datasets about the locations of businesses in Taipei, Taiwan; (2) use association rules, one of the SDM methods, to extract knowledge from the integrated databases; and (3) develop a Web-based SDSS GIService as a location-selection tool for businesses, built with ArcGIS Server.
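
    To make the association-rule step concrete, here is a minimal sketch that computes support, confidence, and lift for a candidate location rule over a hypothetical one-hot attribute table; the attribute names and values are invented, and this is not the software or data the authors used.

```python
import pandas as pd

# Hypothetical one-hot table: each row is a candidate store location (grid cell),
# each column a spatial or non-spatial attribute derived from the integrated GIS layers.
cells = pd.DataFrame({
    "near_mrt_station":  [1, 1, 0, 1, 1, 0, 1, 0],
    "high_foot_traffic": [1, 1, 0, 1, 0, 0, 1, 0],
    "near_school":       [0, 1, 1, 1, 0, 1, 1, 0],
    "store_succeeded":   [1, 1, 0, 1, 0, 0, 1, 0],
}).astype(bool)

def rule_stats(df, antecedents, consequent):
    """Support, confidence and lift of the rule {antecedents} -> {consequent}."""
    lhs = df[antecedents].all(axis=1)
    both = lhs & df[consequent]
    support = both.mean()
    confidence = both.sum() / lhs.sum()
    lift = confidence / df[consequent].mean()
    return support, confidence, lift

print(rule_stats(cells, ["near_mrt_station", "high_foot_traffic"], "store_succeeded"))
# -> (0.5, 1.0, 2.0): the rule covers half the cells and always holds when its LHS holds.
```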

  6. Pattern-based, multi-scale segmentation and regionalization of EOSD land cover

    NASA Astrophysics Data System (ADS)

    Niesterowicz, Jacek; Stepinski, Tomasz F.

    2017-10-01

    The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution, the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization of EOSD as new methods for analyzing EOSD on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; we delineated from 727 to 91,885 FLUs within the spatial extent of EOSD depending on the selected pattern scale. The pattern of EOSD categories within each FLU is described by 1037 landscape metrics. A shapefile containing the boundaries of all FLUs, together with an attribute table listing the landscape metrics, makes up an SQL-searchable spatial database providing detailed information on the composition and pattern of land cover types in Canadian forests. The shapefile format and the extensive attribute table pertaining to the entire EOSD legend are designed to facilitate a broad range of investigations in which assessment of the composition and pattern of forest over large areas is needed. We calculated four such databases using different spatial scales of pattern. We illustrate the use of the FLU database for producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad-scale variability of forest at the spatial scale of the entire province. We also demonstrate how the FLU database can be used to map the variability of landscape metrics, and thus the character of landscape, across all of Canada.
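
    A sketch of the kind of attribute-table query the FLU database is meant to support, using geopandas; the file names and metric column names below are placeholders, not the actual fields shipped with the product.

```python
import geopandas as gpd

# Hypothetical file name: one of the four multi-scale FLU databases described.
flu = gpd.read_file("eosd_flu_scale2.shp")

# Attribute-table query: forest land units dominated by coniferous cover with
# high pattern diversity (column names are stand-ins for the landscape metrics
# actually listed in the attribute table).
dense_diverse = flu[(flu["PLAND_CONIF"] > 60) & (flu["SHDI"] > 1.2)]

# Clip to one province and write a layer ready for regionalization mapping.
quebec = gpd.read_file("provinces.shp").query("NAME == 'Quebec'")
dense_diverse_qc = gpd.overlay(dense_diverse, quebec, how="intersection")
dense_diverse_qc.to_file("flu_quebec_selection.shp")
print(len(dense_diverse_qc), "forest land units selected")
```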

  7. Data Representations for Geographic Information Systems.

    ERIC Educational Resources Information Center

    Shaffer, Clifford A.

    1992-01-01

    Surveys the field and literature of geographic information systems (GIS) and spatial data representation as it relates to GIS. Highlights include GIS terms, data types, and operations; vector representations and raster, or grid, representations; spatial indexing; elevation data representations; large spatial databases; and problem areas and future…

  8. Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2016-01-01

    In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080

  9. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    PubMed Central

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. 
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
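
    As an illustration of the spatial-join comparison queries described (not the PAIS schema or the parallel DBMS actually used), the sketch below runs a PostGIS-flavoured overlap query between algorithm results and human annotations via psycopg2; all table, column, and identifier names are hypothetical.

```python
import psycopg2

# Illustrative only: compares algorithm-segmented boundaries against human
# annotations for one image using an intersection-over-union style overlap ratio.
SQL = """
SELECT a.object_id,
       h.object_id,
       ST_Area(ST_Intersection(a.geom, h.geom)) /
       ST_Area(ST_Union(a.geom, h.geom)) AS overlap_ratio
FROM   algo_results  a
JOIN   human_markups h
  ON   a.image_id = h.image_id
 AND   a.geom && h.geom                      -- index-assisted bounding-box filter
 AND   ST_Intersects(a.geom, h.geom)         -- exact spatial predicate
WHERE  a.image_id = %s
  AND  ST_Area(ST_Intersection(a.geom, h.geom)) /
       ST_Area(ST_Union(a.geom, h.geom)) > %s;
"""

with psycopg2.connect("dbname=pathology") as conn, conn.cursor() as cur:
    cur.execute(SQL, ("slide_0001", 0.5))    # hypothetical image id, 50% overlap cut
    for algo_id, human_id, ratio in cur.fetchall():
        print(algo_id, human_id, round(ratio, 3))
```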

  10. An Evaluation of Database Solutions to Spatial Object Association

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, V S; Kurc, T; Saltz, J

    2008-06-24

    Object association is a common problem encountered in many applications. Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two datasets based on their positions in a common spatial coordinate system--one of the datasets may correspond to a catalog of objects observed over time in a multi-dimensional domain; the other dataset may consist of objects observed in a snapshot of the domain at a time point. The use of database management systems to solve the object association problem provides portability across different platforms and also greater flexibility. Increasing dataset sizes in today's applications, however, have made object association a data/compute-intensive problem that requires targeted optimizations for efficient execution. In this work, we investigate how database-based crossmatch algorithms can be deployed on different database system architectures and evaluate the deployments to understand the impact of architectural choices on crossmatch performance and associated trade-offs. We investigate the execution of two crossmatch algorithms on (1) a parallel database system with active disk style processing capabilities, (2) a high-throughput network database (MySQL Cluster), and (3) shared-nothing databases with replication. We have conducted our study in the context of a large-scale astronomy application with real use-case scenarios.

  11. Distribution of late Pleistocene ice-rich syngenetic permafrost of the Yedoma Suite in east and central Siberia, Russia

    USGS Publications Warehouse

    Grosse, Guido; Robinson, Joel E.; Bryant, Robin; Taylor, Maxwell D.; Harper, William; DeMasi, Amy; Kyker-Snowman, Emily; Veremeeva, Alexandra; Schirrmeister, Lutz; Harden, Jennifer

    2013-01-01

    This digital database is the product of collaboration between the U.S. Geological Survey, the Geophysical Institute at the University of Alaska, Fairbanks; the Los Altos Hills Foothill College GeoSpatial Technology Certificate Program; the Alfred Wegener Institute for Polar and Marine Research, Potsdam, Germany; and the Institute of Physical Chemical and Biological Problems in Soil Science of the Russian Academy of Sciences. The primary goal for creating this digital database is to enhance current estimates of soil organic carbon stored in deep permafrost, in particular the late Pleistocene syngenetic ice-rich permafrost deposits of the Yedoma Suite. Previous studies estimated that Yedoma deposits cover about 1 million square kilometers of a large region in central and eastern Siberia, but these estimates generally are based on maps with scales smaller than 1:10,000,000. Taking into account this large area, it was estimated that Yedoma may store as much as 500 petagrams of soil organic carbon, a large part of which is vulnerable to thaw and mobilization from thermokarst and erosion. To refine assessments of the spatial distribution of Yedoma deposits, we digitized 11 Russian Quaternary geologic maps. Our study focused on extracting geologic units interpreted by us as late Pleistocene ice-rich syngenetic Yedoma deposits based on lithology, ground ice conditions, stratigraphy, and geomorphological and spatial association. These Yedoma units then were merged into a single data layer across map tiles. The spatial database provides a useful update of the spatial distribution of this deposit for an approximately 2.32 million square kilometers land area in Siberia that will (1) serve as a core database for future refinements of Yedoma distribution in additional regions, and (2) provide a starting point to revise the size of deep but thaw-vulnerable permafrost carbon pools in the Arctic based on surface geology and the distribution of cryolithofacies types at high spatial resolution. However, we recognize that the extent of Yedoma deposits presented in this database is not complete for a global assessment, because Yedoma deposits also occur in the Taymyr lowlands and Chukotka, and in parts of Alaska and northwestern Canada.

  12. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows queries on the attributes of the document. As a specialty, MongoDB also allows spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to native file system and cloud storage transfer speeds.
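
    A minimal pymongo sketch of the file-centric pattern described: one metadata document per tile with a 2dsphere-indexed footprint, and the raw file kept in GridFS, which plays the role of the emulated distributed file store. Field names and the GeoJSON bounding-box representation are assumptions, not the authors' schema.

```python
import gridfs
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
db = client.lidar
fs = gridfs.GridFS(db)          # GridFS emulates the distributed file store
tiles = db.tiles                # one document per LAS/LAZ tile (metadata only)
tiles.create_index([("bbox", GEOSPHERE)])

def ingest_tile(path, min_lon, min_lat, max_lon, max_lat, point_count):
    """Store the raw tile in GridFS and its metadata, incl. footprint, as a document."""
    with open(path, "rb") as f:
        file_id = fs.put(f, filename=path)
    tiles.insert_one({
        "filename": path,
        "gridfs_id": file_id,
        "point_count": point_count,
        "bbox": {  # GeoJSON polygon of the tile footprint (lon/lat order)
            "type": "Polygon",
            "coordinates": [[[min_lon, min_lat], [max_lon, min_lat],
                             [max_lon, max_lat], [min_lon, max_lat],
                             [min_lon, min_lat]]],
        },
    })

def tiles_intersecting(query_polygon):
    """Spatial query on the tile bounding boxes; returns matching metadata documents."""
    return tiles.find({"bbox": {"$geoIntersects": {"$geometry": query_polygon}}})
```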

  13. Rasdaman for Big Spatial Raster Data

    NASA Astrophysics Data System (ADS)

    Hu, F.; Huang, Q.; Scheele, C. J.; Yang, C. P.; Yu, M.; Liu, K.

    2015-12-01

    Spatial raster data have grown exponentially over the past decade. Recent advancements in data acquisition technology, such as remote sensing, have allowed us to collect massive observation data of various spatial resolutions and domain coverage. The volume, velocity, and variety of such spatial data, along with the computationally intensive nature of spatial queries, pose a grand challenge to storage technologies for effective big data management. While high performance computing platforms (e.g., cloud computing) can be used to solve the computing-intensive issues in big data analysis, data has to be managed in a way that is suitable for distributed parallel processing. Recently, rasdaman (raster data manager) has emerged as a scalable and cost-effective database solution to store and retrieve massive multi-dimensional arrays, such as sensor, image, and statistics data. Within this paper, the pros and cons of using rasdaman to manage and query spatial raster data will be examined and compared with other common approaches, including file-based systems, relational databases (e.g., PostgreSQL/PostGIS), and NoSQL databases (e.g., MongoDB and Hive). Earth Observing System (EOS) data collected from NASA's Atmospheric Science Data Center (ASDC) will be used and stored in these selected database systems, and a set of spatial and non-spatial queries will be designed to benchmark their performance on retrieving large-scale, multi-dimensional arrays of EOS data. Lessons learnt from using rasdaman will be discussed as well.

  14. An intelligent user interface for browsing satellite data catalogs

    NASA Technical Reports Server (NTRS)

    Cromp, Robert F.; Crook, Sharon

    1989-01-01

    A large scale domain-independent spatial data management expert system that serves as a front-end to databases containing spatial data is described. This system is unique for two reasons. First, it uses spatial search techniques to generate a list of all the primary keys that fall within a user's spatial constraints prior to invoking the database management system, thus substantially decreasing the amount of time required to answer a user's query. Second, a domain-independent query expert system uses a domain-specific rule base to preprocess the user's English query, effectively mapping a broad class of queries into a smaller subset that can be handled by a commercial natural language processing system. The methods used by the spatial search module and the query expert system are explained, and the system architecture for the spatial data management expert system is described. The system is applied to data from the International Ultraviolet Explorer (IUE) satellite, and results are given.
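
    The two-step idea (spatial search first, then a DBMS query restricted to the returned primary keys) can be sketched as follows with an in-memory R-tree and SQLite; the observation table and footprints are invented and this is not the IUE system's code.

```python
import sqlite3
from rtree import index   # in-memory R-tree used as the spatial pre-filter

# Hypothetical observation footprints: primary key -> bbox (ra_min, dec_min, ra_max, dec_max).
footprints = {
    101: (10.0, -5.0, 10.5, -4.5),
    102: (10.4, -4.6, 10.9, -4.1),
    103: (250.0, 30.0, 250.5, 30.5),
}

spatial_idx = index.Index()
for pk, bbox in footprints.items():
    spatial_idx.insert(pk, bbox)

def query(region, sql_filter, conn):
    """Step 1: the spatial search returns candidate primary keys.
    Step 2: only those keys are handed to the DBMS, shrinking the query.
    sql_filter is trusted here; a real system would validate the generated clause."""
    keys = list(spatial_idx.intersection(region))
    if not keys:
        return []
    placeholders = ",".join("?" * len(keys))
    return conn.execute(
        f"SELECT id, target, exposure FROM observations "
        f"WHERE id IN ({placeholders}) AND {sql_filter}", keys).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (id INTEGER PRIMARY KEY, target TEXT, exposure REAL)")
conn.executemany("INSERT INTO observations VALUES (?, ?, ?)",
                 [(101, "HD 1234", 300.0), (102, "HD 1234", 600.0), (103, "NGC 7027", 120.0)])
print(query((10.0, -5.0, 11.0, -4.0), "exposure > 200", conn))
```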

  15. Providing R-Tree Support for Mongodb

    NASA Astrophysics Data System (ADS)

    Xiang, Longgang; Shao, Xiaotian; Wang, Dehao

    2016-06-01

    Supporting large amounts of spatial data is a significant characteristic of modern databases. However, unlike some mature relational databases, such as Oracle and PostgreSQL, most current burgeoning NoSQL databases are not well designed for storing geospatial data, which is becoming increasingly important in various fields. In this paper, we propose a novel method to provide an R-tree index, as well as corresponding spatial range query and nearest neighbour query functions, for MongoDB, one of the most prevalent NoSQL databases. First, after an in-depth analysis of MongoDB's features, we devise an efficient tabular document structure which flattens the R-tree index into MongoDB collections. Further, the relevant mechanisms for R-tree operations are presented, and we then discuss in detail how to integrate the R-tree into MongoDB. Finally, we present experimental results which show that our proposed method outperforms the built-in spatial index of MongoDB. Our research will greatly facilitate big data management with MongoDB in a variety of geospatial information applications.
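
    The abstract does not give the flattened document layout, so the sketch below only guesses at a minimal structure: one MongoDB document per R-tree node with embedded child MBRs, plus a range search that descends from the root. It illustrates the idea, not the paper's actual design.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").rtree_demo
nodes = db.rtree_nodes    # one document per R-tree node; hypothetical layout

def mbr_overlaps(a, b):
    """Axis-aligned overlap test for MBRs stored as [xmin, ymin, xmax, ymax]."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def range_search(node_id, query_mbr, hits):
    """Descend the flattened R-tree, following only children whose MBR overlaps the query."""
    node = nodes.find_one({"_id": node_id})
    for child in node["children"]:
        if not mbr_overlaps(child["mbr"], query_mbr):
            continue
        if node["is_leaf"]:
            hits.append(child["object_id"])          # leaf entries point at spatial objects
        else:
            range_search(child["node_id"], query_mbr, hits)
    return hits

# Two-level example tree: a root with one leaf child holding two objects.
nodes.delete_many({})
nodes.insert_many([
    {"_id": "root", "is_leaf": False,
     "children": [{"mbr": [0, 0, 10, 10], "node_id": "leaf1"}]},
    {"_id": "leaf1", "is_leaf": True,
     "children": [{"mbr": [1, 1, 2, 2], "object_id": "roadA"},
                  {"mbr": [8, 8, 9, 9], "object_id": "parkB"}]},
])
print(range_search("root", [0, 0, 3, 3], []))   # -> ['roadA']
```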

  16. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth- and ground-based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is employed by the PostgreSQL database. This object-oriented database was chosen over a relational database to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. Because the spatial database is less suited to processing raster data, a "work-around" was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO 19115 standard. XML-structured information for the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
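
    A sketch of the "all data in a specific administrative unit belonging to a specific theme" retrieval the data model is built for, written against an invented PostGIS schema via psycopg2; table names, column names, and example values are placeholders.

```python
import psycopg2

# Hypothetical tables: datasets (with PostGIS footprints), themes, admin_units,
# and a tagging table linking datasets to their thematic reference objects.
SQL = """
SELECT d.dataset_id, d.title, d.acquired_on
FROM   datasets d
JOIN   dataset_tags t ON t.dataset_id = d.dataset_id
JOIN   themes th      ON th.theme_id = t.theme_id
JOIN   admin_units au ON ST_Intersects(d.footprint, au.geom)
WHERE  th.name = %s            -- e.g. 'water management'
  AND  au.name = %s            -- e.g. a province name
ORDER  BY d.acquired_on;
"""

with psycopg2.connect("dbname=wisdom") as conn, conn.cursor() as cur:
    cur.execute(SQL, ("water management", "Can Tho"))
    for dataset_id, title, acquired_on in cur.fetchall():
        print(dataset_id, title, acquired_on)
```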

  17. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on the large-scale eye tracking database we built and on one other database (DML-ITRACK-3D).
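
    The abstract does not state the fusion formula; a generic uncertainty-weighted combination of the two saliency maps, of the kind described, could be written as follows (illustrative only, not the authors' exact model):

```latex
S(x,t) = w_s(x,t)\, S_{\text{spatial}}(x,t) + w_t(x,t)\, S_{\text{temporal}}(x,t),
\qquad
w_s = \frac{1/\sigma_s^2}{1/\sigma_s^2 + 1/\sigma_t^2},
\qquad
w_t = 1 - w_s,
```

    where σ_s and σ_t stand for the uncertainties of the spatial and temporal estimates, here informed by the Gestalt proximity, continuity, and common-fate cues.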

  18. A geo-spatial data management system for potentially active volcanoes—GEOWARN project

    NASA Astrophysics Data System (ADS)

    Gogu, Radu C.; Dietrich, Volker J.; Jenny, Bernhard; Schwandner, Florian M.; Hurni, Lorenz

    2006-02-01

    Integrated studies of active volcanic systems for the purpose of long-term monitoring and forecast and short-term eruption prediction require large numbers of data-sets from various disciplines. A modern database concept has been developed for managing and analyzing multi-disciplinary volcanological data-sets. The GEOWARN project (choosing the "Kos-Yali-Nisyros-Tilos volcanic field, Greece" and the "Campi Flegrei, Italy" as test sites) is oriented toward potentially active volcanoes situated in regions of high geodynamic unrest. This article describes the volcanological database of the spatial and temporal data acquired within the GEOWARN project. As a first step, a spatial database embedded in a Geographic Information System (GIS) environment was created. Digital data of different spatial resolution, and time-series data collected at different intervals or periods, were unified in a common, four-dimensional representation of space and time. The database scheme comprises various information layers containing geographic data (e.g. seafloor and land digital elevation model, satellite imagery, anthropogenic structures, land-use), geophysical data (e.g. from active and passive seismicity, gravity, tomography, SAR interferometry, thermal imagery, differential GPS), geological data (e.g. lithology, structural geology, oceanography), and geochemical data (e.g. from hydrothermal fluid chemistry and diffuse degassing features). As a second step based on the presented database, spatial data analysis has been performed using custom-programmed interfaces that execute query scripts resulting in a graphical visualization of data. These query tools were designed and compiled following scenarios of known "behavior" patterns of dormant volcanoes and first candidate signs of potential unrest. The spatial database and query approach is intended to facilitate scientific research on volcanic processes and phenomena, and volcanic surveillance.

  19. Integration and management of massive remote-sensing data based on GeoSOT subdivision model

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Cheng, Chengqi; Chen, Bo; Meng, Li

    2016-07-01

    Owing to the rapid development of earth observation technology, the volume of spatial information is growing rapidly; therefore, improving query retrieval speed from large, rich data sources for remote-sensing data management systems is quite urgent. A global subdivision model, the geographic coordinate subdivision grid with one-dimension integer coding on a 2ⁿ-tree (GeoSOT), which we propose as a solution, has been used in data management organizations. However, because a spatial object may cover several grids, substantial data redundancy will occur when data are stored in relational databases. To solve this redundancy problem, we first combined the subdivision model with the spatial array database containing the inverted index. We proposed an improved approach for integrating and managing massive remote-sensing data. By adding a spatial code column in an array format in a database, spatial information in remote-sensing metadata can be stored and logically subdivided. We implemented our method in a Kingbase Enterprise Server database system and compared the results with the Oracle platform by simulating worldwide image data. Experimental results showed that our approach performed better than Oracle in terms of data integration and time and space efficiency. Our approach also offers an efficient storage management system for existing storage centers and management systems.
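
    A rough PostgreSQL/psycopg2 sketch of the storage pattern described: the grid-cell codes covering each scene footprint are kept in an array column so that spatial retrieval becomes an index-assisted array-overlap test. A toy fixed-level lat/lon grid stands in for real GeoSOT codes, and the schema is invented (the authors used a Kingbase server).

```python
import psycopg2

LEVEL_CELLS = 512  # toy grid: 512 x 512 cells over the globe (stand-in for GeoSOT codes)

def grid_codes(min_lon, min_lat, max_lon, max_lat):
    """All cell codes of the toy grid covered by a footprint bounding box."""
    def col(lon): return int((lon + 180.0) / 360.0 * LEVEL_CELLS)
    def row(lat): return int((lat + 90.0) / 180.0 * LEVEL_CELLS)
    return [r * LEVEL_CELLS + c
            for r in range(row(min_lat), row(max_lat) + 1)
            for c in range(col(min_lon), col(max_lon) + 1)]

with psycopg2.connect("dbname=imagery") as conn, conn.cursor() as cur:
    cur.execute("""CREATE TABLE IF NOT EXISTS scenes (
                       scene_id text PRIMARY KEY,
                       acquired date,
                       cell_codes integer[])""")
    # A GIN index makes the array-overlap operator (&&) index-assisted.
    cur.execute("CREATE INDEX IF NOT EXISTS scenes_cells ON scenes USING gin (cell_codes)")
    cur.execute("INSERT INTO scenes VALUES (%s, %s, %s) ON CONFLICT DO NOTHING",
                ("SCENE_2016_001", "2016-07-01", grid_codes(114.0, 30.0, 116.0, 32.0)))
    # Retrieval: any scene whose code array overlaps the query region's codes.
    cur.execute("SELECT scene_id FROM scenes WHERE cell_codes && %s",
                (grid_codes(115.0, 31.0, 115.5, 31.5),))
    print(cur.fetchall())
```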

  20. PRAIRIEMAP: A GIS database for prairie grassland management in western North America

    USGS Publications Warehouse

    ,

    2003-01-01

    The USGS Forest and Rangeland Ecosystem Science Center, Snake River Field Station (SRFS) maintains a database of spatial information, called PRAIRIEMAP, which is needed to address the management of prairie grasslands in western North America. We identify and collect spatial data for the region encompassing the historical extent of prairie grasslands (Figure 1). State and federal agencies, the primary entities responsible for management of prairie grasslands, need this information to develop proactive management strategies to prevent prairie-grassland wildlife species from being listed as Endangered Species, or to develop appropriate responses if listing does occur. Spatial data are an important component in documenting current habitat and other environmental conditions, which can be used to identify areas that have undergone significant changes in land cover and to identify underlying causes. Spatial data will also be a critical component guiding the decision processes for restoration of habitat in the Great Plains. As such, the PRAIRIEMAP database will facilitate analyses of large-scale and range-wide factors that may be causing declines in grassland habitat and populations of species that depend on it for their survival. Therefore, development of a reliable spatial database carries multiple benefits for land and wildlife management. The project consists of 3 phases: (1) identify relevant spatial data, (2) assemble, document, and archive spatial data on a computer server, and (3) develop and maintain the web site (http://prairiemap.wr.usgs.gov) for query and transfer of GIS data to managers and researchers.

  1. A prototype system based on visual interactive SDM called VGC

    NASA Astrophysics Data System (ADS)

    Jia, Zelu; Liu, Yaolin; Liu, Yanfang

    2009-10-01

    In many application domains, data is collected and referenced by its geospatial location. Spatial data mining, the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and gaining deeper domain knowledge. In this paper, a visual interactive spatial data mining prototype system (Visual Geo-Classify, VGC) based on VC++ 6.0 and MapObjects 2.0 is designed and developed. The basic spatial data mining algorithms used are decision trees and Bayesian networks, and data classification is realized through training, learning, and the integration of the two. The results indicate that it is a practical and extensible visual interactive spatial data mining tool.
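
    The classification step can be illustrated with scikit-learn's decision tree on made-up parcel attributes; the original prototype is VC++/MapObjects and also couples in Bayesian networks, so this is only a sketch of the decision-tree half under assumed feature names.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training samples: attributes of land parcels derived from GIS layers.
# Columns: slope (deg), distance to road (m), distance to river (m)
X_train = [[ 2,  50, 400],
           [ 5, 120, 900],
           [25, 800, 150],
           [30, 950, 100],
           [ 3,  80, 600],
           [28, 700, 200]]
y_train = ["built-up", "built-up", "forest", "forest", "built-up", "forest"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(clf, feature_names=["slope", "dist_road", "dist_river"]))

# Classify an unlabelled parcel interactively selected on the map.
print(clf.predict([[4, 100, 500]]))   # -> ['built-up']
```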

  2. Advanced techniques for the storage and use of very large, heterogeneous spatial databases

    NASA Technical Reports Server (NTRS)

    Peuquet, Donna J.

    1987-01-01

    Progress is reported in the development of a prototype knowledge-based geographic information system. The overall purpose of this project is to investigate and demonstrate the use of advanced methods in order to greatly improve the capabilities of geographic information system technology in the handling of large, multi-source collections of spatial data in an efficient manner, and to make these collections of data more accessible and usable for the Earth scientist.

  3. Scaling up the diversity-resilience relationship with trait databases and remote sensing data: the recovery of productivity after wildfire.

    PubMed

    Spasojevic, Marko J; Bahlai, Christie A; Bradley, Bethany A; Butterfield, Bradley J; Tuanmu, Mao-Ning; Sistla, Seeta; Wiederholt, Ruscena; Suding, Katharine N

    2016-04-01

    Understanding the mechanisms underlying ecosystem resilience - why some systems have an irreversible response to disturbances while others recover - is critical for conserving biodiversity and ecosystem function in the face of global change. Despite the widespread acceptance of a positive relationship between biodiversity and resilience, empirical evidence for this relationship remains fairly limited in scope and localized in scale. Assessing resilience at the large landscape and regional scales most relevant to land management and conservation practices has been limited by the ability to measure both diversity and resilience over large spatial scales. Here, we combined tools used in large-scale studies of biodiversity (remote sensing and trait databases) with theoretical advances developed from small-scale experiments to ask whether the functional diversity within a range of woodland and forest ecosystems influences the recovery of productivity after wildfires across the four-corner region of the United States. We additionally asked how environmental variation (topography, macroclimate) across this geographic region influences such resilience, either directly or indirectly via changes in functional diversity. Using path analysis, we found that functional diversity in regeneration traits (fire tolerance, fire resistance, resprout ability) was a stronger predictor of the recovery of productivity after wildfire than the functional diversity of seed mass or species richness. Moreover, slope, elevation, and aspect either directly or indirectly influenced the recovery of productivity, likely via their effect on microclimate, while macroclimate had no direct or indirect effects. Our study provides some of the first direct empirical evidence for functional diversity increasing resilience at large spatial scales. Our approach highlights the power of combining theory based on local-scale studies with tools used in studies at large spatial scales and trait databases to understand pressing environmental issues. © 2015 John Wiley & Sons Ltd.

  4. Preliminary surficial geologic map of the Newberry Springs 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Phelps, G.A.; Bedford, D.R.; Lidke, D.J.; Miller, D.M.; Schmidt, K.M.

    2012-01-01

    The Newberry Springs 30' x 60' quadrangle is located in the central Mojave Desert of southern California. It is split approximately into northern and southern halves by I-40, with the city of Barstow at its western edge and the town of Ludlow near its eastern edge. The map area spans lat 34°30′ to 35° N. and long 116° to 117° W. and covers over 1,000 km2. We integrate the results of surficial geologic mapping conducted during 2002-2005 with compilations of previous surficial mapping and bedrock geologic mapping. Quaternary units are subdivided in detail on the map to distinguish variations in age, process of formation, pedogenesis, lithology, and spatial interdependency, whereas pre-Quaternary bedrock units are grouped into generalized assemblages that emphasize their attributes as hillslope-forming materials and sources of parent material for the Quaternary units. The spatial information in this publication is presented in two forms: a spatial database and a geologic map. The geologic map is a view (the display of an extracted subset of the database at a given time) of the spatial database; it highlights key aspects of the database and necessarily does not show all of the data contained therein. The database contains detailed information about Quaternary geologic unit composition, authorship, and notes regarding geologic units, faults, contacts, and local vegetation. The amount of information contained in the database is too large to show on a single map, so a restricted subset of the information was chosen to summarize the overall nature of the geology. Refer to the database for additional information. Accompanying the spatial data are the map documentation and spatial metadata. The map documentation (this document) describes the geologic setting and history of the Newberry Springs map sheet, summarizes the age and physical character of each map unit, and describes principal faults and folds. The Federal Geographic Data Committee (FGDC) compliant metadata provides detailed information about the digital files and file structure of the spatial data.

  5. Video quality pooling adaptive to perceptual distortion severity.

    PubMed

    Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad

    2013-02-01

    It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content-adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
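
    A minimal numpy sketch of the "worst scores" pooling idea (averaging only the lowest-scoring fraction of local scores, spatially and then temporally); the percentile parameters are arbitrary, and the paper's actual strategy additionally weights large-area cohesive motion.

```python
import numpy as np

def worst_fraction_pooling(local_scores, spatial_frac=0.2, temporal_frac=0.3):
    """Pool a (frames x blocks) map of local quality scores by emphasising the worst scores.

    Per frame, average the lowest `spatial_frac` of block scores; then average the
    lowest `temporal_frac` of the resulting frame scores. Smaller score = worse quality.
    """
    local_scores = np.asarray(local_scores, dtype=float)
    n_blocks = local_scores.shape[1]
    k_s = max(1, int(spatial_frac * n_blocks))
    frame_scores = np.sort(local_scores, axis=1)[:, :k_s].mean(axis=1)

    k_t = max(1, int(temporal_frac * len(frame_scores)))
    return np.sort(frame_scores)[:k_t].mean()

rng = np.random.default_rng(0)
scores = rng.uniform(0.6, 1.0, size=(120, 64))     # mostly good quality...
scores[40:45, :16] = 0.1                           # ...with a brief, localized distortion
print(round(worst_fraction_pooling(scores), 3))    # pulled down by the transient distortion
```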

  6. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    PubMed

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.

  7. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    USGS Publications Warehouse

    Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lotting, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.

  8. Historical reconstructions of California wildfires vary by data source

    USGS Publications Warehouse

    Syphard, Alexandra D.; Keeley, Jon E.

    2016-01-01

    Historical data are essential for understanding how fire activity responds to different drivers. It is important that the source of data is commensurate with the spatial and temporal scale of the question addressed, but fire history databases are derived from different sources with different restrictions. In California, a frequently used fire history dataset is the State of California Fire and Resource Assessment Program (FRAP) fire history database, which circumscribes fire perimeters at a relatively fine scale. It includes large fires on both state and federal lands but only covers fires that were mapped or had other spatially explicit data. A different database is the state and federal governments’ annual reports of all fires. They are more complete than the FRAP database but are only spatially explicit to the level of county (California Department of Forestry and Fire Protection – Cal Fire) or forest (United States Forest Service – USFS). We found substantial differences between the FRAP database and the annual summaries, with the largest and most consistent discrepancy being in fire frequency. The FRAP database missed the majority of fires and is thus a poor indicator of fire frequency or indicators of ignition sources. The FRAP database is also deficient in area burned, especially before 1950. Even in contemporary records, the huge number of smaller fires not included in the FRAP database account for substantial cumulative differences in area burned. Wildfires in California account for nearly half of the western United States fire suppression budget. Therefore, the conclusions about data discrepancies and the implications for fire research are of broad importance.

  9. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    NASA Astrophysics Data System (ADS)

    Miles, B.; Idaszak, R.

    2015-12-01

    Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.
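
    A sketch of the write path described (model output flushed asynchronously into an Apache Cassandra cluster) using the DataStax Python driver; the keyspace, table, and record layout are assumptions, not PatchDB's actual schema.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute("""CREATE KEYSPACE IF NOT EXISTS patchdb
                   WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""")
session.execute("""CREATE TABLE IF NOT EXISTS patchdb.patch_output (
                       patch_id int, variable text, ts timestamp, value double,
                       PRIMARY KEY ((patch_id, variable), ts))""")

insert = session.prepare(
    "INSERT INTO patchdb.patch_output (patch_id, variable, ts, value) VALUES (?, ?, ?, ?)")

def write_timestep(records):
    """Asynchronously write one time step's (patch_id, variable, ts, value) records."""
    futures = [session.execute_async(insert, rec) for rec in records]
    for f in futures:          # wait for acknowledgements before the next step
        f.result()

def point_timeseries(patch_id, variable):
    """Point time series of one variable at one patch, served straight from the cluster."""
    return list(session.execute(
        "SELECT ts, value FROM patchdb.patch_output WHERE patch_id = %s AND variable = %s",
        (patch_id, variable)))
```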

  10. U.S. Quaternary Fault and Fold Database Released

    NASA Astrophysics Data System (ADS)

    Haller, Kathleen M.; Machette, Michael N.; Dart, Richard L.; Rhea, B. Susan

    2004-06-01

    A comprehensive online compilation of Quaternary-age faults and folds throughout the United States was recently released by the U.S. Geological Survey, with cooperation from state geological surveys, academia, and the private sector. The Web site at http://Qfaults.cr.usgs.gov/ contains searchable databases and related geo-spatial data that characterize earthquake-related structures that could be potential seismic sources for large-magnitude (M > 6) earthquakes.

  11. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    NASA Astrophysics Data System (ADS)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study, the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database, we used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
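
    A minimal relational sketch, in SQLite for self-containment, of the kind of linkage the model describes (instruments, flights, and products at different processing levels tied together by date and footprint); the table layout and values are invented, not the project's SQL Server/ArcGIS geodatabase schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instruments (instrument_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE flights     (flight_id INTEGER PRIMARY KEY, flown_on DATE, site TEXT,
                          instrument_id INTEGER REFERENCES instruments(instrument_id));
CREATE TABLE products    (product_id INTEGER PRIMARY KEY,
                          flight_id INTEGER REFERENCES flights(flight_id),
                          level TEXT,           -- e.g. 'L0 digital numbers', 'L2 reflectance'
                          path TEXT,
                          min_lon REAL, min_lat REAL, max_lon REAL, max_lat REAL);
""")
conn.executemany("INSERT INTO instruments VALUES (?, ?)",
                 [(1, "sensor_A"), (2, "sensor_B")])       # hypothetical instrument names
conn.execute("INSERT INTO flights VALUES (1, '2016-06-21', 'pipeline corridor A', 1)")
conn.execute("INSERT INTO products VALUES (1, 1, 'L2 reflectance', '/data/f1_l2.tif', "
             "-75.9, 45.3, -75.7, 45.5)")

# All L2 reflectance products acquired by a given instrument, with their footprints.
for row in conn.execute("""SELECT p.path, f.flown_on, p.min_lon, p.min_lat, p.max_lon, p.max_lat
                           FROM products p JOIN flights f USING (flight_id)
                           JOIN instruments i USING (instrument_id)
                           WHERE i.name = ? AND p.level = 'L2 reflectance'""", ("sensor_A",)):
    print(row)
```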

  12. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows Server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone search radius, which was computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4; cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (14 arcmin) and higher is masked to some extent in the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade performance. Thus, for the use patterns studied here the database performance is not critically dependent on the exact choices of index or level.
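
    The query pattern studied (a cone position and radius resolved to index cells, then an index-assisted database filter, then an exact separation cut) can be sketched with healpy and SQLite as below, taking index level 8 to mean HEALPix nside = 2^8; the table layout and sources are invented.

```python
import math
import sqlite3
import healpy as hp

NSIDE = 2 ** 8      # index level 8 -> HEALPix nside = 256 (cells of roughly 14 arcmin)

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Exact great-circle separation in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cone_search(conn, ra_deg, dec_deg, radius_deg):
    """Cells first, exact separation second: the two-stage indexed cone-search pattern."""
    vec = hp.ang2vec(ra_deg, dec_deg, lonlat=True)
    cells = hp.query_disc(NSIDE, vec, math.radians(radius_deg), inclusive=True)
    placeholders = ",".join("?" * len(cells))
    rows = conn.execute(       # healpix_id is the indexed column written at catalog load time
        f"SELECT ra, dec FROM sources WHERE healpix_id IN ({placeholders})",
        [int(c) for c in cells]).fetchall()
    return [(r, d) for r, d in rows if ang_sep_deg(ra_deg, dec_deg, r, d) <= radius_deg]

# Loading side (done once): store each source with its level-8 cell id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sources (ra REAL, dec REAL, healpix_id INTEGER)")
conn.execute("CREATE INDEX idx_hpx ON sources(healpix_id)")
for ra, dec in [(150.10, 2.20), (150.11, 2.21), (200.0, -30.0)]:
    conn.execute("INSERT INTO sources VALUES (?, ?, ?)",
                 (ra, dec, int(hp.ang2pix(NSIDE, ra, dec, lonlat=True))))
print(cone_search(conn, 150.10, 2.20, 1.0 / 60))   # 1 arcmin cone; both nearby sources returned
```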

  13. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    NASA Astrophysics Data System (ADS)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

    Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, could soon be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends in the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.
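
    The core linkage step can be pictured with the hedged sketch below: a polygon overlay between wetland and soil layers followed by an area-weighted carbon summary per basin. File names, column names, and units are hypothetical placeholders, not the actual NWI/SSURGO schema, and a metre-based projected coordinate system is assumed.

        import geopandas as gpd

        wetlands = gpd.read_file("nwi_wetlands.shp")        # placeholder NWI polygons with 'estuary_basin'
        soils = gpd.read_file("ssurgo_mapunits.shp")        # placeholder SSURGO polygons with 'soc_Mg_per_ha'

        # Intersect wetlands with soil map units, then area-weight the carbon density.
        joined = gpd.overlay(wetlands, soils, how="intersection")
        joined["area_ha"] = joined.geometry.area / 10_000   # assumes a metre-based projected CRS
        joined["c_stock_Mg"] = joined["soc_Mg_per_ha"] * joined["area_ha"]

        stock_by_basin = joined.groupby("estuary_basin")["c_stock_Mg"].sum()
        print(stock_by_basin.head())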

  14. Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams

    Treesearch

    Daniel J. Isaak; Jay M. Ver Hoef; Erin E. Peterson; Dona L. Horan; David E. Nagel

    2017-01-01

    Population size estimates for stream fishes are important for conservation and management, but sampling costs limit the extent of most estimates to small portions of river networks that encompass 100s–10 000s of linear kilometres. However, the advent of large fish density data sets, spatial-stream-network (SSN) models that benefit from nonindependence among samples,...

  15. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks.

    PubMed

    Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, we first defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large-scale anonymity datasets. We then proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets so as to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are derived by correlating privacy-sensitive spatial regions with the spatial grid cells appearing in the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity database. We conducted extensive experiments to test the performance of the proposed method and to explore the influence of the parameter K. The results demonstrate that our approach is faster and more effective at hiding privacy-sensitive sequence rules, in terms of the ratio of sensitive rules hidden, and thus at eliminating inference attacks. Our method also had fewer side effects, in terms of the ratio of newly generated sensitive rules, than the traditional spatial-temporal k-anonymity method, and had essentially the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.
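
    A toy sketch of the off-line phase, under loose assumptions: mine simple first-order "A -> B" transition rules from sequences of spatial grid cells, then flag as privacy-sensitive those rules whose consequent falls inside a sensitive region. The cell labels, thresholds, and sensitive set below are illustrative only, and the rule form is far simpler than the full sequence rules used in the paper.

        from collections import Counter

        sequences = [                       # anonymized trajectories as sequences of grid cells
            ["c1", "c2", "c5"],
            ["c1", "c2", "c5"],
            ["c3", "c2", "c4"],
            ["c1", "c2", "c4"],
        ]
        SENSITIVE_CELLS = {"c5"}            # cells overlapping privacy-sensitive regions
        MIN_SUPPORT, MIN_CONF = 2, 0.5

        pair_count, antecedent_count = Counter(), Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                pair_count[(a, b)] += 1
                antecedent_count[a] += 1

        sensitive_rules = []
        for (a, b), n in pair_count.items():
            conf = n / antecedent_count[a]
            if n >= MIN_SUPPORT and conf >= MIN_CONF and b in SENSITIVE_CELLS:
                sensitive_rules.append((a, b, n, round(conf, 2)))

        print(sensitive_rules)              # rules the on-line phase would try to hide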

  16. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks

    PubMed Central

    Wu, Chenxue; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, we first defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large-scale anonymity datasets. We then proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets so as to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are derived by correlating privacy-sensitive spatial regions with the spatial grid cells appearing in the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity database. We conducted extensive experiments to test the performance of the proposed method and to explore the influence of the parameter K. The results demonstrate that our approach is faster and more effective at hiding privacy-sensitive sequence rules, in terms of the ratio of sensitive rules hidden, and thus at eliminating inference attacks. Our method also had fewer side effects, in terms of the ratio of newly generated sensitive rules, than the traditional spatial-temporal k-anonymity method, and had essentially the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules. PMID:28767687

  17. Spatial pattern recognition of seismic events in South West Colombia

    NASA Astrophysics Data System (ADS)

    Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber

    2013-09-01

    Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative, and subjective analysis of data. Spatial pattern recognition provides a well-founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data from the South West Colombian seismic database. In this research, a clustering tendency analysis validates whether the seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm then creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to the initial positions of centroids, we propose a methodology for initializing centroids that generates partitions that are stable with respect to centroid initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretation of seismic activity in South West Colombia.
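
    A compact fuzzy c-means sketch is given below; it is not the authors' implementation, and the epicentre coordinates are synthetic. The loop over seeds mimics the idea of checking that partitions are stable with respect to centroid initialization.

        import numpy as np

        def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
            """Minimal fuzzy c-means: returns (centroids, membership matrix)."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)                 # random fuzzy memberships
            for _ in range(n_iter):
                W = U ** m
                centroids = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))              # standard membership update
                U /= U.sum(axis=1, keepdims=True)
            return centroids, U

        # Hypothetical catalogue: longitude, latitude, depth (km) of seismic events.
        rng = np.random.default_rng(1)
        X = rng.normal(loc=[-76.5, 3.5, 80.0], scale=[0.5, 0.5, 20.0], size=(200, 3))

        for seed in range(3):                                  # stability across initializations
            centroids, _ = fuzzy_cmeans(X, c=3, seed=seed)
            print(seed, np.round(centroids, 2))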

  18. Nick Grue | NREL

    Science.gov Websites

    Areas of expertise: geospatial data analysis using parallel processing; high-performance computing; renewable resource technical potential and supply curve analysis; spatial database utilization; rapid analysis of large geospatial datasets; energy and geospatial analysis products. Research interests: rapid, web-based renewable resource analysis.

  19. Development of a database system for near-future climate change projections under the Japanese National Project SI-CAT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.

    2017-12-01

    Analyses of large ensemble data are quite useful for producing probabilistic projections of climate change effects. Ensemble data for "+2K future climate simulations" are currently produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as part of the database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. Those data consist of global warming simulations and regional downscaling simulations. Considering that the data volumes are too large (a few petabytes) to download to users' local computers, a user-friendly system is required to search and download the data that satisfy users' requests. Under SI-CAT, we are developing a database system for near-future climate change projections that provides functions for finding the necessary data. The system mainly consists of a relational database, a data download function, and a user interface. The relational database, built on PostgreSQL, is its key component. Temporally and spatially compressed data are registered in the relational database. As a first step, we developed the relational database for precipitation, temperature, and typhoon track data according to requests by SI-CAT members. The data download function, based on the Open-source Project for a Network Data Access Protocol (OPeNDAP), allows users to download temporally and spatially extracted data based on search results obtained from the relational database. We also developed a web-based user interface for using the relational database and the data download function. A prototype of the database system is currently under operational testing on our local server. The database system for near-future climate change projections will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017. The techniques behind the system might also be quite useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the database system for near-future climate change projections.
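
    The access pattern the system aims to support can be sketched as follows: locate a dataset via the relational database, then pull only a temporal and spatial subset through OPeNDAP rather than downloading the full ensemble. This is an illustrative example only; the URL, variable names, and coordinate ranges are placeholders, not actual SI-CAT or DIAS endpoints.

        import xarray as xr

        url = "https://example.org/opendap/d4pdf/plus2K/precipitation.nc"   # placeholder endpoint
        ds = xr.open_dataset(url)                        # lazy open over OPeNDAP; no bulk download

        subset = ds["precip"].sel(                       # variable name is hypothetical
            time=slice("2040-06-01", "2040-08-31"),      # one summer of +2K simulations
            lat=slice(30.0, 40.0),                       # region of interest
            lon=slice(130.0, 142.0),
        )
        seasonal_mean = subset.mean("time")              # only the extracted values are transferred
        seasonal_mean.to_netcdf("precip_subset.nc")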

  20. Supporting user-defined granularities in a spatiotemporal conceptual model

    USGS Publications Warehouse

    Khatri, V.; Ram, S.; Snodgrass, R.T.; O'Brien, G. M.

    2002-01-01

    Granularities are integral to spatial and temporal data. A large number of applications require storage of facts along with their temporal and spatial context, which needs to be expressed in terms of appropriate granularities. For many real-world applications, a single granularity in the database is insufficient. In order to support any type of spatial or temporal reasoning, the semantics related to granularities need to be embedded in the database. Specifying granularities related to facts is an important part of conceptual database design, because under-specifying the granularity can restrict an application, affect the relative ordering of events, and impact the topological relationships. Closely related to granularities is indeterminacy, i.e., an occurrence time or location associated with a fact that is not known exactly. In this paper, we present an ontology for spatial granularities that is a natural analog of temporal granularities. We propose an upward-compatible, annotation-based spatiotemporal conceptual model that can comprehensively capture the semantics related to spatial and temporal granularities, and indeterminacy, without requiring new spatiotemporal constructs. We specify the formal semantics of this spatiotemporal conceptual model via translation to a conventional conceptual model. To underscore the practical focus of our approach, we describe an on-going case study. We apply our approach to a hydrogeologic application at the United States Geological Survey and demonstrate that our proposed granularity-based spatiotemporal conceptual model is straightforward to use and is comprehensive.

  1. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
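
    The two ideas in this record, answering a two-dimensional window query with a spatial access method and caching the resulting pointers instead of moving data, can be illustrated with the hedged sketch below. It is not the VIEWCACHE implementation; the record ids, bounding boxes, and remote locations are made up, and the rtree package stands in for the system's spatial access method.

        from rtree import index

        catalog = {                  # record id -> (bounding box, remote data location)
            1: ((10.0, 20.0, 10.5, 20.5), "site_a://images/0001.fits"),
            2: ((11.2, 20.1, 11.8, 20.9), "site_b://images/0042.fits"),
            3: ((30.0, -5.0, 30.4, -4.5), "site_a://images/0107.fits"),
        }

        idx = index.Index()
        for rec_id, (bbox, _) in catalog.items():
            idx.insert(rec_id, bbox)                       # build the spatial index

        window = (10.0, 19.5, 12.0, 21.0)                  # query window (xmin, ymin, xmax, ymax)
        cached_pointers = list(idx.intersection(window))   # pointers only; no data movement yet

        # The referenced datasets are fetched only when the user materializes the view.
        for rec_id in cached_pointers:
            print(rec_id, "->", catalog[rec_id][1])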

  2. Geoscience information integration and visualization research of Shandong Province, China based on ArcGIS engine

    NASA Astrophysics Data System (ADS)

    Xu, Mingzhu; Gao, Zhiqiang; Ning, Jicai

    2014-10-01

    To improve the access efficiency of geoscience data, an efficient data model and storage solution should be used. In existing storage solutions, geoscience data are usually classified by format or coordinate system; when data volumes are large, this makes it difficult to search for geographic features. In this study, a geographic information integration system for Shandong Province, China was developed based on ArcGIS Engine, .NET, and SQL Server. It uses the Geodatabase spatial data model and ArcSDE to organize and store spatial and attribute data, and establishes a geoscience database of Shandong. Seven function modules were designed: map browsing, database management, subject management, layer control, map query, spatial analysis, and map symbolization. The ability to browse and manage data by geoscience subject makes the system convenient for geographic researchers and decision-making departments to use.

  3. Creating a three level building classification using topographic and address-based data for Manchester

    NASA Astrophysics Data System (ADS)

    Hussain, M.; Chen, D.

    2014-11-01

    Buildings, the basic unit of an urban landscape, host most of its socio-economic activities and play an important role in the creation of urban land-use patterns. The spatial arrangement of different building types creates varied urban land-use clusters which can provide insight into the relationships between social, economic, and living spaces. The classification of such urban clusters can help in policy-making and resource management. In many countries, including the UK, no national-level cadastral database containing information on individual building types exists in the public domain. In this paper, we present a framework for inferring the functional types of buildings based on the analysis of their form (e.g. geometrical properties such as area and perimeter, and layout) and spatial relationships, derived from large topographic and address-based GIS databases. Machine learning algorithms, along with exploratory spatial analysis techniques, are used to create the classification rules. The classification is extended to two further levels based on the functions (use) of buildings derived from address-based data. The developed methodology was applied to the Manchester metropolitan area using the Ordnance Survey's MasterMap®, a large-scale topographic and address-based dataset available for the UK.
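
    A hedged sketch of the form-based classification step follows: derive simple geometric descriptors (area, perimeter, compactness) from building footprints and train a tree ensemble to predict a functional class. The input file, label column, and feature set are hypothetical stand-ins, not the MasterMap data or the exact algorithms used in the paper, and a metre-based projected coordinate system is assumed.

        import geopandas as gpd
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        buildings = gpd.read_file("buildings_labelled.gpkg")        # placeholder labelled footprints
        buildings["area"] = buildings.geometry.area                  # assumes metre-based projected CRS
        buildings["perimeter"] = buildings.geometry.length
        buildings["compactness"] = 4 * np.pi * buildings["area"] / buildings["perimeter"] ** 2

        X = buildings[["area", "perimeter", "compactness"]].values
        y = buildings["building_type"].values                        # e.g. residential / commercial

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))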

  4. The statistical power to detect cross-scale interactions at macroscales

    USGS Publications Warehouse

    Wagner, Tyler; Fergus, C. Emi; Stow, Craig A.; Cheruvelil, Kendra S.; Soranno, Patricia A.

    2016-01-01

    Macroscale studies of ecological phenomena are increasingly common because stressors such as climate and land-use change operate at large spatial and temporal scales. Cross-scale interactions (CSIs), where ecological processes operating at one spatial or temporal scale interact with processes operating at another scale, have been documented in a variety of ecosystems and contribute to complex system dynamics. However, studies investigating CSIs are often dependent on compiling multiple data sets from different sources to create multithematic, multiscaled data sets, which results in structurally complex, and sometimes incomplete, data sets. The statistical power to detect CSIs needs to be evaluated because of their importance and the challenge of quantifying CSIs using data sets with complex structures and missing observations. We studied this problem using a spatially hierarchical model that measures CSIs between regional agriculture and its effects on the relationship between lake nutrients and lake productivity. We used an existing large multithematic, multiscaled database, the LAke multi-scaled GeOSpatial and temporal database (LAGOS), to parameterize the power analysis simulations. We found that the power to detect CSIs was more strongly related to the number of regions in the study than to the number of lakes nested within each region. CSI power analyses will not only help ecologists design large-scale studies aimed at detecting CSIs, but will also focus attention on CSI effect sizes and the degree to which they are ecologically relevant and detectable with large data sets.

  5. Utilizing Arc Marine Concepts for Designing a Geospatially Enabled Database to Support Rapid Environmental Assessment

    DTIC Science & Technology

    2009-07-01

    data were recognized as being largely geospatial and thus a GIS was considered the most reasonable way to proceed. The Postgre suite of software also ... for the ESRI (2009) geodatabase environment but is applicable for this Postgre-based system. We then introduce and discuss spatial reference ... PostgreSQL database using a Postgre ODBC connection. This procedure identified 100 tables with 737 columns. This is after the removal of two

  6. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  7. Applications of spatial statistical network models to stream data

    USGS Publications Warehouse

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  8. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships, and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have recently been tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31,900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE), at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix and in selected bivariate trait relationships. MICE yielded imputations which better preserved the variability and covariance structure of the data and provided an estimate of between-imputation uncertainty. We found that adding species identity as a predictor in MICE and kNN improved imputation for all traits, but adding climate did not lead to any appreciable improvement. However, forest structure and spatial structure did reduce imputation errors in maximum height and in leaf biomass to sapwood area ratios, respectively. Although species mean imputations showed the lowest error for 3 out of the 5 studied traits, dataset-averaged errors were lowest for MICE imputations with all additional predictors when missing data levels were 50% or lower. Species mean imputations always resulted in larger errors in the correlation matrix and appreciably altered the studied bivariate trait relationships. In conclusion, MICE imputations using species identity, climate, forest structure and spatial structure as predictors emerged as the most suitable method of the ones tested here, but it was also evident that imputation performance deteriorates at high levels of missing data (80%).
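
    The comparison can be sketched with scikit-learn stand-ins, shown below under explicit assumptions: species-mean imputation, kNN imputation, and an iterative chained-equations imputer that is similar in spirit to MICE but is not the R implementation used in the study. The trait table, column names, and species column are hypothetical.

        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
        from sklearn.impute import KNNImputer, IterativeImputer

        traits = pd.read_csv("tree_traits.csv")      # placeholder: species, SLA, WD, Hmax, Nfoliar, BpSA
        trait_cols = ["SLA", "WD", "Hmax", "Nfoliar", "BpSA"]

        # 1) Species-mean imputation.
        by_species = traits.copy()
        by_species[trait_cols] = traits.groupby("species")[trait_cols].transform(
            lambda col: col.fillna(col.mean()))

        # 2) Single imputation with k-nearest neighbours.
        knn = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(traits[trait_cols]),
                           columns=trait_cols)

        # 3) Chained-equations style (MICE-like) imputation.
        mice_like = pd.DataFrame(IterativeImputer(max_iter=20, random_state=0)
                                 .fit_transform(traits[trait_cols]), columns=trait_cols)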

  9. Accounting for rainfall spatial variability in the prediction of flash floods

    NASA Astrophysics Data System (ADS)

    Saharia, Manabendra; Kirstetter, Pierre-Emmanuel; Gourley, Jonathan J.; Hong, Yang; Vergara, Humberto; Flamig, Zachary L.

    2017-04-01

    Flash floods are a particularly damaging natural hazard worldwide in terms of both fatalities and property damage. In the United States, the lack of a comprehensive database that catalogues information related to flash flood timing, location, causative rainfall, and basin geomorphology has hindered broad characterization studies. First, a representative, long archive of more than 15,000 flooding events during 2002-2011 is used to analyze the spatial and temporal variability of flash floods. We also derive a large number of spatially distributed geomorphological and climatological parameters, such as basin area, mean annual precipitation, and basin slope, to identify static basin characteristics that influence flood response. For the same period, the National Severe Storms Laboratory (NSSL) has produced a decadal archive of Multi-Radar/Multi-Sensor (MRMS) radar-only precipitation rates at 1-km spatial resolution and 5-min temporal resolution. This provides an unprecedented opportunity to analyze the impact of event-level precipitation variability on flooding using a big data approach. To analyze the impact of sub-basin scale rainfall spatial variability on flooding, indices such as the first and second scaled moments of rainfall, horizontal gap, and vertical gap are computed from the MRMS dataset. Finally, flooding characteristics such as rise time, lag time, and peak discharge are linked to the derived geomorphologic, climatologic, and rainfall indices to identify basin characteristics that drive flash floods. The database has been subjected to rigorous quality control by accounting for radar beam height and the percentage of snow in basins. So far, studies involving rainfall variability indices have only been performed on a case-study basis, and a large-scale approach is expected to provide deeper insight into how sub-basin scale precipitation variability affects flooding. Finally, these findings are validated using the National Weather Service storm reports and a historical flood fatalities database. This analysis framework will serve as a baseline for evaluating distributed hydrologic model simulations such as the Flooded Locations And Simulated Hydrographs Project (FLASH) (http://flash.ou.edu).

  10. THE SAN PEDRO RIVER SPATIAL DATA ARCHIVE, A DATABASE BROWSER FOR COMMUNITY-BASED ENVIRONMENTAL PROTECTION

    EPA Science Inventory

    It is currently possible to measure landscape change over large areas and determine trends in ecological and hydrological condition using advanced space-based technologies accompanied by geospatial data. Specifically, this process is being tested in a community-based watershed in...

  11. THE SAN PEDRO SPATIAL DATA ARCHIVE, A DATABASE BROWSER FOR COMMUNITY-BASED ENVIRONMENTAL PROTECTION

    EPA Science Inventory

    It is currently possible to measure landscape change over large areas and determine trends in ecological and hydrological condition using advanced space-based technologies accompanied by geospatial data. Specifically, this process is being tested in a community-based watershed in...

  12. INVENTORY AND CLASSIFICATION OF GREAT LAKES COASTAL WETLANDS FOR MONITORING AND ASSESSMENT AT LARGE SPATIAL SCALES

    EPA Science Inventory

    Monitoring aquatic resources for regional assessments requires an accurate and comprehensive inventory of the resource and a useful classification of ecosystem similarities. Our research effort to create an electronic database and work with various ways to classify coastal wetlands...

  13. Digital version of "Open-File Report 92-179: Geologic map of the Cow Cove Quadrangle, San Bernardino County, California"

    USGS Publications Warehouse

    Wilshire, Howard G.; Bedford, David R.; Coleman, Teresa

    2002-01-01

    3. Plottable map representations of the database at 1:24,000 scale in PostScript and Adobe PDF formats. The plottable files consist of a color geologic map derived from the spatial database, composited with a topographic base map in the form of the USGS Digital Raster Graphic for the map area. Color symbology from each of these datasets is maintained, which can cause plot file sizes to be large.

  14. The MIND PALACE: A Multi-Spectral Imaging and Spectroscopy Database for Planetary Science

    NASA Astrophysics Data System (ADS)

    Eshelman, E.; Doloboff, I.; Hara, E. K.; Uckert, K.; Sapers, H. M.; Abbey, W.; Beegle, L. W.; Bhartia, R.

    2017-12-01

    The Multi-Instrument Database (MIND) is the web-based home to a well-characterized set of analytical data collected by a suite of deep-UV fluorescence/Raman instruments built at the Jet Propulsion Laboratory (JPL). Samples derive from a growing body of planetary surface analogs, mineral and microbial standards, meteorites, spacecraft materials, and other astrobiologically relevant materials. In addition to deep-UV spectroscopy, the datasets stored in MIND come from a variety of analytical techniques spanning multiple spatial and spectral scales, including electron microscopy, optical microscopy, infrared spectroscopy, X-ray fluorescence, and direct fluorescence imaging. Multivariate statistical analysis techniques, primarily Principal Component Analysis (PCA), are used to guide interpretation of these large multi-analytical spectral datasets. Spatial co-referencing of integrated spectral/visual maps is performed using QGIS (geographic information system software). Georeferencing techniques transform individual instrument data maps into a layered, co-registered data cube for analysis across spectral and spatial scales. The body of data in MIND is intended to serve as a permanent, reliable, and expanding database of deep-UV spectroscopy datasets generated by this unique suite of JPL-based instruments on samples of broad planetary science interest.
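
    The PCA step mentioned above can be pictured with the minimal sketch below, which reduces a matrix of spectra (rows = measurement points, columns = spectral channels) to a few component scores before interpretation. The input array is synthetic, not MIND data, and the preprocessing choice (standard scaling) is an assumption.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(500, 1024))      # placeholder deep-UV spectra (points x channels)

        scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(spectra))
        print(scores.shape)                         # (500, 3): three component scores per point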

  15. Spatial distribution of citizen science casuistic observations for different taxonomic groups.

    PubMed

    Tiago, Patrícia; Ceia-Hasse, Ana; Marques, Tiago A; Capinha, César; Pereira, Henrique M

    2017-10-16

    Opportunistic citizen science databases are becoming an important way of gathering information on species distributions. These data are temporally and spatially dispersed and may suffer from biases in the distribution of observations in space and/or time. In this work, we test the influence of landscape variables on the distribution of citizen science observations for eight taxonomic groups. We use data collected through a Portuguese citizen science database (biodiversity4all.org). We use a zero-inflated negative binomial regression to model the distribution of observations as a function of a set of variables representing the landscape features plausibly influencing the spatial distribution of the records. Results suggest that the density of paths is the most important variable, having a statistically significant positive relationship with the number of observations for seven of the eight taxa considered. Wetland coverage was also identified as having a significant positive relationship for birds, amphibians and reptiles, and mammals. Our results highlight that the distribution of species observations in citizen science projects is spatially biased. Higher frequency of observations is driven largely by accessibility and by the presence of water bodies. We conclude that efforts are required to increase the spatial evenness of sampling effort from volunteers.
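
    The modelling step can be sketched as follows with statsmodels; the grid-cell table and column names are hypothetical, and the single inflation covariate is an arbitrary choice for illustration rather than the specification used in the paper.

        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

        cells = pd.read_csv("grid_cells.csv")        # placeholder: n_obs, path_density, wetland_cover
        y = cells["n_obs"]                           # observation counts per grid cell
        X = sm.add_constant(cells[["path_density", "wetland_cover"]])
        X_infl = sm.add_constant(cells[["path_density"]])   # covariates for the zero-inflation part

        model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X_infl, p=2)
        result = model.fit(maxiter=200, disp=False)
        print(result.summary())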

  16. Future of applied watershed science at regional scales

    Treesearch

    Lee Benda; Daniel Miller; Steve Lanigan; Gordon Reeves

    2009-01-01

    Resource managers must deal increasingly with land use and conservation plans applied at large spatial scales (watersheds, landscapes, states, regions) involving multiple interacting federal agencies and stakeholders. Access to a geographically focused and application-oriented database would allow users in different locations and with different concerns to quickly...

  17. Access to Emissions Distributions and Related Ancillary Data through the ECCAD database

    NASA Astrophysics Data System (ADS)

    Darras, Sabine; Granier, Claire; Liousse, Catherine; De Graaf, Erica; Enriquez, Edgar; Boulanger, Damien; Brissebrat, Guillaume

    2017-04-01

    The ECCAD database (Emissions of atmospheric Compounds and Compilation of Ancillary Data) provides user-friendly access to global and regional surface emissions for a large set of chemical compounds and to ancillary data (land use, active fires, burned areas, population, etc.). The emissions inventories are time-series gridded data at spatial resolutions from 1x1 to 0.1x0.1 degrees. ECCAD is the emissions database of the GEIA (Global Emissions InitiAtive) project and a sub-project of the French Atmospheric Data Center AERIS (http://www.aeris-data.fr). ECCAD currently has more than 2200 users originating from more than 80 countries. The project benefits from this large international community of users to expand the number of emission datasets made available. ECCAD provides detailed metadata for each of the datasets and various tools for data visualization, for computing global and regional totals, and for interactive spatial and temporal analysis. The data can be downloaded as interoperable NetCDF CF-compliant files, i.e. the data are compatible with many other client interfaces. The presentation will provide information on the datasets available within ECCAD, as well as examples of the analysis work that can be done online through the website: http://eccad.aeris-data.fr.
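
    The kind of regional-total computation ECCAD offers online can also be reproduced offline from a downloaded CF-compliant file, as in the hedged sketch below. The file name, variable name, units, and the regular-grid cell-area approximation are assumptions for illustration.

        import numpy as np
        import xarray as xr

        ds = xr.open_dataset("emissions_co_0.1deg.nc")              # placeholder downloaded file
        flux = ds["emi_co"]                                          # assumed units: kg m-2 s-1

        # Approximate cell areas for a regular lat-lon grid.
        R = 6.371e6                                                  # Earth radius, m
        dlat = np.deg2rad(abs(float(flux.lat[1] - flux.lat[0])))
        dlon = np.deg2rad(abs(float(flux.lon[1] - flux.lon[0])))
        area = R**2 * dlat * dlon * np.cos(np.deg2rad(flux.lat))     # m2, varies with latitude

        region = flux.sel(lat=slice(35, 60), lon=slice(-10, 30))     # illustrative European box
        total_kg_per_s = (region * area.sel(lat=slice(35, 60))).sum(("lat", "lon"))
        print(total_kg_per_s.values)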

  18. Access to Emissions Distributions and Related Ancillary Data through the ECCAD database

    NASA Astrophysics Data System (ADS)

    Darras, Sabine; Enriquez, Edgar; Granier, Claire; Liousse, Catherine; Boulanger, Damien; Fontaine, Alain

    2016-04-01

    The ECCAD database (Emissions of atmospheric Compounds and Compilation of Ancillary Data) provides user-friendly access to global and regional surface emissions for a large set of chemical compounds and to ancillary data (land use, active fires, burned areas, population, etc.). The emissions inventories are time-series gridded data at spatial resolutions from 1x1 to 0.1x0.1 degrees. ECCAD is the emissions database of the GEIA (Global Emissions InitiAtive) project and a sub-project of the French Atmospheric Data Center AERIS (http://www.aeris-data.fr). ECCAD currently has more than 2200 users originating from more than 80 countries. The project benefits from this large international community of users to expand the number of emission datasets made available. ECCAD provides detailed metadata for each of the datasets and various tools for data visualization, for computing global and regional totals, and for interactive spatial and temporal analysis. The data can be downloaded as interoperable NetCDF CF-compliant files, i.e. the data are compatible with many other client interfaces. The presentation will provide information on the datasets available within ECCAD, as well as examples of the analysis work that can be done online through the website: http://eccad.aeris-data.fr.

  19. SAGEMAP: A web-based spatial dataset for sage grouse and sagebrush steppe management in the Intermountain West

    USGS Publications Warehouse

    Knick, Steven T.; Schueck, Linda

    2002-01-01

    The Snake River Field Station of the Forest and Rangeland Ecosystem Science Center has developed and now maintains a database of the spatial information needed to address management of sage grouse and sagebrush steppe habitats in the western United States. The SAGEMAP project identifies and collects information for the region encompassing the historical extent of sage grouse distribution. State and federal agencies, the primary entities responsible for managing sage grouse and their habitats, need the information to develop an objective assessment of the current status of sage grouse populations and their habitats, or to provide responses and recommendations for recovery if sage grouse are listed as a Threatened or Endangered Species. The spatial data on the SAGEMAP website (http://SAGEMAP.wr.usgs.gov) are an important component in documenting current habitat and other environmental conditions. In addition, the data can be used to identify areas that have undergone significant changes in land cover and to determine underlying causes. As such, the database permits an analysis of large-scale and range-wide factors that may be causing declines of sage grouse populations. The spatial data contained on this site will also be a critical component guiding the decision processes for restoration of habitats in the Great Basin. Therefore, development of this database and the capability to disseminate the information carries multiple benefits for land and wildlife management.

  20. An integrated database on ticks and tick-borne zoonoses in the tropics and subtropics with special reference to developing and emerging countries.

    PubMed

    Vesco, Umberto; Knap, Nataša; Labruna, Marcelo B; Avšič-Županc, Tatjana; Estrada-Peña, Agustín; Guglielmone, Alberto A; Bechara, Gervasio H; Gueye, Arona; Lakos, Andras; Grindatto, Anna; Conte, Valeria; De Meneghi, Daniele

    2011-05-01

    Tick-borne zoonoses (TBZ) are emerging diseases worldwide. A large amount of information (e.g. case reports, results of epidemiological surveillance, etc.) is dispersed through various reference sources (ISI and non-ISI journals, conference proceedings, technical reports, etc.). An integrated database, derived from the ICTTD-3 project (http://www.icttd.nl), was developed in order to gather TBZ records in the (sub-)tropics, collected both by the authors and by collaborators worldwide. A dedicated website (http://www.tickbornezoonoses.org) was created to promote collaboration and circulate information. Data collected are made freely available to researchers for analysis by spatial methods, integrating mapped ecological factors for predicting TBZ risk. The authors present the assembly process of the TBZ database: the compilation of an updated list of TBZ relevant for the (sub-)tropics, the database design and its structure, the method of bibliographic search, and the assessment of spatial precision of geo-referenced records. At the time of writing, 725 records extracted from 337 publications related to 59 countries in the (sub-)tropics have been entered in the database. TBZ distribution maps were also produced. Imported cases have also been accounted for. The most important datasets with geo-referenced records were those on Spotted Fever Group rickettsiosis in Latin America and Crimean-Congo Haemorrhagic Fever in Africa. The authors stress the need for international collaboration in data collection to update and improve the database. Supervision of the data entered always remains necessary. Means to foster collaboration are discussed. The paper is also intended to describe the challenges encountered in assembling spatial data from various sources and to help develop similar data collections.

  1. Towards a New Assessment of Urban Areas from Local to Global Scales

    NASA Astrophysics Data System (ADS)

    Bhaduri, B. L.; Roy Chowdhury, P. K.; McKee, J.; Weaver, J.; Bright, E.; Weber, E.

    2015-12-01

    Since the early 2000s, starting with NASA MODIS, satellite-based remote sensing has facilitated the collection of imagery with medium spatial resolution but high (daily) temporal resolution. This trend continues with an increasing number of sensors and data products. Increasing spatial and temporal resolutions of remotely sensed data archives, from both public and commercial sources, have significantly enhanced the quality of mapping and change data products. However, even with automation of such analysis on evolving computing platforms, rates of data processing have been suboptimal, largely because of the ever-increasing pixel-to-processor ratio coupled with limitations of the computing architectures. Novel approaches utilizing spatiotemporal data mining techniques and computational architectures have emerged that demonstrate the potential for sustained and geographically scalable landscape monitoring to become operational. We exemplify this challenge with two broad research initiatives on High Performance Geocomputation at Oak Ridge National Laboratory: (a) mapping global settlement distribution, and (b) developing national critical infrastructure databases. Our present effort, on large GPU-based architectures, to exploit high-resolution (1 m or less) satellite and airborne imagery for extracting settlements at global scale is yielding an understanding of human settlement patterns and urban areas at unprecedented resolution. Comparison of this urban land cover database with existing national and global land cover products, at various geographic scales in selected parts of the world, is revealing intriguing patterns and insights for urban assessment. Early results, from the USA, Taiwan, and Egypt, indicate closer agreement (5-10%) in urban area assessments among databases at larger, aggregated geographic extents. However, spatial variability at local scales could be significantly different (over 50% disagreement).

  2. Deriving spatial patterns from a novel database of volcanic rock geochemistry in the Virunga Volcanic Province, East African Rift

    NASA Astrophysics Data System (ADS)

    Poppe, Sam; Barette, Florian; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2016-04-01

    The Virunga Volcanic Province (VVP) is situated within the western branch of the East African Rift. The geochemistry and petrology of its volcanic products have been studied extensively, but in a fragmented manner. They represent a unique collection of silica-undersaturated, ultra-alkaline and ultra-potassic compositions, displaying marked geochemical variations over the area occupied by the VVP. We present a novel, spatially explicit database of existing whole-rock geochemical analyses of the VVP volcanics, compiled from international publications, (post-)colonial scientific reports and PhD theses. In the database, a total of 703 geochemical analyses of whole-rock samples collected from the 1950s until recently have been characterized with a geographical location, eruption source location, analytical results, and uncertainty estimates for each of these categories. Comparative box plots and Kruskal-Wallis H tests on subsets of analyses with contrasting ages or analytical methods suggest that the overall database accuracy is consistent. We demonstrate how statistical techniques such as Principal Component Analysis (PCA) and subsequent cluster analysis allow the identification of clusters of samples with similar major-element compositions. The spatial patterns represented by the contrasting clusters show that both historically active volcanoes form compositional clusters which can be identified based on their contrasting silica and alkali contents. Furthermore, two sample clusters are interpreted to represent the most primitive, deep magma source within the VVP, distinct from the shallow magma reservoirs that feed the eight dominant large volcanoes. The samples from these two clusters systematically originate from locations which (1) are distal compared to the eight large volcanoes and (2) mostly coincide with the surface expressions of rift faults or NE-SW-oriented inherited Precambrian structures which were reactivated during rifting. The lava from the Mugogo eruption of 1957 belongs to these primitive clusters and is the only one known to have erupted outside the current rift valley in historical times. We thus infer a distributed hazard of vent-opening susceptibility in addition to the susceptibility associated with the main Virunga edifices. This study suggests that the statistical analysis of such a geochemical database may help to understand complex volcanic plumbing systems and the spatial distribution of volcanic hazards in active and poorly known volcanic areas such as the Virunga Volcanic Province.
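
    The statistical workflow described here can be sketched, under explicit assumptions, as standardizing the major-element compositions, reducing them with PCA, and grouping samples with a clustering step; k-means is used below as a simple stand-in for the cluster analysis applied by the authors, and the file and column names are illustrative.

        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rocks = pd.read_csv("vvp_whole_rock.csv")               # placeholder database export
        majors = rocks[["SiO2", "TiO2", "Al2O3", "FeOt", "MgO", "CaO", "Na2O", "K2O"]]

        scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(majors))
        rocks["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

        # Map-ready output: one cluster label per georeferenced sample.
        rocks[["longitude", "latitude", "cluster"]].to_csv("vvp_clusters.csv", index=False)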

  3. Environmental concern-based site screening of carbon dioxide geological storage in China.

    PubMed

    Cai, Bofeng; Li, Qi; Liu, Guizhen; Liu, Lancui; Jin, Taotao; Shi, Hui

    2017-08-08

    Environmental impacts and risks related to carbon dioxide (CO2) capture and storage (CCS) projects may have direct effects on the decision-making process during CCS site selection. This paper proposes a novel method of environmental optimization for CCS site selection using China's ecological red line approach. Moreover, this paper establishes a GIS-based spatial analysis model of environmental optimization during CCS site selection, supported by a large database. The database combines comprehensive coverage of environmental elements with a fine 1-km spatial resolution. The quartile method was used to assign values to specific indicators, including the prohibited index and the restricted index. The screening results show that areas classified as having high environmental suitability (classes III and IV) in China account for 620,800 km2 and 156,600 km2, respectively, and are mainly distributed in Inner Mongolia, Qinghai and Xinjiang. The environmental suitability class IV areas of Bayingol Mongolian Autonomous Prefecture, Hotan Prefecture, Aksu Prefecture, Hulunbuir, Xilingol League and other prefecture-level regions not only cover large land areas, but also form a continuous area across the three provincial-level administrative units. This study may benefit the national macro-strategic deployment and implementation of CCS spatial layout and environmental management in China.

  4. A hierarchical spatial framework and database for the national river fish habitat condition assessment

    USGS Publications Warehouse

    Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.

    2011-01-01

    Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units, and a series of ecological and political spatial descriptors as hierarchy structures that allow users to extract or analyze information at spatial scales that they define. The database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of natural and human-induced catchment factors that are known to influence river characteristics. Our framework and database assemble all river reaches and their descriptors in one place for the first time for the conterminous United States. They provide users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of NFHAP and other management programs. The downloadable beta version of the database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.

  5. Validating crash locations for quantitative spatial analysis: a GIS-based approach.

    PubMed

    Loo, Becky P Y

    2006-09-01

    In this paper, the spatial variables of the crash database in Hong Kong from 1993 to 2004 are validated. The proposed spatial data validation system makes use of three databases (the crash, road network, and district board databases) and relies on GIS to carry out most of the validation steps, so that the human resources required for manually checking the accuracy of the spatial data can be greatly reduced. With the GIS-based spatial data validation system, it was found that about 65-80% of the police crash records from 1993 to 2004 had correct road names and district board information. In 2004, the police crash database contained errors in about 12.7% of road names and 9.7% of district board entries. The situation was broadly comparable to the United Kingdom. However, the results also suggest that safety researchers should carefully validate the spatial data in crash databases before scientific analysis.

  6. Effective spatial database support for acquiring spatial information from remote sensing images

    NASA Astrophysics Data System (ADS)

    Jin, Peiquan; Wan, Shouhong; Yue, Lihua

    2009-12-01

    In this paper, a new approach to maintaining spatial information acquired from remote-sensing images is presented, based on an Object-Relational DBMS. According to this approach, the results of target detection and recognition are stored in, and can be further accessed from, an ORDBMS-based spatial database system, and users can access the spatial information using the standard SQL interface. This approach differs from the traditional ArcSDE-based method, because the spatial information management module is fully integrated into the DBMS and becomes one of its core modules. We focus on three issues: the general framework for the ORDBMS-based spatial database system, the definitions of the add-in spatial data types and operators, and the process of developing a spatial DataBlade on Informix. The results show that ORDBMS-based spatial database support for image-based target detection and recognition is easy and practical to implement.

  7. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    PubMed

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring, and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this coupled management approach is one of the main problems in GISs that use the map products of photogrammetric workstations. Also, by means of these integrated systems, structured spatial data based on OGC (Open GIS Consortium) standards and topological relations between different feature classes can be provided during the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation, and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.

  8. GIEMS-D3: A new long-term, dynamical, high-spatial resolution inundation extent dataset at global scale

    NASA Astrophysics Data System (ADS)

    Aires, Filipe; Miolane, Léo; Prigent, Catherine; Pham Duc, Binh; Papa, Fabrice; Fluet-Chouinard, Etienne; Lehner, Bernhard

    2017-04-01

    The Global Inundation Extent from Multi-Satellites (GIEMS) provides multi-year monthly variations of the global surface water extent at 25 km x 25 km resolution. It is derived from multiple satellite observations. Its spatial resolution is usually compatible with climate model outputs and with global land surface model grids, but it is clearly not adequate for local applications that require the characterization of small individual water bodies. There is today a strong demand for high-resolution inundation extent datasets for a large variety of applications, such as water management, regional hydrological modeling, or the analysis of mosquito-related diseases. A new procedure is introduced to downscale the low-spatial-resolution GIEMS inundation estimates to a 3 arc-second (90 m) dataset. The methodology is based on topography and hydrography information from the HydroSHEDS database. A new floodability index is adopted, and an innovative smoothing procedure is developed to ensure smooth transitions in the high-resolution maps between the low-resolution boxes from GIEMS. Topography information is relevant for natural hydrology environments controlled by elevation, but is more limited in human-modified basins. However, the proposed downscaling approach is compatible with forthcoming fusion with other, more pertinent satellite information in these difficult regions. The resulting GIEMS-D3 database is the only high-spatial-resolution inundation database available globally at the monthly time scale over the 1993-2007 period. GIEMS-D3 is assessed by analyzing its spatial and temporal variability, and evaluated by comparisons to other independent satellite observations from the visible (Google Earth and Landsat), infrared (MODIS), and active microwave (SAR) domains.

  9. Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python

    NASA Astrophysics Data System (ADS)

    Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor

    2017-04-01

    As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for adoption of in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends, and spotting anomalies. Although there are a number of open-source spatial analysis libraries like geopandas and shapely available today, most of them have been restricted to the manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform, Bluemix. Working in-database reduces network overhead, as the complete data need not be replicated onto the user's local system and only a subset of the entire dataset is fetched into memory at any one time. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4. The basic architecture of the package consists of three main components: 1) a connection to dashDB represented by the instance IdaDataBase, which uses a middleware API (pypyodbc or jaydebeapi) to establish the database connection via ODBC or JDBC, respectively; 2) an instance representing the spatial data stored in the database as a dataframe in Python, called the IdaGeoDataFrame, with a specific geometry attribute which recognises a planar geometry column in dashDB; and 3) Python wrappers for spatial functions like within, distance, area, buffer and more, which dashDB currently supports, to make the querying process from Python much simpler for users. The spatial functions translate well-known geopandas-like syntax into SQL queries that utilise the database connection to perform spatial operations in-database, and can operate on single geometries as well as on two geometries from different IdaGeoDataFrames. The in-database queries strictly follow the standards of the OpenGIS Implementation Specification for Geographic information - Simple feature access for SQL. The results of the operations can thereby be accessed dynamically via interactive Jupyter notebooks from any system which supports Python, without any additional dependencies, and can also be combined with other open-source libraries such as matplotlib and folium within Jupyter notebooks for visualization purposes. We built a use case to analyse crime hotspots in New York City to validate our implementation and visualized the results as a choropleth map for each borough.
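
    Based only on the component names listed in this abstract (IdaDataBase, IdaGeoDataFrame, and the within/distance/area/buffer wrappers), a usage sketch might look like the following; the import path, argument names, and method signatures are approximations and should be checked against the ibmdbpy-spatial documentation rather than taken as the package's actual API.

        # Approximate sketch; names and signatures are assumptions drawn from the abstract above.
        from ibmdbpy import IdaDataBase, IdaGeoDataFrame   # import path assumed

        idadb = IdaDataBase(dsn="DASHDB")                  # ODBC data source name (placeholder)

        counties = IdaGeoDataFrame(idadb, "SAMPLES.GEO_COUNTY", indexer="OBJECTID")
        counties.set_geometry("SHAPE")                     # planar geometry column in dashDB

        # Each call is intended to be pushed down to the dashDB spatial extender as SQL.
        areas = counties.area()                            # per-row area of each geometry
        buffered = counties.buffer(distance=10)            # per-row buffer around each geometry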

  10. Large image microscope array for the compilation of multimodality whole organ image databases.

    PubMed

    Namati, Eman; De Ryk, Jessica; Thiesse, Jacqueline; Towfic, Zaid; Hoffman, Eric; Mclennan, Geoffrey

    2007-11-01

    Three-dimensional, structural and functional digital image databases have many applications in education, research, and clinical medicine. However, to date, apart from cryosectioning, there have been no reliable means to obtain whole-organ, spatially conserving histology. Our aim was to generate a system capable of acquiring high-resolution images, featuring microscopic detail that could still be spatially correlated to the whole organ. Fulfilling these objectives required the construction of a system physically capable of creating very fine whole-organ sections and collecting high-magnification, high-resolution digital images. We therefore designed a large image microscope array (LIMA) to serially section and image entire unembedded organs while maintaining the structural integrity of the tissue. The LIMA consists of several integrated components: a novel large-blade vibrating microtome, a 1.3-megapixel Peltier-cooled charge-coupled device camera, a high-magnification microscope, and a three-axis gantry above the microtome. A custom control program was developed to automate the entire sectioning and raster-scan imaging sequence. The system is capable of sectioning unembedded soft tissue down to a thickness of 40 µm at specimen dimensions of 200 x 300 mm to a total depth of 350 mm. The LIMA system has been tested on fixed lung from sheep and mice, resulting in large high-quality image data sets with minimal distinguishable disturbance to the delicate alveolar structures. Copyright 2007 Wiley-Liss, Inc.

  11. New data sources and derived products for the SRER digital spatial database

    Treesearch

    Craig Wissler; Deborah Angell

    2003-01-01

    The Santa Rita Experimental Range (SRER) digital database was developed to automate and preserve ecological data and increase their accessibility. The digital data holdings include a spatial database that is used to integrate ecological data in a known reference system and to support spatial analyses. Recently, the Advanced Resource Technology (ART) facility has added...

  12. Using Exploratory Spatial Data Analysis to Leverage Social Indicator Databases: The Discovery of Interesting Patterns

    ERIC Educational Resources Information Center

    Anselin, Luc; Sridharan, Sanjeev; Gholston, Susan

    2007-01-01

    With the proliferation of social indicator databases, the need for powerful techniques to study patterns of change has grown. In this paper, the utility of spatial data analytical methods such as exploratory spatial data analysis (ESDA) is suggested as a means to leverage the information contained in social indicator databases. The principles…

  13. Transformation of social networks in the late pre-Hispanic US Southwest.

    PubMed

    Mills, Barbara J; Clark, Jeffery J; Peeples, Matthew A; Haas, W R; Roberts, John M; Hill, J Brett; Huntley, Deborah L; Borck, Lewis; Breiger, Ronald L; Clauset, Aaron; Shackley, M Steven

    2013-04-09

    The late pre-Hispanic period in the US Southwest (A.D. 1200-1450) was characterized by large-scale demographic changes, including long-distance migration and population aggregation. To reconstruct how these processes reshaped social networks, we compiled a comprehensive artifact database from major sites dating to this interval in the western Southwest. We combine social network analysis with geographic information systems approaches to reconstruct network dynamics over 250 y. We show how social networks were transformed across the region at previously undocumented spatial, temporal, and social scales. Using well-dated decorated ceramics, we track changes in network topology at 50-y intervals to show a dramatic shift in network density and settlement centrality from the northern to the southern Southwest after A.D. 1300. Both obsidian sourcing and ceramic data demonstrate that long-distance network relationships also shifted from north to south after migration. Surprisingly, social distance does not always correlate with spatial distance because of the presence of network relationships spanning long geographic distances. Our research shows how a large network in the southern Southwest grew and then collapsed, whereas networks became more fragmented in the northern Southwest but persisted. The study also illustrates how formal social network analysis may be applied to large-scale databases of material culture to illustrate multigenerational changes in network structure.

  14. Transformation of social networks in the late pre-Hispanic US Southwest

    PubMed Central

    Mills, Barbara J.; Clark, Jeffery J.; Peeples, Matthew A.; Haas, W. R.; Roberts, John M.; Hill, J. Brett; Huntley, Deborah L.; Borck, Lewis; Breiger, Ronald L.; Clauset, Aaron; Shackley, M. Steven

    2013-01-01

    The late pre-Hispanic period in the US Southwest (A.D. 1200–1450) was characterized by large-scale demographic changes, including long-distance migration and population aggregation. To reconstruct how these processes reshaped social networks, we compiled a comprehensive artifact database from major sites dating to this interval in the western Southwest. We combine social network analysis with geographic information systems approaches to reconstruct network dynamics over 250 y. We show how social networks were transformed across the region at previously undocumented spatial, temporal, and social scales. Using well-dated decorated ceramics, we track changes in network topology at 50-y intervals to show a dramatic shift in network density and settlement centrality from the northern to the southern Southwest after A.D. 1300. Both obsidian sourcing and ceramic data demonstrate that long-distance network relationships also shifted from north to south after migration. Surprisingly, social distance does not always correlate with spatial distance because of the presence of network relationships spanning long geographic distances. Our research shows how a large network in the southern Southwest grew and then collapsed, whereas networks became more fragmented in the northern Southwest but persisted. The study also illustrates how formal social network analysis may be applied to large-scale databases of material culture to illustrate multigenerational changes in network structure. PMID:23530201

  15. The Mass Function of Abell Clusters

    NASA Astrophysics Data System (ADS)

    Chen, J.; Huchra, J. P.; McNamara, B. R.; Mader, J.

    1998-12-01

    The velocity dispersion and mass functions for rich clusters of galaxies provide important constraints on models of the formation of Large-Scale Structure (e.g., Frenk et al. 1990). However, prior estimates of the velocity dispersion or mass function for galaxy clusters have been based on either very small samples of clusters (Bahcall and Cen 1993; Zabludoff et al. 1994) or large but incomplete samples (e.g., the Girardi et al. (1998) determination from a sample of clusters with more than 30 measured galaxy redshifts). In contrast, we approach the problem by constructing a volume-limited sample of Abell clusters. We collected individual galaxy redshifts for our sample from two major galaxy velocity databases, the NASA Extragalactic Database, NED, maintained at IPAC, and ZCAT, maintained at SAO. We assembled a database with velocity information for possible cluster members and then selected cluster members based on both spatial and velocity data. Cluster velocity dispersions and masses were calculated following the procedures of Danese, De Zotti, and di Tullio (1980) and Heisler, Tremaine, and Bahcall (1985), respectively. The final velocity dispersion and mass functions were analyzed in order to constrain cosmological parameters by comparison to the results of N-body simulations. Our data for the cluster sample as a whole and for the individual clusters (spatial maps and velocity histograms) are available on-line at http://cfa-www.harvard.edu/ huchra/clusters. This website will be updated as more data become available in the master redshift compilations, and will be expanded to include more clusters and large groups of galaxies.
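
    For orientation, the line-of-sight velocity dispersion tabulated for each cluster is, at its simplest, the spread of member-galaxy velocities. The sketch below computes that basic statistic with invented numbers; it does not reproduce the Danese, De Zotti, and di Tullio or Heisler, Tremaine, and Bahcall procedures cited above.

    ```python
    import numpy as np

    # Invented member velocities (km/s) for a single cluster; a real analysis
    # first selects members in both position and velocity space.
    v = np.array([11200.0, 11850.0, 12010.0, 11600.0, 12420.0, 11930.0, 11700.0])

    # Simple line-of-sight dispersion; the cited procedures add measurement-error
    # and cosmological corrections that are omitted in this sketch.
    sigma_los = v.std(ddof=1)
    print(f"<v> = {v.mean():.0f} km/s, sigma_los = {sigma_los:.0f} km/s")
    ```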

  16. Application GIS on university planning: building a spatial database aided spatial decision

    NASA Astrophysics Data System (ADS)

    Miao, Lei; Wu, Xiaofang; Wang, Kun; Nong, Yu

    2007-06-01

    As universities develop and grow in size, many kinds of resources urgently need effective management. A spatial database is the right tool to support administrators' spatial decision making, and it prepares the ground for a digital campus by integrating the existing OMS. Campus planning is first examined in detail. Then, using South China Agricultural University as an example, we demonstrate how to build a geographic database of campus buildings and housing to support university administrators' spatial decisions.

  17. Development of a station based climate database for SWAT and APEX assessments in the U.S.

    USDA-ARS?s Scientific Manuscript database

    Water quality simulation models such as the Soil and Water Assessment Tool (SWAT) and Agricultural Policy EXtender (APEX) are widely used in the U.S. These models require large amounts of spatial and tabular data to simulate the natural world. Accurate and seamless daily climatic data are critical...

  18. Maintaining Multimedia Data in a Geospatial Database

    DTIC Science & Technology

    2012-09-01

    A comparative look at PostgreSQL and MySQL as spatial databases was offered, benchmarking how each database performed as result sets grew from zero to 100,000 records and which excelled given multiple query conditions.

  19. The research and development of water resources management information system based on ArcGIS

    NASA Astrophysics Data System (ADS)

    Cui, Weiqun; Gao, Xiaoli; Li, Yuzhi; Cui, Zhencai

    Because water resources management involves large amounts of data with complex types and formats, we built a water resources calculation model and developed a water resources management information system based on the ArcGIS and Visual Studio .NET development platforms. The system integrates spatial data and attribute data organically and manages them uniformly. It can analyze spatial data, query bidirectionally between maps and data, generate various charts and report forms automatically, link multimedia information, and manage the underlying database. It can therefore provide comprehensive spatial information services for the study, management, and decision support of water resources, regional geology, the eco-environment, and related fields.

  20. geophylobuilder 1.0: an arcgis extension for creating 'geophylogenies'.

    PubMed

    Kidd, David M; Liu, Xianhua

    2008-01-01

    Evolution is inherently a spatiotemporal process; however, despite this, phylogenetic and geographical data and models remain largely isolated from one another. Geographical information systems provide a ready-made spatial modelling, analysis and dissemination environment within which phylogenetic models can be explicitly linked with their associated spatial data and subsequently integrated with other georeferenced data sets describing the biotic and abiotic environment. geophylobuilder 1.0 is an extension for the arcgis geographical information system that builds a 'geophylogenetic' data model from a phylogenetic tree and associated geographical data. Geophylogenetic database objects can subsequently be queried, spatially analysed and visualized in both 2D and 3D within a geographical information system. © 2007 The Authors.

  1. Geodata Modeling and Query in Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Adam, Nabil

    1996-01-01

    Geographic information systems (GIS) deal with collecting, modeling, managing, analyzing, and integrating spatial (locational) and non-spatial (attribute) data required for geographic applications. Examples of spatial data are digital maps, administrative boundaries, and road networks; examples of non-spatial data are census counts, land elevations, and soil characteristics. GIS shares common areas with a number of other disciplines such as computer-aided design, computer cartography, database management, and remote sensing. None of these disciplines, however, can by itself fully meet the requirements of a GIS application. Examples of such requirements include: the ability to use locational data to produce high-quality plots, perform complex operations such as network analysis, enable spatial searching and overlay operations, support spatial analysis and modeling, and provide data management functions such as efficient storage, retrieval, and modification of large datasets; independence, integrity, and security of data; and concurrent access by multiple users. It is to the data management issues that we devote our discussion in this monograph. Traditionally, database management technology has been developed for business applications. Such applications require, among other things, capturing the data requirements of high-level business functions and developing machine-level implementations; supporting multiple views of data while providing integration that minimizes redundancy and maintains data integrity and security; providing a high-level language for data definition and manipulation; allowing concurrent access by multiple users; and processing user transactions efficiently. The demands on database management systems have been for speed, reliability, efficiency, cost effectiveness, and user-friendliness. Significant progress has been made in all of these areas over the last two decades, to the point that many generalized database platforms are now available for developing data-intensive applications that run in real time. While continuous improvement is still being made at a fast and competitive pace, new application areas such as computer-aided design, image processing, VLSI design, and GIS have been identified by many as the next generation of database applications. These new application areas pose serious challenges to the currently available database technology. At the core of these challenges is the nature of the data that are manipulated. In traditional database applications, the database objects do not have any spatial dimension and, as such, can be thought of as point data in a multi-dimensional space. For example, each instance of an entity EMPLOYEE will have a unique value for every attribute such as employee id, employee name, employee address and so on. Thus, every EMPLOYEE instance can be thought of as a point in a multi-dimensional space where each dimension is represented by an attribute. Furthermore, all operations on such data are one-dimensional. Thus, users may retrieve all entities satisfying one or more constraints. Examples of such constraints include employees with addresses in a certain area code, or salaries within a certain range. Even though constraints can be specified on multiple attributes (dimensions), the search for such data is essentially orthogonal across these dimensions.
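
    The contrast drawn above between point-like attribute data and spatially extended objects can be made concrete with a small illustrative sketch; the records and geometries below are invented, and shapely is used here purely as an arbitrary illustrative choice.

    ```python
    # Invented records and geometries; shapely is only used for illustration.
    from shapely.geometry import Point, Polygon

    employees = [
        {"id": 1, "name": "Ada", "salary": 72000, "area_code": "415"},
        {"id": 2, "name": "Lin", "salary": 54000, "area_code": "510"},
    ]

    # Traditional query: each record is a point in attribute space and the search
    # is a conjunction of orthogonal, one-dimensional constraints.
    hits = [e for e in employees
            if 50000 <= e["salary"] <= 60000 and e["area_code"] == "510"]

    # Spatial query: the objects have extent, and the predicate (containment,
    # overlap, adjacency) is inherently two-dimensional.
    parcel = Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])
    well = Point(3, 4)
    print(hits, parcel.contains(well))
    ```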

  2. CELL5M: A geospatial database of agricultural indicators for Africa South of the Sahara.

    PubMed

    Koo, Jawoo; Cox, Cindy M; Bacou, Melanie; Azzarri, Carlo; Guo, Zhe; Wood-Sichra, Ulrike; Gong, Queenie; You, Liangzhi

    2016-01-01

    Recent progress in large-scale georeferenced data collection is widening opportunities for combining multi-disciplinary datasets from biophysical to socioeconomic domains, advancing our analytical and modeling capacity. Granular spatial datasets provide critical information necessary for decision makers to identify target areas, assess baseline conditions, prioritize investment options, set goals and targets and monitor impacts. However, key challenges in reconciling data across themes, scales and borders restrict our capacity to produce global and regional maps and time series. This paper provides an overview of the structure and coverage of CELL5M, an open-access database of geospatial indicators at 5 arc-minute grid resolution, and introduces a range of analytical applications and use cases. CELL5M covers a wide set of agriculture-relevant domains for all countries in Africa South of the Sahara and supports our understanding of the multi-dimensional spatial variability inherent in farming landscapes throughout the region.

  3. Medical Image Databases

    PubMed Central

    Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James

    1997-01-01

    Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on the design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338

  4. A reference dataset for deformable image registration spatial accuracy evaluation using the COPDgene study archive

    NASA Astrophysics Data System (ADS)

    Castillo, Richard; Castillo, Edward; Fuentes, David; Ahmad, Moiz; Wood, Abbie M.; Ludwig, Michelle S.; Guerrero, Thomas

    2013-05-01

    Landmark point-pairs provide a strategy to assess deformable image registration (DIR) accuracy in terms of the spatial registration of the underlying anatomy depicted in medical images. In this study, we propose to augment a publicly available database (www.dir-lab.com) of medical images with large sets of manually identified anatomic feature pairs between breath-hold computed tomography (BH-CT) images for DIR spatial accuracy evaluation. Ten BH-CT image pairs were randomly selected from the COPDgene study cases. Each patient had received CT imaging of the entire thorax in the supine position at one-fourth dose normal expiration and maximum effort full dose inspiration. Using dedicated in-house software, an imaging expert manually identified large sets of anatomic feature pairs between images. Estimates of inter- and intra-observer spatial variation in feature localization were determined by repeat measurements of multiple observers over subsets of randomly selected features. In total, 7298 anatomic landmark features were manually paired between the 10 sets of images. The number of feature pairs per case ranged from 447 to 1172. Average 3D Euclidean landmark displacements varied substantially among cases, ranging from 12.29 (SD: 6.39) to 30.90 (SD: 14.05) mm. Repeat registration of uniformly sampled subsets of 150 landmarks for each case yielded estimates of observer localization error, which ranged on average from 0.58 (SD: 0.87) to 1.06 (SD: 2.38) mm across cases. The additions to the online web database (www.dir-lab.com) described in this work will broaden the applicability of the reference data, providing a freely available common dataset for targeted critical evaluation of DIR spatial accuracy performance in multiple clinical settings. Estimates of observer variance in feature localization suggest consistent spatial accuracy for all observers across both four-dimensional CT and COPDgene patient cohorts.
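
    The displacement statistics quoted above reduce to the mean and standard deviation of 3D Euclidean distances between paired landmarks; a minimal numpy sketch with placeholder coordinates (assumed to be already scaled to millimetres) is shown below.

    ```python
    import numpy as np

    # Placeholder landmark coordinates (N x 3, in mm) from the expiration and
    # inspiration scans of one case; real cases have hundreds of pairs.
    expiration = np.array([[10.0, 22.5, 31.0], [45.2, 60.1, 12.3], [78.4, 15.0, 50.2]])
    inspiration = np.array([[12.0, 20.0, 28.5], [40.0, 63.0, 10.0], [70.1, 18.2, 44.9]])

    # Per-pair 3D Euclidean displacement, then the per-case summary statistics.
    disp = np.linalg.norm(inspiration - expiration, axis=1)
    print(f"mean {disp.mean():.2f} mm, SD {disp.std(ddof=1):.2f} mm")
    ```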

  5. GIS-project: geodynamic globe for global monitoring of geological processes

    NASA Astrophysics Data System (ADS)

    Ryakhovsky, V.; Rundquist, D.; Gatinsky, Yu.; Chesalova, E.

    2003-04-01

    A multilayer geodynamic globe at the scale 1:10,000,000 was created at the end of the nineties in the GIS Center of the Vernadsky Museum. A dedicated software and hardware complex was developed for its visualization, together with a set of multi-purpose, object-oriented databases. The globe includes separate thematic covers represented by digital sets of spatial geological, geochemical, and geophysical information (maps, schemes, profiles, stratigraphic columns, structured databases, etc.). At present the largest databases included in the globe are those holding petrochemical and isotopic data on magmatic rocks of the World Ocean and data on large and superlarge mineral deposits. Software by the Environmental Systems Research Institute (ESRI), USA, together with the ArcScan vectorizer, was used for digitizing the covers and adapting the databases (ARC/INFO 7.0, 8.0). All layers of the geoinformation project were obtained by scanning separate objects and transferring them to real geographic coordinates in an equidistant conic projection. The covers were then projected onto plane geographic coordinates in degrees. Attribute databases were formed for each thematic layer, and in the last stage all covers were combined into a single information system. Separate digital covers represent mathematical descriptions of geological objects and the relations between them, such as the Earth's altimetry, active fault systems, and seismicity. Principles of cartographic generalization were taken into consideration during cover compilation, with projection and coordinate systems matched precisely to the given scale. The globe allows us to carry out, in an interactive regime, the formation of mutually coordinated object-oriented databases and the thematic covers directly connected with them. They can cover the whole Earth and near-Earth space, as well as the best-known parts of the divergent and convergent boundaries of the lithospheric plates. Such covers and time series reflect in diagram form the full combination and dynamics of data on geological structure, geophysical fields, seismicity, geomagnetism, composition of rock complexes, and metallogeny of different areas of the Earth's surface. They make it possible to scale, detail, and develop 3D spatial visualization. The information filling the covers can be replenished with new data, both in the existing and in newly formed databases. Integrated analysis of the data allows us to refine our ideas about regularities in the development of lithosphere and mantle inhomogeneities using original technologies. It also enables us to work out 3D digital models of the geodynamic development of tectonic zones at convergent and divergent plate boundaries, with the purpose of integrated monitoring of mineral resources and establishing correlations between seismicity, magmatic activity, and metallogeny in space-time coordinates. The resulting multifold geoinformation system makes it possible to perform an integrated analysis of geoinformation flows in an interactive regime and, in particular, to establish regularities in the space-time distribution and dynamics of the main structural units of the lithosphere, as well as to illuminate the connection between stages of their development and epochs of large and superlarge mineral deposit formation. We are now trying to use the system to predict large oil and gas concentrations in the main sedimentary basins.
The work was supported by RFBR (grants 93-07-14680, 96-07-89499, 99-07-90030, 00-15-98535, 02-07-90140) and MTC.

  6. Canopies to Continents: What spatial scales are needed to represent landcover distributions in earth system models?

    NASA Astrophysics Data System (ADS)

    Guenther, A. B.; Duhl, T.

    2011-12-01

    Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than typically used in these models. Satellite observations have played a major role in providing high resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale is dependent on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.

  7. Quantify spatial relations to discover handwritten graphical symbols

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Mouchère, Harold; Viard-Gaudin, Christian

    2012-01-01

    To model a handwritten graphical language, spatial relations describe how the strokes are positioned in the 2-dimensional space. Most existing handwriting recognition systems make use of some predefined spatial relations. However, considering a complex graphical language, it is hard to express all the spatial relations manually. Another possibility is to use a clustering technique to discover the spatial relations. In this paper, we discuss how to create a relational graph between strokes (nodes) labeled with graphemes in a graphical language. We then vectorize the spatial relations (edges) for clustering and quantization. As the targeted application, we extract the repetitive sub-graphs (graphical symbols) composed of graphemes and learned spatial relations. On two handwriting databases, a simple mathematical expression database and a complex flowchart database, the unsupervised spatial relations outperform the predefined spatial relations. In addition, we visualize the frequent patterns on two text-lines containing Chinese characters.
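
    The general idea of vectorising and quantising spatial relations can be sketched as follows; the feature choice (offset between stroke bounding-box centres) and the number of clusters are assumptions made for illustration and are not the paper's exact formulation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Each stroke is reduced to a bounding-box centre (invented values); a
    # relation between two strokes is encoded as the offset vector between them.
    centres = np.array([[0.0, 0.0], [1.0, 0.1], [0.9, 1.2], [2.1, 0.0]])
    pairs = [(i, j) for i in range(len(centres)) for j in range(len(centres)) if i != j]
    relations = np.array([centres[j] - centres[i] for i, j in pairs])

    # Unsupervised quantisation of the relation vectors into a small codebook of
    # spatial-relation labels, in place of predefined relations.
    codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(relations)
    print(codebook.predict(relations))
    ```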

  8. Bio-optical data integration based on a 4 D database system approach

    NASA Astrophysics Data System (ADS)

    Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.

    2015-04-01

    Bio-optical characterization of water bodies requires spatio-temporal data on Inherent Optical Properties and Apparent Optical Properties, which allow the underwater light field to be understood with the aim of developing models for monitoring water quality. Measurements are taken to represent optical properties along a water column, and the spectral data must then be related to depth. However, the spatial positions of the measurements may differ because the collecting instruments vary. In addition, the records may not refer to the same wavelengths. A further difficulty is that distinct instruments store data in different formats. A data integration approach is therefore needed to make these large, multi-source data sets suitable for analysis; it then becomes possible to evaluate semi-empirical models, even automatically, preceded by preliminary quality-control tasks. In this work a solution for this scenario is presented, based on a spatial (geographic) database approach and the adoption of an object-relational Database Management System (DBMS), chosen for its ability to represent all data collected in the field together with data obtained by laboratory analysis and remote sensing images acquired at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates (planimetric and depth) and the time at which each datum was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial/geospatial data. A prototype was developed that provides the main tools an analyst needs to prepare the data sets for analysis.
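
    A minimal sketch of the 4D schema idea, assuming PostgreSQL with PostGIS and access from Python via psycopg2, is given below; the table and column names are invented and do not reproduce the authors' actual prototype.

    ```python
    import psycopg2

    # Invented table/column names; assumes a PostGIS-enabled database.
    ddl = """
    CREATE TABLE IF NOT EXISTS radiometry (
        id          serial PRIMARY KEY,
        station     text,
        measured_at timestamptz,                -- temporal coordinate
        geom        geometry(PointZ, 4326),     -- x, y and depth stored as Z
        wavelength  double precision,
        value       double precision
    );
    """
    insert = """
    INSERT INTO radiometry (station, measured_at, geom, wavelength, value)
    VALUES (%s, %s, ST_SetSRID(ST_MakePoint(%s, %s, %s), 4326), %s, %s);
    """

    conn = psycopg2.connect("dbname=biooptics user=analyst")   # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(ddl)
        # station, time, lon, lat, depth (m, negative down), wavelength (nm), value
        cur.execute(insert, ("P1", "2014-10-13T14:20:00Z", -51.05, -20.78, -2.5, 443.0, 0.021))
    ```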

  9. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method. -Authors
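
    One simple way to emulate the degradation step (the study's exact resampling method is not detailed here) is to block-average the scanned image to coarser pixel sizes, as in the numpy sketch below with a synthetic 1-metre image.

    ```python
    import numpy as np

    def degrade(image: np.ndarray, factor: int) -> np.ndarray:
        """Block-average a 2D image so each output pixel covers factor x factor
        input pixels (e.g. 1 m -> 3 m pixels when factor=3)."""
        h, w = image.shape
        h, w = h - h % factor, w - w % factor            # crop to a multiple of factor
        blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    fine = np.random.rand(300, 300)        # synthetic 1-m "scanned photograph"
    coarse = degrade(fine, 3)              # 3-m counterpart
    print(fine.shape, "->", coarse.shape)  # (300, 300) -> (100, 100)
    ```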

  10. Advanced techniques for the storage and use of very large, heterogeneous spatial databases. The representation of geographic knowledge: Toward a universal framework. [relations (mathematics)

    NASA Technical Reports Server (NTRS)

    Peuquet, Donna J.

    1987-01-01

    A new approach to building geographic data models that is based on the fundamental characteristics of the data is presented. An overall theoretical framework for representing geographic data is proposed. An example of utilizing this framework in a Geographic Information System (GIS) context by combining artificial intelligence techniques with recent developments in spatial data processing techniques is given. Elements of data representation discussed include hierarchical structure, separation of locational and conceptual views, and the ability to store knowledge at variable levels of completeness and precision.

  11. Using a spatial and tabular database to generate statistics from terrain and spectral data for soil surveys

    USGS Publications Warehouse

    Horvath, E.A.; Fosnight, E.A.; Klingebiel, A.A.; Moore, D.G.; Stone, J.E.; Reybold, W.U.; Petersen, G.W.

    1987-01-01

    A methodology has been developed to create a spatial database by referencing digital elevation, Landsat multispectral scanner data, and digitized soil premap delineations of a number of adjacent 7.5-min quadrangle areas to a 30-m Universal Transverse Mercator projection. Slope and aspect transformations are calculated from elevation data and grouped according to field office specifications. An unsupervised classification is performed on a brightness and greenness transformation of the spectral data. The resulting spectral, slope, and aspect maps of each of the 7.5-min quadrangle areas are then plotted and submitted to the field office to be incorporated into the soil premapping stages of a soil survey. A tabular database is created from spatial data by generating descriptive statistics for each data layer within each soil premap delineation. The tabular database is then entered into a database management system to be accessed by the field office personnel during the soil survey and to be used for subsequent resource management decisions. Large amounts of data are collected and archived during resource inventories for public land management. Often these data are stored as stacks of maps or folders in a file system in someone's office, with the maps in a variety of formats, scales, and with various standards of accuracy depending on their purpose. This system of information storage and retrieval is cumbersome at best when several categories of information are needed simultaneously for analysis or as input to resource management models. Computers now provide the resource scientist with the opportunity to design increasingly complex models that require even more categories of resource-related information, thus compounding the problem. Recently there has been much emphasis on the use of geographic information systems (GIS) as an alternative method for map data archives and as a resource management tool. Considerable effort has been devoted to the generation of tabular databases, such as the U.S. Department of Agriculture's SCS/S015 (Soil Survey Staff, 1983), to archive the large amounts of information that are collected in conjunction with mapping of natural resources in an easily retrievable manner. During the past 4 years the U.S. Geological Survey's EROS Data Center, in a cooperative effort with the Bureau of Land Management (BLM) and the Soil Conservation Service (SCS), developed a procedure that uses spatial and tabular databases to generate elevation, slope, aspect, and spectral map products that can be used during soil premapping. The procedure results in tabular data, residing in a database management system, that are indexed to the final soil delineations and help quantify soil map unit composition. The procedure was developed and tested on soil surveys on over 600 000 ha in Wyoming, Nevada, and Idaho. A transfer of technology from the EROS Data Center to the BLM will enable the Denver BLM Service Center to use this procedure in soil survey operations on BLM lands.
    Also underway is a cooperative effort between the EROS Data Center and SCS to define and evaluate maps that can be produced as derivatives of digital elevation data for 7.5-min quadrangle areas, such as those used during the premapping stage of the soil surveys mentioned above, the idea being to make such products routinely available. The procedure emphasizes the applications of digital elevation and spectral data to order-three soil surveys on rangelands, and will: (1) incorporate digital terrain and spectral data into a spatial database for soil surveys; (2) provide hardcopy products (that can be generated from digital elevation model and spectral data) that are useful during the soil pre-mapping process; (3) incorporate soil premaps into a spatial database that can be accessed during the soil survey process along with terrain and spectral data; and (4) summarize useful quantitative information for soil mapping and for making interpretations for resource management.
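
    The zonal-summary step, generating descriptive statistics for a data layer within each soil premap delineation, can be sketched as below, assuming the delineations have been rasterised onto the same 30-m grid as the terrain layers; the array values are invented.

    ```python
    import numpy as np

    # Invented grids sharing the 30-m UTM registration described above.
    slope = np.array([[2.0, 5.0, 8.0],
                      [3.0, 7.0, 12.0],
                      [4.0, 9.0, 15.0]])          # per-cell slope values
    delineation = np.array([[1, 1, 2],
                            [1, 2, 2],
                            [3, 3, 2]])           # soil premap delineation IDs

    # Descriptive statistics per delineation, i.e. one row of the tabular
    # database for each map unit and data layer.
    for unit in np.unique(delineation):
        vals = slope[delineation == unit]
        print(unit, {"n": vals.size, "mean": round(vals.mean(), 2),
                     "min": vals.min(), "max": vals.max()})
    ```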

  12. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    NASA Astrophysics Data System (ADS)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfall. The specific climatic situation caused considerable damage, with serious impacts on settlements and infrastructure. Knowledge of the spatial distribution, processes, and characteristics of landslides is important for evaluating the potential risk posed by mass movements in these areas. In the frame of two projects, about 400 landslides at the Franconian Alb were mapped and detailed data sets were compiled during the years 2011 to 2014. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (the Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide secure data storage and data management, as well as support for statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which makes it possible to store spatial and geographic objects and to connect to several GIS applications, such as GRASS GIS, SAGA GIS, QGIS and the geospatial library GDAL (Obe and Hsu 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured through GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages, and it is possible to work directly with the entire (spatial) content of the database in R. The inventory of the database includes, amongst others, information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, spatial objects are stored that represent the components of a landslide, in particular the scarps and the accumulation areas. In addition, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating slope gradients or aspects of landslide areas, the creation of multiple overlaying sections for the comparison of slopes, and distances to infrastructure or to the next receiving drainage. Further queries retrieve information on landslide magnitudes, distribution and clustering, as well as potential correlations with geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public or private use, because it is independent of any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S. (2011): PostGIS in Action. Manning Publications, Stamford, 492 pp.
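
    Queries such as the distance from each landslide to the nearest receiving drainage reduce to standard PostGIS operations; the sketch below, issued from Python with psycopg2, assumes tables named landslides and waterways in a projected coordinate system (so distances come out in metres) and is not taken from the authors' database.

    ```python
    import psycopg2

    # Table and column names are assumptions; <-> is PostGIS's indexed
    # nearest-neighbour ordering operator.
    query = """
    SELECT l.id,
           (SELECT ST_Distance(l.geom, w.geom)
              FROM waterways w
             ORDER BY l.geom <-> w.geom
             LIMIT 1) AS dist_to_drainage
      FROM landslides l;
    """

    with psycopg2.connect("dbname=landslides user=analyst") as conn:   # placeholder DSN
        with conn.cursor() as cur:
            cur.execute(query)
            for landslide_id, distance in cur.fetchall():
                print(landslide_id, round(distance, 1), "m")
    ```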

  13. Assessing SaTScan ability to detect space-time clusters in wildfires

    NASA Astrophysics Data System (ADS)

    Costa, Ricardo; Pereira, Mário; Caramelo, Liliana; Vega Orozco, Carmen; Kanevski, Mikhail

    2013-04-01

    In addition to classical cluster analysis techniques able to analyse spatial and temporal data, the SaTScan software analyses space-time data using spatial, temporal, or space-time scan statistics. The software requires the spatial coordinates of each fire, but since the Portuguese Rural Fire Database (PRFD) (Pereira et al., 2011) records the location of each fire only as the parish where the ignition occurred, the coordinates of the parish centroid were used as the fire coordinates. Moreover, the northern region is generally characterized by a large number of small parishes, while the south comprises much larger parishes. The objectives of this study are: (i) to test the ability of SaTScan to detect the correct space-time clusters with respect to spatial and temporal location and size; and (ii) to evaluate the effect of parish size and of aggregating all fires that occurred in a parish into a single point. Results obtained with a synthetic database, in which clusters were artificially created with different densities, in different regions of the country, and with different sizes and durations, allow us to assess the ability of SaTScan to correctly identify the clusters (location, shape, and spatial and temporal dimensions) and to objectively evaluate the influence of parish size and of the windows used in space-time detection. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Programme through FUME (contract number 243888).
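
    Because the PRFD records only the parish of ignition, each fire must be assigned its parish-centroid coordinates before scan statistics can be applied; a geopandas/pandas sketch is shown below, with file and column names invented and SaTScan's own input formats not reproduced.

    ```python
    import geopandas as gpd
    import pandas as pd

    # Placeholder file and column names.
    parishes = gpd.read_file("parishes.shp")      # parish polygons
    fires = pd.read_csv("prfd_fires.csv")         # one row per fire, with a parish code

    # Represent every fire by the centroid of its parish, as described above.
    parishes["cx"] = parishes.geometry.centroid.x
    parishes["cy"] = parishes.geometry.centroid.y
    coords = fires.merge(parishes[["PARISH_CODE", "cx", "cy"]],
                         on="PARISH_CODE", how="left")

    # Export the assigned coordinates for use in the scan-statistic software.
    coords[["FIRE_ID", "cx", "cy"]].to_csv("fires_with_centroids.csv", index=False)
    ```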

  14. Application of the automated spatial surveillance program to birth defects surveillance data.

    PubMed

    Gardner, Bennett R; Strickland, Matthew J; Correa, Adolfo

    2007-07-01

    Although many birth defects surveillance programs incorporate georeferenced records into their databases, practical methods for routine spatial surveillance are lacking. We present a macroprogram written for the software package R designed for routine exploratory spatial analysis of birth defects data, the Automated Spatial Surveillance Program (ASSP), and present an application of this program using spina bifida prevalence data for metropolitan Atlanta. Birth defects surveillance data were collected by the Metropolitan Atlanta Congenital Defects Program. We generated ASSP maps for two groups of years that correspond roughly to the periods before (1994-1998) and after (1999-2002) folic acid fortification of flour. ASSP maps display census tract-specific spina bifida prevalence, smoothed prevalence contours, and locations of statistically elevated prevalence. We used these maps to identify areas of elevated prevalence for spina bifida. We identified a large area of potential concern in the years following fortification of grains and cereals with folic acid. This area overlapped census tracts containing large numbers of Hispanic residents. The potential utility of ASSP for spatial disease monitoring was demonstrated by the identification of areas of high prevalence of spina bifida and may warrant further study and monitoring. We intend to further develop ASSP so that it becomes practical for routine spatial monitoring of birth defects. (c) 2007 Wiley-Liss, Inc.
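
    ASSP itself is implemented in R and also produces smoothed prevalence surfaces and maps; the Python sketch below illustrates only the underlying tract-level screening idea (prevalence compared against the metro-wide rate with a one-sided Poisson test), using invented counts and an illustrative 0.05 threshold.

    ```python
    import pandas as pd
    from scipy.stats import poisson

    # Invented census-tract counts; smoothing and mapping are omitted here.
    tracts = pd.DataFrame({"tract": ["A", "B", "C"],
                           "cases": [1, 4, 0],
                           "births": [1200, 900, 1500]})

    overall_rate = tracts["cases"].sum() / tracts["births"].sum()
    expected = overall_rate * tracts["births"]

    # One-sided Poisson test: probability of observing at least this many cases
    # if the tract followed the metro-wide rate.
    tracts["p_value"] = poisson.sf(tracts["cases"] - 1, expected)
    tracts["elevated"] = tracts["p_value"] < 0.05
    print(tracts)
    ```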

  15. RiceAtlas, a spatial database of global rice calendars and production.

    PubMed

    Laborte, Alice G; Gutierrez, Mary Anne; Balanza, Jane Girly; Saito, Kazuki; Zwart, Sander J; Boschetti, Mirco; Murty, M V R; Villano, Lorena; Aunario, Jorrel Khalil; Reinke, Russell; Koo, Jawoo; Hijmans, Robert J; Nelson, Andrew

    2017-05-30

    Knowing where, when, and how much rice is planted and harvested is crucial information for understanding the effects of policy, trade, and global and technological change on food security. We developed RiceAtlas, a spatial database on the seasonal distribution of the world's rice production. It consists of data on rice planting and harvesting dates by growing season and estimates of monthly production for all rice-producing countries. Sources used for planting and harvesting dates include global and regional databases, national publications, online reports, and expert knowledge. Monthly production data were estimated based on annual or seasonal production statistics, and planting and harvesting dates. RiceAtlas has 2,725 spatial units. Compared with available global crop calendars, RiceAtlas is nearly ten times more spatially detailed and has nearly seven times more spatial units, with at least two seasons of calendar data, making RiceAtlas the most comprehensive and detailed spatial database on rice calendar and production.

  16. Enabling heterogenous multi-scale database for emergency service functions through geoinformation technologies

    NASA Astrophysics Data System (ADS)

    Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in raster and vector form, linked with non-spatial information. Comprehensive data are required to handle emergency situations in their different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic, and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle a particular disaster situation such as a flood, cyclone, forest fire, earthquake, landslide, or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of the required resolution for visualisation and analysis in disaster management. The database must therefore be interconnected and comprehensive to meet the requirements of Emergency Management. Such an integrated, comprehensive and structured database with appropriate information is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as data availability, sharing policies, compatible geospatial standards, and data interoperability. Therefore, to facilitate using, sharing, and integrating spatial data, standards need to be defined for building emergency database systems. These cover aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) a database organisation mechanism covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment, as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale and multi-source data for effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.

  17. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project.

    PubMed

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, 
Ingo; Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette; Kone, Mouhamadou; Kőrösi, Ádám; Krauss, Jochen; Kumar, Ajith; Kumar, Raman; Kurz, David J; Kutt, Alex S; Lachat, Thibault; Lantschner, Victoria; Lara, Francisco; Lasky, Jesse R; Latta, Steven C; Laurance, William F; Lavelle, Patrick; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Lehouck, Valérie; Lencinas, María V; Lentini, Pia E; Letcher, Susan G; Li, Qi; Litchwark, Simon A; Littlewood, Nick A; Liu, Yunhui; Lo-Man-Hung, Nancy; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Luskin, Matthew S; MacSwiney G, M Cristina; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Malone, Louise A; Malonza, Patrick K; Malumbres-Olarte, Jagoba; Mandujano, Salvador; Måren, Inger E; Marin-Spiotta, Erika; Marsh, Charles J; Marshall, E J P; Martínez, Eliana; Martínez Pastur, Guillermo; Moreno Mateos, David; Mayfield, Margaret M; Mazimpaka, Vicente; McCarthy, Jennifer L; McCarthy, Kyle P; McFrederick, Quinn S; McNamara, Sean; Medina, Nagore G; Medina, Rafael; Mena, Jose L; Mico, Estefania; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Miranda-Esquivel, Daniel R; Moir, Melinda L; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Mudri-Stojnic, Sonja; Munira, A Nur; Muoñz-Alonso, Antonio; Munyekenye, B F; Naidoo, Robin; Naithani, A; Nakagawa, Michiko; Nakamura, Akihiro; Nakashima, Yoshihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Navarro-Iriarte, Luis; Ndang'ang'a, Paul K; Neuschulz, Eike L; Ngai, Jacqueline T; Nicolas, Violaine; Nilsson, Sven G; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Norton, David A; Nöske, Nicole M; Nowakowski, A Justin; Numa, Catherine; O'Dea, Niall; O'Farrell, Patrick J; Oduro, William; Oertli, Sabine; Ofori-Boateng, Caleb; Oke, Christopher Omamoke; Oostra, Vicencio; Osgathorpe, Lynne M; Otavo, Samuel Eduardo; Page, Navendu V; Paritsis, Juan; Parra-H, Alejandro; Parry, Luke; Pe'er, Guy; Pearman, Peter B; Pelegrin, Nicolás; Pélissier, Raphaël; Peres, Carlos A; Peri, Pablo L; Persson, Anna S; Petanidou, Theodora; Peters, Marcell K; Pethiyagoda, Rohan S; Phalan, Ben; Philips, T Keith; Pillsbury, Finn C; Pincheira-Ulbrich, Jimmy; Pineda, Eduardo; Pino, Joan; Pizarro-Araya, Jaime; Plumptre, A J; Poggio, Santiago L; Politi, Natalia; Pons, Pere; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Rader, Romina; Ramesh, B R; Ramirez-Pinilla, Martha P; Ranganathan, Jai; Rasmussen, Claus; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, 
Yana T; Rey Benayas, José M; Rey-Velasco, Juan Carlos; Reynolds, Chevonne; Ribeiro, Danilo Bandini; Richards, Miriam H; Richardson, Barbara A; Richardson, Michael J; Ríos, Rodrigo Macip; Robinson, Richard; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rös, Matthias; Rosselli, Loreta; Rossiter, Stephen J; Roth, Dana S; Roulston, T'ai H; Rousseau, Laurent; Rubio, André V; Ruel, Jean-Claude; Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Sam, Katerina; Samnegård, Ulrika; Santana, Joana; Santos, Xavier; Savage, Jade; Schellhorn, Nancy A; Schilthuizen, Menno; Schmiedel, Ute; Schmitt, Christine B; Schon, Nicole L; Schüepp, Christof; Schumann, Katharina; Schweiger, Oliver; Scott, Dawn M; Scott, Kenneth A; Sedlock, Jodi L; Seefeldt, Steven S; Shahabuddin, Ghazala; Shannon, Graeme; Sheil, Douglas; Sheldon, Frederick H; Shochat, Eyal; Siebert, Stefan J; Silva, Fernando A B; Simonetti, Javier A; Slade, Eleanor M; Smith, Jo; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Soto Quiroga, Grimaldo; St-Laurent, Martin-Hugues; Starzomski, Brian M; Stefanescu, Constanti; Steffan-Dewenter, Ingolf; Stouffer, Philip C; Stout, Jane C; Strauch, Ayron M; Struebig, Matthew J; Su, Zhimin; Suarez-Rubio, Marcela; Sugiura, Shinji; Summerville, Keith S; Sung, Yik-Hei; Sutrisno, Hari; Svenning, Jens-Christian; Teder, Tiit; Threlfall, Caragh G; Tiitsaar, Anu; Todd, Jacqui H; Tonietto, Rebecca K; Torre, Ignasi; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Uehara-Prado, Marcio; Urbina-Cardona, Nicolas; Vallan, Denis; Vanbergen, Adam J; Vasconcelos, Heraldo L; Vassilev, Kiril; Verboven, Hans A F; Verdasca, Maria João; Verdú, José R; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Virgilio, Massimiliano; Vu, Lien Van; Waite, Edward M; Walker, Tony R; Wang, Hua-Feng; Wang, Yanping; Watling, James I; Weller, Britta; Wells, Konstans; Westphal, Catrin; Wiafe, Edward D; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Wolters, Volkmar; Woodcock, Ben A; Wu, Jihua; Wunderle, Joseph M; Yamaura, Yuichi; Yoshikura, Satoko; Yu, Douglas W; Zaitsev, Andrey S; Zeidler, Juliane; Zou, Fasheng; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy

    2017-01-01

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make freely available this 2016 release of the database, containing more than 3.2 million records sampled at over 26,000 locations and representing over 47,000 species. We outline how the database can help in answering a range of questions in ecology and conservation biology. To our knowledge, this is the largest and most geographically and taxonomically representative database of spatial comparisons of biodiversity that has been collated to date; it will be useful to researchers and international efforts wishing to model and understand the global status of biodiversity.

  18. Towards a Selenographic Information System: Apollo 15 Mission Digitization

    NASA Astrophysics Data System (ADS)

    Votava, J. E.; Petro, N. E.

    2012-12-01

    The Apollo missions represent some of the most technically complex and extensively documented explorations ever endeavored by mankind. The surface experiments performed and the lunar samples collected in-situ have helped form our understanding of the Moon's geologic history and the history of our Solar System. Unfortunately, a complication exists in the analysis and accessibility of these large volumes of lunar data and historical Apollo Era documents due to their multiple formats and disconnected web and print locations. Described here is a project to modernize, spatially reference, and link the lunar data into a comprehensive SELENOGRAPHIC INFORMATION SYSTEM, starting with the Apollo 15 mission. Like its terrestrial counterparts, Geographic Information System (GIS) programs, such as ArcGIS, allow for easy integration, access, analysis, and display of large amounts of spatially related data. Documentation in this new database includes surface photographs, panoramas, samples and their laboratory studies (major element and rare earth element weight percent), planned and actual vehicle traverses, and field notes. Using high-resolution (<0.25 m/pixel) images from the Lunar Reconnaissance Orbiter Camera (LROC), the rover (LRV) tracks and astronaut surface activities, along with field sketches from the Apollo 15 Preliminary Science Report (Swann, 1972), were digitized and mapped in ArcMap. Point features were created for each documented sample within the Lunar Sample Compendium (Meyer, 2010) and hyperlinked to the appropriate Compendium file (.PDF) at the stable archive site: http://curator.jsc.nasa.gov/lunar/compendium.cfm. Historical Apollo Era photographs and assembled panoramas were included as point features at each station and hyperlinked to the Apollo Lunar Surface Journal (ALSJ) online image library. The database has been set up to allow for the easy display of spatial variation of select attributes between samples. Attributes of interest that have data from the Compendium added directly into the database include age (Ga), mass, texture, major oxide elements (weight %), and Th and U (ppm). This project will produce an easily accessible and linked database that can offer technical and scientific information in its spatial context. While it is not possible, given the enormous amount of data and the limited time available, to enter or link every detail to its map layer, the links made here direct the user to rich, stable archive websites and web-based databases that are easy to navigate. While this project only created a product for the Apollo 15 mission, it is the model for spatially referencing the other Apollo missions. Such a comprehensive lunar surface-activities database, a Selenographic Information System, will likely prove invaluable for future lunar studies. References: Meyer, C. (2010), The lunar sample compendium, June 2012 to August 2012, http://curator.jsc.nasa.gov/lunar/compendium.cfm, Astromaterials Res. & Exploration Sci., NASA L. B. Johnson Space Cent., Houston, TX. Swann, G. A. (1972), Preliminary geologic investigation of the Apollo 15 landing site, in Apollo 15 Preliminary Science Report, [NASA SP-289], pp. 5-1 - 5-112, NASA Manned Spacecraft Cent., Washington, D.C.
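
    A minimal sketch (not the authors' ArcMap workflow) of the same idea in Python: build a point layer of lunar samples with attribute fields and a hyperlink to the stable Compendium archive cited above. The sample IDs, coordinates, and ages below are illustrative placeholders, and geopandas stands in for the Esri tools used in the project.

```python
import geopandas as gpd

# Hypothetical sample records; coordinates and ages are illustrative only.
records = {
    "sample_id": ["15415", "15555"],
    "x": [3.65, 3.62],          # placeholder map coordinates
    "y": [26.13, 26.09],
    "age_ga": [4.1, 3.3],       # illustrative ages (Ga)
}

gdf = gpd.GeoDataFrame(
    records,
    geometry=gpd.points_from_xy(records["x"], records["y"]),
    # A lunar coordinate reference system would be set here in practice.
)

# Hyperlink attribute pointing at the stable Compendium archive.
gdf["compendium_url"] = "http://curator.jsc.nasa.gov/lunar/compendium.cfm"

# Write to a format that ArcMap or any other GIS can display and hyperlink from.
gdf.to_file("apollo15_samples.geojson", driver="GeoJSON")
```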

  19. Assessing species distribution using Google Street View: a pilot study with the Pine Processionary Moth.

    PubMed

    Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre

    2013-01-01

    Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available from Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network relative to the sampling grid size and the spatial distribution of host trees relative to the road network, may also be determinant.
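
    A toy sketch of the comparison reported above (96% and 46% matching records): score presence/absence per grid cell from Street View and from the field survey, then compute the share of matching cells. The arrays are invented, not the study's data.

```python
import numpy as np

# 1 = PPM nests observed, 0 = absent; one value per sampling-grid cell (toy data).
street_view  = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
field_survey = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])

matching = np.mean(street_view == field_survey) * 100
print(f"Matching records: {matching:.0f}%")  # 80% for these toy data
```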

  20. Assessing Species Distribution Using Google Street View: A Pilot Study with the Pine Processionary Moth

    PubMed Central

    Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre

    2013-01-01

    Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available from Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network relative to the sampling grid size and the spatial distribution of host trees relative to the road network, may also be determinant. PMID:24130675

  1. The Amma-Sat Database

    NASA Astrophysics Data System (ADS)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    The African Monsoon Multidisciplinary Analysis project is a French initiative, which aims at identifying and analysing in detail the multidisciplinary and multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are: Atmospheric Dynamics, the Continental Water Cycle, Atmospheric Chemistry, Oceanic and Continental Surface Conditions. Satellites contribute to various objectives of the project both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, ...) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions, and to validate satellite data with in situ measurements or numerical simulations. The AMMA-SAT database's main goal is to offer easy access to satellite data to the AMMA scientific community. The database contains geophysical products estimated from operational or research algorithms and covering the different components of the AMMA project. Nevertheless, the choice has been made to group data by pertinent scales rather than by theme. For this purpose, five regions of interest were defined to extract the data: an area covering the Tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations. Within each of these regions satellite data are projected on a regular grid with a spatial resolution compatible with the spatial variability of the geophysical parameter. Data are stored in NetCDF files to facilitate their use. Satellite products can be selected using several spatial and temporal criteria and ordered through a web interface developed in PHP-MySQL. More common means of access are also available, such as direct FTP or NFS access for identified users. A Live Access Server allows quick visualization of the data. A meta-data catalogue based on the Directory Interchange Format manages the documentation of each satellite product. The database is currently under development, but some products are already available. The database will be complete by the end of 2005.
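
    As a rough illustration of how such gridded NetCDF products can be selected by spatial and temporal criteria on the user side, the sketch below builds a synthetic dataset with xarray and subsets a West African window; the variable and coordinate names are assumptions, not the actual AMMA-SAT schema.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-in for an AMMA-SAT gridded satellite product.
rng = np.random.default_rng(0)
ds = xr.Dataset(
    {"brightness_temperature": (("time", "lat", "lon"),
                                rng.uniform(220, 300, size=(30, 60, 70)))},
    coords={
        "time": pd.date_range("2004-06-01", periods=30),
        "lat": np.linspace(-10, 30, 60),
        "lon": np.linspace(-30, 30, 70),
    },
)

# Select a West Africa mesoscale window for the first ten days of June 2004.
subset = ds["brightness_temperature"].sel(
    lat=slice(5, 20), lon=slice(-20, 10), time=slice("2004-06-01", "2004-06-10")
)
print(subset.mean().item())
```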

  2. The extent of forest in dryland biomes

    Treesearch

    Jean-Francois Bastin; Nora Berrahmouni; Alan Grainger; Danae Maniatis; Danilo Mollicone; Rebecca Moore; Chiara Patriarca; Nicolas Picard; Ben Sparrow; Elena Maria Abraham; Kamel Aloui; Ayhan Atesoglu; Fabio Attore; Caglar Bassullu; Adia Bey; Monica Garzuglia; Luis G. García-Montero; Nikee Groot; Greg Guerin; Lars Laestadius; Andrew J. Lowe; Bako Mamane; Giulio Marchi; Paul Patterson; Marcelo Rezende; Stefano Ricci; Ignacio Salcedo; Alfonso Sanchez-Paus Diaz; Fred Stolle; Venera Surappaeva; Rene Castro

    2017-01-01

    Dryland biomes cover two-fifths of Earth’s land surface, but their forest area is poorly known. Here, we report an estimate of global forest extent in dryland biomes, based on analyzing more than 210,000 0.5-hectare sample plots through a photo-interpretation approach using large databases of satellite imagery at (i) very high spatial resolution and (ii) very high...

  3. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products that is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures is illustrated. We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
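
    The sketch below illustrates only the accumulate-and-average step of gridding point observations, using a plain longitude/latitude grid and synthetic data; the actual Diviner pipeline described above bins onto a geodesic icosahedral grid and models each field of view as a probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
lon = rng.uniform(-180, 180, 100_000)   # observation longitudes (deg)
lat = rng.uniform(-90, 90, 100_000)     # observation latitudes (deg)
tb  = rng.uniform(40, 400, 100_000)     # synthetic brightness temperatures (K)

res = 1.0                               # grid resolution (deg)
nx, ny = int(360 / res), int(180 / res)
ix = np.clip(((lon + 180) / res).astype(int), 0, nx - 1)
iy = np.clip(((lat + 90) / res).astype(int), 0, ny - 1)

sums = np.zeros((ny, nx))
counts = np.zeros((ny, nx))
np.add.at(sums, (iy, ix), tb)           # accumulate observations per cell
np.add.at(counts, (iy, ix), 1)

gridded = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
print(np.nanmean(gridded))
```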

  4. Spatial Covariability of Temperature and Hydroclimate as a Function of Timescale During the Common Era

    NASA Astrophysics Data System (ADS)

    McKay, N.

    2017-12-01

    As timescale increases from years to centuries, the spatial scale of covariability in the climate system is hypothesized to increase as well. Covarying spatial scales are larger for temperature than for hydroclimate; however, both aspects of the climate system show systematic changes at large spatial scales on orbital to tectonic timescales. The extent to which this phenomenon is evident in temperature and hydroclimate at centennial timescales is largely unknown. Recent syntheses of multidecadal to century-scale variability in hydroclimate during the past 2k in the Arctic, North America, and Australasia show little spatial covariability in hydroclimate during the Common Era. To determine 1) the evidence for systematic relationships between the spatial scale of climate covariability as a function of timescale, and 2) whether century-scale hydroclimate variability deviates from the relationship between spatial covariability and timescale, we quantify this phenomenon during the Common Era by calculating the e-folding distance in large instrumental and paleoclimate datasets. We calculate this metric of spatial covariability, at different timescales (1, 10 and 100-yr), for a large network of temperature and precipitation observations from the Global Historical Climatology Network (n=2447), from v2.0.0 of the PAGES2k temperature database (n=692), and from moisture-sensitive paleoclimate records from North America, the Arctic, and the Iso2k project (n = 328). Initial results support the hypothesis that the spatial scale of covariability is larger for temperature than for precipitation or paleoclimate hydroclimate indicators. Spatially, e-folding distances for temperature are largest at low latitudes and over the ocean. Both instrumental and proxy temperature data show clear evidence for increasing spatial extent as a function of timescale, but this phenomenon is very weak in the hydroclimate data analyzed here. In the proxy hydroclimate data, which are predominantly indicators of effective moisture, e-folding distance increases from annual to decadal timescales, but does not continue to increase to centennial timescales. Future work includes examining additional instrumental and proxy datasets of moisture variability, and extending the analysis to millennial timescales of variability.
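
    A hedged sketch of the e-folding-distance metric itself: given correlations between pairs of records and their separation distances, fit corr ≈ exp(-d/L) and report L. The distances and correlations below are synthetic stand-ins for values computed from GHCN or PAGES2k record pairs.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
dists = rng.uniform(0, 4000, 500)                         # pair separations (km)
corrs = np.exp(-dists / 1500) + rng.normal(0, 0.05, 500)  # noisy exponential decay

def decay(d, L):
    return np.exp(-d / L)

(L_fit,), _ = curve_fit(decay, dists, corrs, p0=[1000.0])
print(f"e-folding distance ~ {L_fit:.0f} km")  # recovers roughly 1500 km
```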

  5. Spatial database for a global assessment of undiscovered copper resources: Chapter Z in Global mineral resource assessment

    USGS Publications Warehouse

    Dicken, Connie L.; Dunlap, Pamela; Parks, Heather L.; Hammarstrom, Jane M.; Zientek, Michael L.; Zientek, Michael L.; Hammarstrom, Jane M.; Johnson, Kathleen M.

    2016-07-13

    As part of the first-ever U.S. Geological Survey global assessment of undiscovered copper resources, data common to several regional spatial databases published by the U.S. Geological Survey, including one report from Finland and one from Greenland, were standardized, updated, and compiled into a global copper resource database. This integrated collection of spatial databases provides location, geologic and mineral resource data, and source references for deposits, significant prospects, and areas permissive for undiscovered deposits of both porphyry copper and sediment-hosted copper. The copper resource database allows for efficient modeling on a global scale in a geographic information system (GIS) and is provided in an Esri ArcGIS file geodatabase format.
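
    A brief sketch of how such an Esri file geodatabase could be read outside ArcGIS; the path, layer, and field names are placeholders rather than the published USGS schema, so this is illustrative only.

```python
import geopandas as gpd

# Placeholder path and layer name for a copper-resource file geodatabase.
tracts = gpd.read_file("global_copper.gdb", layer="permissive_tracts")

print(tracts.crs)
print(tracts[["tract_id", "deposit_type"]].head())  # placeholder field names
```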

  6. A Global Geospatial Database of 5000+ Historic Flood Event Extents

    NASA Astrophysics Data System (ADS)

    Tellman, B.; Sullivan, J.; Doyle, C.; Kettner, A.; Brakenridge, G. R.; Erickson, T.; Slayback, D. A.

    2017-12-01

    A key dataset that is missing for global flood model validation and understanding historic spatial flood vulnerability is a global historical geo-database of flood event extents. Decades of earth observing satellites and cloud computing now make it possible to not only detect floods in near real time, but to run these water detection algorithms back in time to capture the spatial extent of large numbers of specific events. This talk will show results from the largest global historical flood database developed to date. We use the Dartmouth Flood Observatory flood catalogue to map over 5000 floods (from 1985-2017) using MODIS, Landsat, and Sentinel-1 satellites. All events are available for public download via the Earth Engine Catalogue and via a website that allows the user to query floods by area or date, assess population exposure trends over time, and download flood extents in geospatial format. In this talk, we will highlight major trends in global flood exposure per continent, land use type, and eco-region. We will also make suggestions on how to use this dataset in conjunction with other global datasets to i) validate global flood models, ii) assess the potential role of climatic change in flood exposure, iii) understand how urbanization and other land change processes may influence spatial flood exposure, iv) assess how innovative flood interventions (e.g. wetland restoration) influence flood patterns, v) control for event magnitude to assess the role of social vulnerability and damage assessment, and vi) aid in rapid probabilistic risk assessment to enable microinsurance markets. The authors are already using the database for the latter three applications and will show examples of wetland intervention analysis in Argentina, social vulnerability analysis in the USA, and microinsurance in India.

  7. Accelerating Pathology Image Data Cross-Comparison on CPU-GPU Hybrid Systems

    PubMed Central

    Wang, Kaibo; Huai, Yin; Lee, Rubao; Wang, Fusheng; Zhang, Xiaodong; Saltz, Joel H.

    2012-01-01

    As an important application of spatial databases in pathology imaging analysis, cross-comparing the spatial boundaries of a huge amount of segmented micro-anatomic objects demands extremely data- and compute-intensive operations, requiring high throughput at an affordable cost. However, the performance of spatial database systems has not been satisfactory since their implementations of spatial operations cannot fully utilize the power of modern parallel hardware. In this paper, we provide a customized software solution that exploits GPUs and multi-core CPUs to accelerate spatial cross-comparison in a cost-effective way. Our solution consists of an efficient GPU algorithm and a pipelined system framework with task migration support. Extensive experiments with real-world data sets demonstrate the effectiveness of our solution, which improves the performance of spatial cross-comparison by over 18 times compared with a parallelized spatial database approach. PMID:23355955
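
    The core operation being accelerated is pairwise comparison of segmented object boundaries; a minimal CPU-only illustration (not the paper's GPU code) computes the Jaccard index of two polygons with shapely.

```python
from shapely.geometry import Polygon

# Two toy "segmented objects" from two analysis runs.
run_a = [Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])]
run_b = [Polygon([(1, 1), (5, 1), (5, 5), (1, 5)])]

for a in run_a:
    for b in run_b:
        inter = a.intersection(b).area
        union = a.union(b).area
        print(f"Jaccard = {inter / union:.3f}")  # 9/23 ~ 0.391 for these squares
```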

  8. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame type independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
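
    A toy sketch of the retrieval step only: clips indexed by low-dimensional feature vectors are ranked by distance to a query vector. The features are random stand-ins; the paper derives them from MPEG frame types, macroblock encodings, and motion vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
index = rng.normal(size=(1000, 16))   # 1000 clips, 16-D spatial feature vectors
query = rng.normal(size=16)

dists = np.linalg.norm(index - query, axis=1)
top5 = np.argsort(dists)[:5]          # temporal features could re-rank these
print("Best-matching clip ids:", top5)
```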

  9. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. 
Modeling and managing pathology image analysis results in a database provide immediate benefits on the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. Standardized, semantic annotated data representation and interfaces also make it possible to more efficiently share image data and analysis results.

  10. Accounting for Rainfall Spatial Variability in Prediction of Flash Floods

    NASA Astrophysics Data System (ADS)

    Saharia, M.; Kirstetter, P. E.; Gourley, J. J.; Hong, Y.; Vergara, H. J.

    2016-12-01

    Flash floods are a particularly damaging natural hazard worldwide in terms of both fatalities and property damage. In the United States, the lack of a comprehensive database that catalogues information related to flash flood timing, location, causative rainfall, and basin geomorphology has hindered broad characterization studies. First, a representative and long archive of more than 20,000 flooding events during 2002-2011 is used to analyze the spatial and temporal variability of flash floods. We also derive a large number of spatially distributed geomorphological and climatological parameters, such as basin area, mean annual precipitation, and basin slope, to identify static basin characteristics that influence flood response. For the same period, the National Severe Storms Laboratory (NSSL) has produced a decadal archive of Multi-Radar/Multi-Sensor (MRMS) radar-only precipitation rates at 1-km spatial resolution with 5-min temporal resolution. This provides an unprecedented opportunity to analyze the impact of event-level precipitation variability on flooding using a big data approach. To analyze the impact of sub-basin scale rainfall spatial variability on flooding, indices such as the first and second scaled moments of rainfall, the horizontal gap, and the vertical gap are computed from the MRMS dataset. Finally, flooding characteristics such as rise time, lag time, and peak discharge are linked to the derived geomorphologic, climatologic, and rainfall indices to identify basin characteristics that drive flash floods. Next, the model is used to predict flash flooding characteristics all over the continental U.S., specifically over regions poorly covered by hydrological observations. So far, studies involving rainfall variability indices have only been performed on a case study basis, and a large-scale approach is expected to provide deeper insight into how sub-basin scale precipitation variability affects flooding. Finally, these findings are validated using the National Weather Service storm reports and a historical flood fatalities database. This analysis framework will serve as a baseline for evaluating distributed hydrologic model simulations such as the Flooded Locations And Simulated Hydrographs Project (FLASH) (http://flash.ou.edu).
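
    A hedged sketch of two of the rainfall spatial-variability indices mentioned above, the first and second scaled moments of rainfall with respect to flow distance, in one common formulation (the study's exact definitions may differ); the rainfall and flow-distance values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
flow_dist = rng.uniform(0, 30, 500)   # pixel flow distance to the outlet (km)
rain = rng.gamma(2.0, 3.0, 500)       # pixel rainfall accumulation (mm)

d_mean = flow_dist.mean()
d_w = np.average(flow_dist, weights=rain)                 # rainfall-weighted mean distance
var_w = np.average((flow_dist - d_w) ** 2, weights=rain)  # rainfall-weighted variance
var_u = flow_dist.var()

delta1 = d_w / d_mean   # >1: rainfall concentrated far from the outlet
delta2 = var_w / var_u  # <1: rainfall more clustered than the basin itself
print(f"delta1 = {delta1:.2f}, delta2 = {delta2:.2f}")
```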

  11. A Web-Based GIS for Reporting Water Usage in the High Plains Underground Water Conservation District

    NASA Astrophysics Data System (ADS)

    Jia, M.; Deeds, N.; Winckler, M.

    2012-12-01

    The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Recent rule changes have motivated HPWD to develop a more automated system to allow owners and operators to report well locations, meter locations, meter readings, the association between meters and wells, and contiguous acres. INTERA, Inc. has developed a web-based interactive system for HPWD water users to report water usage and for the district to better manage its water resources. The HPWD web management system utilizes state-of-the-art GIS techniques, including cloud-based Amazon EC2 virtual machine, ArcGIS Server, ArcSDE and ArcGIS Viewer for Flex, to support web-based water use management. The system enables users to navigate to their area of interest using a well-established base-map and perform a variety of operations and inquiries against their spatial features. The application currently has six components: user privilege management, property management, water meter registration, area registration, meter-well association and water use report. The system is composed of two main databases: spatial database and non-spatial database. With the help of Adobe Flex application at the front end and ArcGIS Server as the middle-ware, the spatial feature geometry and attributes update will be reflected immediately in the back end. As a result, property owners, along with the HPWD staff, collaborate together to weave the fabric of the spatial database. Interactions between the spatial and non-spatial databases are established by Windows Communication Foundation (WCF) services to record water-use report, user-property associations, owner-area associations, as well as meter-well associations. Mobile capabilities will be enabled in the near future for field workers to collect data and synchronize them to the spatial database. The entire solution is built on a highly scalable cloud server to dynamically allocate the computational resources so as to reduce the cost on security and hardware maintenance. In addition to the default capabilities provided by ESRI, customizations include 1) enabling interactions between spatial and non-spatial databases, 2) providing role-based feature editing, 3) dynamically filtering spatial features on the map based on user accounts and 4) comprehensive data validation.

  12. Enhancing Disaster Management: Development of a Spatial Database of Day Care Centers in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Nagendra; Tuttle, Mark A.; Bhaduri, Budhendra L.

    Children under the age of five constitute around 7% of the total U.S. population and represent a segment of the population, which is totally dependent on others for day-to-day activities. A significant proportion of this population spends time in some form of day care arrangement while their parents are away from home. Accounting for those children during emergencies is of high priority, which requires a broad understanding of the locations of such day care centers. As concentrations of at risk population, the spatial location of day care centers is critical for any type of emergency preparedness and response (EPR). However, until recently, the U.S. emergency preparedness and response community did not have access to a comprehensive spatial database of day care centers at the national scale. This paper describes an approach for the development of the first comprehensive spatial database of day care center locations throughout the USA utilizing a variety of data harvesting techniques to integrate information from widely disparate data sources followed by geolocating for spatial precision. In the context of disaster management, such spatially refined demographic databases hold tremendous potential for improving high resolution population distribution and dynamics models and databases.

  13. Enhancing Disaster Management: Development of a Spatial Database of Day Care Centers in the USA

    DOE PAGES

    Singh, Nagendra; Tuttle, Mark A.; Bhaduri, Budhendra L.

    2015-07-30

    Children under the age of five constitute around 7% of the total U.S. population and represent a segment of the population, which is totally dependent on others for day-to-day activities. A significant proportion of this population spends time in some form of day care arrangement while their parents are away from home. Accounting for those children during emergencies is of high priority, which requires a broad understanding of the locations of such day care centers. As concentrations of at risk population, the spatial location of day care centers is critical for any type of emergency preparedness and response (EPR). However, until recently, the U.S. emergency preparedness and response community did not have access to a comprehensive spatial database of day care centers at the national scale. This paper describes an approach for the development of the first comprehensive spatial database of day care center locations throughout the USA utilizing a variety of data harvesting techniques to integrate information from widely disparate data sources followed by geolocating for spatial precision. In the context of disaster management, such spatially refined demographic databases hold tremendous potential for improving high resolution population distribution and dynamics models and databases.

  14. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies.

    PubMed

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
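
    A toy illustration of the spatialization idea (not the PSM itself): redistribute a census population total onto grid cells in proportion to a weight surface derived from land use; the weights here are invented.

```python
import numpy as np

county_population = 120_000
weights = np.array([[0.0, 0.2, 0.5],
                    [0.1, 0.8, 0.3],
                    [0.0, 0.4, 0.1]])   # e.g. higher weights for built-up land cover

grid_pop = county_population * weights / weights.sum()
print(grid_pop.round(0))
print(grid_pop.sum())                   # redistribution preserves the census total
```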

  15. Characterizing worldwide patterns of fluvial geomorphology and hydrology with the Global River Widths from Landsat (GRWL) database

    NASA Astrophysics Data System (ADS)

    Allen, G. H.; Pavelsky, T.

    2015-12-01

    The width of a river reflects complex interactions between river water hydraulics and other physical factors like bank erosional resistance, sediment supply, and human-made structures. A broad range of fluvial process studies use spatially distributed river width data to understand and quantify flood hazards, river water flux, or fluvial greenhouse gas efflux. Ongoing technological advances in remote sensing, computing power, and model sophistication are moving river system science towards global-scale studies that aim to understand the Earth's fluvial system as a whole. As such, a global spatially distributed database of river location and width is necessary to better constrain these studies. Here we present the Global River Widths from Landsat (GRWL) Database, the first global-scale database of river planform at mean discharge. With a resolution of 30 m, GRWL consists of 58 million measurements of river centerline location, width, and braiding index. In total, GRWL measures 2.1 million km of rivers wider than 30 m, corresponding to 602 thousand km2 of river water surface area, a metric used to calculate global greenhouse gas emissions from rivers to the atmosphere. Using data from GRWL, we find that ~20% of the world's rivers are located above 60°N, where little high quality information exists about rivers of any kind. Further, we find that ~10% of the world's large rivers are multichannel, which may impact the development of the new generation of regional and global hydrodynamic models. We also investigate the spatial controls of global fluvial geomorphology and river hydrology by comparing climate, topography, geology, and human population density to GRWL measurements. The GRWL Database will be made publicly available upon publication to facilitate improved understanding of Earth's fluvial system. Finally, GRWL will be used as a priori data for the joint NASA/CNES Surface Water and Ocean Topography (SWOT) satellite mission, planned for launch in 2020.

  16. Spatial inventory integrating raster databases and point sample data. [Geographic Information System for timber inventory

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Woodcock, C. E.; Logan, T. L.

    1983-01-01

    A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.
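
    A small sketch of the raster tabulation step that such a GIS performs for stratification: count cells per stratum in a classified raster and convert counts to area (class codes and cell size are invented).

```python
import numpy as np

rng = np.random.default_rng(4)
strata = rng.integers(1, 5, size=(1000, 1000))   # classified raster, codes 1-4
cell_area_ha = 0.09                              # 30 m x 30 m cells

codes, counts = np.unique(strata, return_counts=True)
for code, count in zip(codes, counts):
    print(f"stratum {code}: {count * cell_area_ha:,.1f} ha")
```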

  17. Recovery and validation of historical sediment quality data from coastal and estuarine areas: An integrated approach

    USGS Publications Warehouse

    Manheim, F.T.; Buchholtz ten Brink, Marilyn R.; Mecray, E.L.

    1998-01-01

    A comprehensive database of sediment chemistry and environmental parameters has been compiled for Boston Harbor and Massachusetts Bay. This work illustrates methodologies for rescuing and validating sediment data from heterogeneous historical sources. It greatly expands spatial and temporal data coverage of estuarine and coastal sediments. The database contains about 3500 samples containing inorganic chemical, organic, texture and other environmental data dating from 1955 to 1994. Cooperation with local and federal agencies as well as universities was essential in locating and screening documents for the database. More than 80% of references utilized came from sources with limited distribution (gray literature). Task sharing was facilitated by a comprehensive and clearly defined data dictionary for sediments. It also served as a data entry template and flat file format for data processing and as a basis for interpretation and graphical illustration. Standard QA/QC protocols are usually inapplicable to historical sediment data. In this work outliers and data quality problems were identified by batch screening techniques that also provide visualizations of data relationships and geochemical affinities. No data were excluded, but qualifying comments warn users of problem data. For Boston Harbor, the proportion of irreparable or seriously questioned data was remarkably small (<5%), although concentration values for metals and organic contaminants spanned 3 orders of magnitude for many elements or compounds. Data from the historical database provide alternatives to dated cores for measuring changes in surficial sediment contamination level with time. The data indicate that spatial inhomogeneity in harbor environments can be large with respect to sediment-hosted contaminants. Boston Inner Harbor surficial sediments showed decreases in concentrations of Cu, Hg, and Zn of 40 to 60% over a 17-year period.

  18. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)

  19. Validation databases for simulation models: aboveground biomass and net primary productivity (NPP) estimation using eastwide FIA data

    Treesearch

    Jennifer C. Jenkins; Richard A. Birdsey

    2000-01-01

    As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...

  20. The role of digital cartographic data in the geosciences

    USGS Publications Warehouse

    Guptill, S.C.

    1983-01-01

    The increasing demand of the Nation's natural resource developers for the manipulation, analysis, and display of large quantities of earth-science data has necessitated the use of computers and the building of geoscience information systems. These systems require, in digital form, the spatial data on map products. The basic cartographic data shown on quadrangle maps provide a foundation for the addition of geological and geophysical data. If geoscience information systems are to realize their full potential, large amounts of digital cartographic base data must be available. A major goal of the U.S. Geological Survey is to create, maintain, manage, and distribute a national cartographic and geographic digital database. This unified database will contain numerous categories (hydrography, hypsography, land use, etc.) that, through the use of standardized data-element definitions and formats, can be used easily and flexibly to prepare cartographic products and perform geoscience analysis. ?? 1983.

  1. A spatio-temporal landslide inventory for the NW of Spain: BAPA database

    NASA Astrophysics Data System (ADS)

    Valenzuela, Pablo; Domínguez-Cuesta, María José; Mora García, Manuel Antonio; Jiménez-Sánchez, Montserrat

    2017-09-01

    A landslide database has been created for the Principality of Asturias, NW Spain: the BAPA (Base de datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database). Data collection is mainly performed through searching local newspaper archives. Moreover, a BAPA App and a BAPA website (http://geol.uniovi.es/BAPA) have been developed to obtain additional information from citizens and institutions. Presently, the dataset covers the period 1980-2015, recording 2063 individual landslides. The use of free cartographic servers, such as Google Maps, Google Street View and Iberpix (Government of Spain), combined with the spatial descriptions and pictures contained in the press news, makes it possible to assess different levels of spatial accuracy. In the database, 59% of the records show an exact spatial location, and 51% of the records provided accurate dates, showing the usefulness of press archives as temporal records. Thus, 32% of the landslides show the highest spatial and temporal accuracy levels. The database also gathers information about the type and characteristics of the landslides, the triggering factors and the damage and costs caused. Field work was conducted to validate the methodology used in assessing the spatial location, temporal occurrence and characteristics of the landslides.

  2. The Iranian National Geodata Revision Strategy and Realization Based on Geodatabase

    NASA Astrophysics Data System (ADS)

    Haeri, M.; Fasihi, A.; Ayazi, S. M.

    2012-07-01

    In recent years, the use of spatial databases for storing and managing spatial data has become a hot topic in the field of GIS. Accordingly, the National Cartographic Center of Iran (NCC) produces, from time to time, spatial data that is usually included in databases. One of NCC's major projects was designing the National Topographic Database (NTDB). NCC decided to create a National Topographic Database of the entire country based on 1:25,000 coverage maps. The standard of the NTDB was published in 1994 and its database was created at the same time. In the NTDB, geometric data is stored in MicroStation design format (DGN), in which each feature has a link to its attribute data (stored in a Microsoft Access file). NTDB files were produced in a sheet-wise mode and then stored in a file-based style. Besides map compilation, revision of existing maps has already been started. Key problems for NCC are the revision strategy, the NTDB's file-based storage, and operator challenges (NCC operators mostly prefer to edit and revise geometric data in CAD environments). A GeoDatabase solution for national Geodata, based on NTDB map files and operators' revision preferences, is introduced and released herein. The proposed solution extends the traditional methods to provide a seamless spatial database that can be revised in CAD and GIS environments simultaneously. The proposed system is a common data framework creating a central repository for spatial data storage and management.

  3. CERES Search and Subset Tool

    Atmospheric Science Data Center

    2016-06-24

    ... data granules using a high resolution spatial metadata database and directly accessing the archived data granules. Subset results are ...

  4. Farmer data sourcing. The case study of the spatial soil information maps in South Tyrol.

    NASA Astrophysics Data System (ADS)

    Della Chiesa, Stefano; Niedrist, Georg; Thalheimer, Martin; Hafner, Hansjörg; La Cecilia, Daniele

    2017-04-01

    The northern Italian region of South Tyrol is Europe's largest apple-growing area, exporting ca. 15% within Europe and 2% worldwide. Its vineyards represent ca. 1% of Italian production. In order to deliver high quality food, most of the farmers in South Tyrol follow sustainable farming practices. One of the key practices is sustainable soil management, in which farmers regularly (every 5 years) collect soil samples and send them for analysis to improve cultivation management, yield and, finally, profitability. However, such data generally remain inaccessible. In this regard, in South Tyrol, private interests and the public administration have established a long tradition of collaboration with the local farming industry. This has led to the collection of a large spatial and temporal database of soil analyses across all the cultivated areas. Thanks to this best practice, information on soil properties is centralized and geocoded. The large dataset consists mainly of soil information on texture, humus content, pH and the availability of microelements such as K, Mg, B, Mn, Cu and Zn. These data were finally spatialized by means of geostatistical methods and several high-resolution digital maps were created. In this contribution, we present this best practice of farmer-sourced soil information in South Tyrol, show the capability of a large, geocoded spatio-temporal soil dataset to reproduce detailed digital soil property maps and to assess long-term changes in soil properties, and finally discuss implications and potential applications.
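
    As a hedged illustration of turning geocoded point samples into a soil property map, the sketch below interpolates synthetic humus-content samples onto a grid with inverse-distance weighting; the study itself used geostatistical methods, so IDW is only a compact stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
xy = rng.uniform(0, 10_000, size=(300, 2))   # sample locations (m), synthetic
humus = rng.uniform(1.0, 6.0, size=300)      # humus content (%), synthetic

gx, gy = np.meshgrid(np.linspace(0, 10_000, 100), np.linspace(0, 10_000, 100))
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])

tree = cKDTree(xy)
dist, idx = tree.query(grid_pts, k=8)                    # 8 nearest samples per cell
w = 1.0 / np.maximum(dist, 1e-6) ** 2                    # inverse-distance weights
interp = (w * humus[idx]).sum(axis=1) / w.sum(axis=1)
print(interp.reshape(gx.shape).mean())
```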

  5. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and strata object databases have become more important for registering the real world as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, develops a strata objects database using the standard data model (LADM), and analyzes the developed database against that model. The current cadastre system in Malaysia, including strata titles, is discussed in this paper. The problems with the 2D geospatial database are listed, and the need for a 3D geospatial database is discussed. The processes to design a strata objects database are conceptual, logical and physical database design. The strata objects database allows us to find both non-spatial and spatial strata title information and thus shows the location of each strata unit. This development may help in handling strata titles and related information.

  6. Stochastic Downscaling of Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Rasera, Luiz Gustavo; Mariethoz, Gregoire; Lane, Stuart N.

    2016-04-01

    High-resolution digital elevation models (HR-DEMs) are extremely important for the understanding of small-scale geomorphic processes in Alpine environments. In the last decade, remote sensing techniques have experienced a major technological evolution, enabling fast and precise acquisition of HR-DEMs. However, sensors designed to measure elevation data still feature different spatial resolution and coverage capabilities. Terrestrial altimetry allows the acquisition of HR-DEMs with centimeter to millimeter-level precision, but only within small spatial extents and often with dead ground problems. Conversely, satellite radiometric sensors are able to gather elevation measurements over large areas but with limited spatial resolution. In the present study, we propose an algorithm to downscale low-resolution satellite-based DEMs using topographic patterns extracted from HR-DEMs derived for example from ground-based and airborne altimetry. The method consists of a multiple-point geostatistical simulation technique able to generate high-resolution elevation data from low-resolution digital elevation models (LR-DEMs). Initially, two collocated DEMs with different spatial resolutions serve as an input to construct a database of topographic patterns, which is also used to infer the statistical relationships between the two scales. High-resolution elevation patterns are then retrieved from the database to downscale a LR-DEM through a stochastic simulation process. The output of the simulations are multiple equally probable DEMs with higher spatial resolution that also depict the large-scale geomorphic structures present in the original LR-DEM. As these multiple models reflect the uncertainty related to the downscaling, they can be employed to quantify the uncertainty of phenomena that are dependent on fine topography, such as catchment hydrological processes. The proposed methodology is illustrated for a case study in the Swiss Alps. A swissALTI3D HR-DEM (with 5 m resolution) and a SRTM-derived LR-DEM from the Western Alps are used to downscale a SRTM-based LR-DEM from the eastern part of the Alps. The results show that the method is capable of generating multiple high-resolution synthetic DEMs that reproduce the spatial structure and statistics of the original DEM.

  7. Spatiotemporal database of US congressional elections, 1896–2014

    PubMed Central

    Wolf, Levi John

    2017-01-01

    High-quality historical data about US Congressional elections has long provided common ground for electoral studies. However, advances in geographic information science have recently made it efficient to compile, distribute, and analyze large spatio-temporal data sets on the structure of US Congressional districts. A single spatio-temporal data set that relates US Congressional election results to the spatial extent of the constituencies has not yet been developed. To address this, existing high-quality data sets of elections returns were combined with a spatiotemporal data set on Congressional district boundaries to generate a new spatio-temporal database of US Congressional election results that are explicitly linked to the geospatial data about the districts themselves. PMID:28809849

  8. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
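
    For contrast with the tree-based algorithm analyzed above, a brute-force SDH baseline simply computes all pairwise distances and bins them into buckets of fixed width; the point data here are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
points = rng.uniform(0, 100, size=(2000, 3))   # e.g. particle coordinates

bucket_width = 5.0
dists = pdist(points)                          # all O(N^2) pairwise distances
hist, edges = np.histogram(
    dists, bins=np.arange(0, dists.max() + bucket_width, bucket_width)
)
print(hist[:5], "...")
```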

  9. Spatial distribution of GRBs and large scale structure of the Universe

    NASA Astrophysics Data System (ADS)

    Bagoly, Zsolt; Rácz, István I.; Balázs, Lajos G.; Tóth, L. Viktor; Horváth, István

    We studied the spatial distribution of starburst galaxies from the Millennium XXL database at z = 0.82. We examined the starburst distribution in the classical Millennium I simulation (De Lucia et al. 2006) using a semi-analytical model for galaxy formation. We simulated a starburst galaxy sample with a Markov Chain Monte Carlo method. The connection between the homogeneity of the large-scale structure and the distribution of starburst groups (Kofman and Shandarin 1998; Suhhonenko et al. 2011; Liivamägi et al. 2012; Park et al. 2012; Horvath et al. 2014; Horvath et al. 2015) was also checked on a defined scale.

  10. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process produces a high-resolution image by combining the superior features of a low-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage due to its fast computing capability and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause some color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause the spectral distortions by assigning weaker weights to them and avoiding a large amount of redundancy in the fused image. The experimental database consists of IKONOS images, and the experimental results both visually and statistically demonstrate the improvement of the proposed algorithm when compared with several other IHS-like methods, such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
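
    As a point of reference, the sketch below implements only the conventional additive ("fast") IHS baseline that such methods improve on, in which the pan-derived detail Pan − I is added to each upsampled multispectral band. The spatially adaptive weighting of SA-IHS is not reproduced, and the arrays are synthetic stand-ins for IKONOS data.

    # Generic fast-IHS pansharpening baseline: fused_k = MS_k + (Pan - I),
    # where I is the mean of the upsampled multispectral bands.
    import numpy as np

    rng = np.random.default_rng(2)

    def fast_ihs_fusion(ms_up, pan):
        """ms_up: (H, W, 3) multispectral bands upsampled to the pan grid;
           pan:   (H, W) panchromatic band."""
        intensity = ms_up.mean(axis=2)
        # match the pan band's mean/std to the intensity to limit spectral distortion
        pan_adj = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
        detail = pan_adj - intensity
        return ms_up + detail[..., None]

    ms_up = rng.uniform(0, 1, size=(256, 256, 3))   # synthetic upsampled multispectral
    pan = rng.uniform(0, 1, size=(256, 256))        # synthetic panchromatic band
    fused = fast_ihs_fusion(ms_up, pan)
    print(fused.shape)                              # (256, 256, 3) sharpened bands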

  11. Hazards of Extreme Weather: Flood Fatalities in Texas

    NASA Astrophysics Data System (ADS)

    Sharif, H. O.; Jackson, T.; Bin-Shafique, S.

    2009-12-01

    The Federal Emergency Management Agency (FEMA) considers flooding “America’s Number One Natural Hazard”. Despite flood management efforts in many communities, U.S. flood damages remain high, due in large part to increasing population and property development in flood-prone areas. Floods are the leading cause of fatalities related to natural disasters in Texas, and Texas leads the nation in flash flood fatalities, with more than three times as many (840) as the next-ranked state, Pennsylvania (265). This study examined flood fatalities that occurred in Texas between 1960 and 2008. Flood fatality statistics were extracted from three sources: flood fatality databases from the National Climatic Data Center, the Spatial Hazard Event and Loss Database for the United States, and the Texas Department of State Health Services. The data collected for flood fatalities include the date, time, gender, age, location, and weather conditions. Inconsistencies among the three databases were identified and discussed. Analysis reveals that most fatalities result from driving into flood water (about 65%). Spatial analysis indicates that more fatalities occurred in counties containing major urban centers. Hydrologic analysis of a flood event that resulted in five fatalities was performed. A hydrologic model was able to simulate the water level at a location where a vehicle was swept away by flood water, resulting in the death of the driver.

  12. Spatial Statistics of Large Astronomical Databases: An Algorithmic Approach

    NASA Technical Reports Server (NTRS)

    Szapudi, Istvan

    2004-01-01

    In this AISRP project, we demonstrated that (i) the correlation function can be calculated for MAP in minutes (about 45 minutes for Planck) on a modest 500 MHz workstation, and (ii) the corresponding method, although theoretically suboptimal, produces nearly optimal results for realistic noise and cut sky. This trillion-fold improvement in speed over the standard maximum likelihood technique opens up tremendous new possibilities, which will be pursued in the follow-up.

  13. Supporting the operational use of process based hydrological models and NASA Earth Observations for use in land management and post-fire remediation through a Rapid Response Erosion Database (RRED).

    NASA Astrophysics Data System (ADS)

    Miller, M. E.; Elliot, W.; Billmire, M.; Robichaud, P. R.; Banach, D. M.

    2017-12-01

    We have built a Rapid Response Erosion Database (RRED, http://rred.mtri.org/rred/) for the continental United States to allow land managers to access properly formatted spatial model inputs for the Water Erosion Prediction Project (WEPP). Spatially-explicit process-based models like WEPP require spatial inputs that include digital elevation models (DEMs), soil, climate and land cover. The online database delivers either a 10m or 30m USGS DEM, land cover derived from the Landfire project, and soil data derived from SSURGO and STATSGO datasets. The spatial layers are projected into UTM coordinates and pre-registered for modeling. WEPP soil parameter files are also created along with linkage files to match both spatial land cover and soils data with the appropriate WEPP parameter files. Our goal is to make process-based models more accessible by preparing spatial inputs ahead of time allowing modelers to focus on addressing scenarios of concern. The database provides comprehensive support for post-fire hydrological modeling by allowing users to upload spatial soil burn severity maps, and within moments returns spatial model inputs. Rapid response is critical following natural disasters. After moderate and high severity wildfires, flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies. Mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fire, runoff, and erosion risks also are highly heterogeneous in space, creating an urgent need for rapid, spatially-explicit assessment. The database has been used to help assess and plan remediation on over a dozen wildfires in the Western US. Future plans include expanding spatial coverage, improving model input data and supporting additional models. Our goal is to facilitate the use of the best possible datasets and models to support the conservation of soil and water.

  14. Spatial Designation of Critical Habitats for Endangered and Threatened Species in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuttle, Mark A; Singh, Nagendra; Sabesan, Aarthy

    Establishing biological reserves or "hot spots" for endangered and threatened species is critical to support real-world species regulatory and management problems. Geographic data on the distribution of endangered and threatened species can be used to improve ongoing efforts for species conservation in the United States. At present, no spatial database exists that maps out the locations of endangered species for the US. However, spatial descriptions do exist for the habitat associated with all endangered species, although in a form not readily suitable for use in a geographic information system (GIS). In our study, the principal challenge was extracting spatial data describing these critical habitats for 472 species from over 1,000 pages of the Federal Register. In addition, an appropriate database schema was designed to accommodate the different tiers of information associated with the species along with the confidence of designation; the interpreted location data were geo-referenced to the county enumeration unit, producing a spatial database of endangered species for the whole of the US. The significance of these critical habitat designations, the database schema, and the methodologies will be discussed.
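
    A hypothetical sketch of a county-referenced schema of the kind described above is shown below. The table and column names are invented for illustration and are not taken from the report; SQLite is used only so the example is self-contained.

    # Hypothetical relational schema for county-referenced critical habitat
    # designations (illustrative names only).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE species (
        species_id   INTEGER PRIMARY KEY,
        common_name  TEXT,
        listing      TEXT          -- 'endangered' or 'threatened'
    );
    CREATE TABLE habitat_designation (
        species_id   INTEGER REFERENCES species(species_id),
        state_fips   TEXT,
        county_fips  TEXT,         -- county enumeration unit used for geo-referencing
        confidence   TEXT,         -- confidence of the designation interpretation
        source_ref   TEXT          -- Federal Register citation
    );
    """)

    con.execute("INSERT INTO species VALUES (1, 'Example darter', 'endangered')")
    con.execute("""INSERT INTO habitat_designation
                   VALUES (1, '48', '48453', 'high', 'FR citation placeholder')""")

    # Example query: number of designated species per county.
    for row in con.execute("""
            SELECT county_fips, COUNT(DISTINCT species_id)
            FROM habitat_designation GROUP BY county_fips"""):
        print(row)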

  15. A compilation of spatial digital databases for selected U.S. Geological Survey nonfuel mineral resource assessments for parts of Idaho and Montana

    USGS Publications Warehouse

    Carlson, Mary H.; Zientek, Michael L.; Causey, J. Douglas; Kayser, Helen Z.; Spanski, Gregory T.; Wilson, Anna B.; Van Gosen, Bradley S.; Trautwein, Charles M.

    2007-01-01

    This report compiles selected results from 13 U.S. Geological Survey (USGS) mineral resource assessment studies conducted in Idaho and Montana into consistent spatial databases that can be used in a geographic information system. The 183 spatial databases represent areas of mineral potential delineated in these studies and include attributes on mineral deposit type, level of mineral potential, certainty, and a reference. The assessments were conducted for five 1° x 2° quadrangles (Butte, Challis, Choteau, Dillon, and Wallace), several U.S. Forest Service (USFS) National Forests (including Challis, Custer, Gallatin, Helena, and Payette), and one Bureau of Land Management (BLM) Resource Area (Dillon). The data contained in the spatial databases are based on published information: no new interpretations are made. This digital compilation is part of an ongoing effort to provide mineral resource information formatted for use in spatial analysis. In particular, this is one of several reports prepared to address USFS needs for science information as forest management plans are revised in the Northern Rocky Mountains.

  16. California dragonfly and damselfly (Odonata) database: temporal and spatial distribution of species records collected over the past century

    PubMed Central

    Ball-Damerow, Joan E.; Oboyski, Peter T.; Resh, Vincent H.

    2015-01-01

    Abstract The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and from enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879–2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records before 1976 and after 1979. The average latitude of where records occurred increased by 78 km over these time periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distribution may be generally shifting northwards as temperature warms and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data. PMID:25709531

  17. The Monitoring Erosion of Agricultural Land and spatial database of erosion events

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri; Zizala, Daniel

    2013-04-01

    In 2011, the Monitoring Erosion of Agricultural Land project originated in the Czech Republic as a joint project of the State Land Office (SLO) and the Research Institute for Soil and Water Conservation (RISWC). The aim of the project is to collect and keep records of information about erosion events on agricultural land and to evaluate them. The main idea is the creation of a spatial database that will be a source of data and information for evaluating and modeling erosion processes, for proposing preventive measures, and for proposing measures to reduce the negative impacts of erosion events. The subject of monitoring is the manifestations of water erosion, wind erosion, and slope deformation that damage agricultural land. A website, available at http://me.vumop.cz, is used as a tool for keeping and browsing information about monitored events. SLO employees carry out the record keeping. RISWC is the specialist institute in the project that maintains the spatial database, runs the website, manages the record keeping of events, analyses the causes of the events, and performs statistical evaluations of the recorded events and proposed measures. Records are inserted into the database through the user interface of the website, which has a map server as a component. The website is based on the PostgreSQL database technology with the PostGIS extension and UMN MapServer. Each record in the database is spatially localized by a drawing and contains descriptive information about the character of the event (date, situation description, etc.); information about land cover and the grown crops is also recorded. Part of the database is photodocumentation taken during field reconnaissance, which is performed within two days after notification of an event. Another part of the database is information about precipitation from accessible precipitation gauges. The website allows simple spatial analyses such as area calculation, slope calculation, percentage representation of GAEC, etc. The database structure was designed on the basis of an analysis of the input needs of mathematical models. Mathematical models are used for detailed analysis of selected erosion events, which includes soil analysis. By the end of 2012 the database contained 135 events. The content of the database continues to grow, giving rise to an extensive source of data that is usable for testing mathematical models.
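
    The kind of spatial query such a PostGIS-backed system can answer is sketched below. The table and column names, connection string, and coordinate reference system are assumptions for illustration, not the project's actual schema; a PostGIS-enabled PostgreSQL instance and psycopg2 are assumed to be available.

    # Hypothetical queries against an erosion-event table stored in PostGIS.
    import psycopg2

    conn = psycopg2.connect("dbname=erosion_monitoring user=riswc")  # placeholder DSN
    cur = conn.cursor()

    # Area (m^2) of each recorded event polygon, assuming geometries stored in a
    # projected CRS such as S-JTSK / EPSG:5514 so ST_Area returns square metres.
    cur.execute("""
        SELECT event_id, event_date, crop, ST_Area(geom) AS area_m2
        FROM erosion_event
        ORDER BY event_date;
    """)
    for event_id, event_date, crop, area_m2 in cur.fetchall():
        print(event_id, event_date, crop, round(area_m2, 1))

    # Simple aggregation: number of recorded events per grown crop.
    cur.execute("SELECT crop, COUNT(*) FROM erosion_event GROUP BY crop;")
    print(cur.fetchall())

    cur.close()
    conn.close()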

  18. GIS for public health : A study of Andhra Pradesh

    NASA Astrophysics Data System (ADS)

    Shrinagesh, B.; Kalpana, Markandey; Kiran, Baktula

    2014-06-01

    Geographic information systems and remote sensing have capabilities that are ideally suited for use in infectious disease surveillance and control, particularly for the many vector-borne neglected diseases that are often found in poor populations in remote rural areas. They are also highly relevant to meet the demands of outbreak investigation and response, where prompt location of cases, rapid communication of information, and quick mapping of the epidemic's dynamics are vital. The situation has changed dramatically over the past few years. GIS helps in determining the geographic distribution of diseases, analysing spatial and temporal trends, mapping populations at risk, stratifying risk factors, assessing resource allocation, planning and targeting interventions, and monitoring diseases and interventions over time. There are vast disparities in people's health even among the different districts of the state of Andhra Pradesh, largely attributed to resource allocation by the state government. Despite having centers of excellence in healthcare delivery, these facilities are limited and inadequate to meet current healthcare demands. The main objectives are to study the prevalent diseases in Andhra Pradesh and to study the infrastructural facilities available in the state. The methodology includes a spatial database, mostly in digitized form, and a non-spatial database that includes both secondary and primary data.

  19. GeoPAT: A toolbox for pattern-based information retrieval from large geospatial databases

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Jarosław; Netzel, Paweł; Stepinski, Tomasz

    2015-07-01

    Geospatial Pattern Analysis Toolbox (GeoPAT) is a collection of GRASS GIS modules for carrying out pattern-based geospatial analysis of images and other spatial datasets. The need for pattern-based analysis arises when images/rasters contain rich spatial information either because of their very high resolution or their very large spatial extent. Elementary units of pattern-based analysis are scenes - patches of surface consisting of a complex arrangement of individual pixels (patterns). GeoPAT modules implement popular GIS algorithms, such as query, overlay, and segmentation, to operate on the grid of scenes. To achieve these capabilities GeoPAT includes a library of scene signatures - compact numerical descriptors of patterns, and a library of distance functions - providing numerical means of assessing dissimilarity between scenes. Ancillary GeoPAT modules use these functions to construct a grid of scenes or to assign signatures to individual scenes having regular or irregular geometries. Thus GeoPAT combines knowledge retrieval from patterns with mapping tasks within a single integrated GIS environment. GeoPAT is designed to identify and analyze complex, highly generalized classes in spatial datasets. Examples include distinguishing between different styles of urban settlements using VHR images, delineating different landscape types in land cover maps, and mapping physiographic units from DEM. The concept of pattern-based spatial analysis is explained and the roles of all modules and functions are described. A case study example pertaining to delineation of landscape types in a subregion of NLCD is given. Performance evaluation is included to highlight GeoPAT's applicability to very large datasets. The GeoPAT toolbox is available for download from
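
    A much-simplified illustration of the scene-signature idea is given below: each scene (a block of categorical pixels) is summarized by a normalized category histogram, and scenes are compared with a histogram distance. GeoPAT's actual signatures, distance functions, and GRASS modules are considerably richer; the raster, scene size, and class count here are synthetic assumptions.

    # Scene signatures as category histograms, plus a query-by-example search.
    import numpy as np

    rng = np.random.default_rng(3)
    N_CLASSES = 6
    SCENE = 50                                  # scene size in pixels (50 x 50 block)

    landcover = rng.integers(0, N_CLASSES, size=(500, 500))   # synthetic category raster

    def scene_signature(block, n_classes=N_CLASSES):
        """Normalized histogram of category shares within one scene."""
        counts = np.bincount(block.ravel(), minlength=n_classes).astype(float)
        return counts / counts.sum()

    def jensen_shannon(p, q):
        """Symmetric divergence between two signatures (base-2, in [0, 1])."""
        m = 0.5 * (p + q)
        def kl(a, b):
            mask = a > 0
            return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Build the grid of scene signatures.
    rows, cols = landcover.shape[0] // SCENE, landcover.shape[1] // SCENE
    signatures = np.array([
        scene_signature(landcover[r * SCENE:(r + 1) * SCENE, c * SCENE:(c + 1) * SCENE])
        for r in range(rows) for c in range(cols)
    ])

    # Query-by-example: find the scenes most similar to the first scene.
    query = signatures[0]
    dists = np.array([jensen_shannon(query, s) for s in signatures])
    print(np.argsort(dists)[:5])     # indices of the five most similar scenes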
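
    (Placeholder removed; see the scene-signature sketch emitted with this record above.)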

  20. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers, from sources other than the Puget Sound Environmental Atlas, include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  1. Spatial variation of volcanic rock geochemistry in the Virunga Volcanic Province: Statistical analysis of an integrated database

    NASA Astrophysics Data System (ADS)

    Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2017-10-01

    We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-) colonial scientific reports, PhD Theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province with a localisation for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasted analytical methods or dates, demonstrates that the database is consistent. We applied a principal component analysis and cluster analysis on whole-rock major element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products which originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at distance from the central Virunga volcanoes. Our results illustrate the relevance of a spatial analysis on integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.
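
    The statistical workflow described above (principal component analysis followed by clustering of whole-rock major-element compositions) can be sketched on synthetic data as below. The oxide list, the two synthetic compositional groups, and the use of scikit-learn are illustrative assumptions, not the actual Virunga database or the authors' exact processing.

    # PCA + k-means clustering of synthetic whole-rock major-element analyses.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    oxides = ["SiO2", "TiO2", "Al2O3", "FeOt", "MgO", "CaO", "Na2O", "K2O"]

    # Two synthetic magma groups with contrasted compositions (wt%).
    group_a = rng.normal([45, 3, 13, 12, 8, 11, 3, 3], 1.0, size=(60, 8))
    group_b = rng.normal([60, 1, 16, 6, 2, 4, 5, 5], 1.0, size=(40, 8))
    X = np.vstack([group_a, group_b])

    # Standardize, project onto principal components, then cluster the scores.
    scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

    print(scores[:2])            # first two samples in PC space
    print(np.bincount(labels))   # cluster sizes (should recover the two groups)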

  2. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection.

    PubMed

    Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie

    2011-08-10

    Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular-sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time between 31 and 81% (between 1.4 and 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems. The speed up enables on-the-fly calculation and visualization of contacts and rapid cross simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008.
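
    The indexing idea can be illustrated independently of the database engine: the sketch below bins synthetic atom coordinates into cells whose side equals the contact cutoff, so contact candidates for an atom come only from its own and the 26 adjacent cells. It mirrors the spatial-hashing concept only; the MS SQL Server implementation, table layout, and Dynameomics data are not reproduced.

    # Generic spatial-hashing sketch for atomic contact detection.
    import numpy as np
    from collections import defaultdict
    from itertools import product

    rng = np.random.default_rng(5)
    coords = rng.uniform(0.0, 60.0, size=(5000, 3))   # synthetic atom coordinates (A)
    CUTOFF = 4.6                                      # contact distance cutoff (A)

    # Build the spatial hash: cell index -> list of atom ids.
    cells = defaultdict(list)
    cell_of = np.floor(coords / CUTOFF).astype(int)
    for atom_id, c in enumerate(map(tuple, cell_of)):
        cells[c].append(atom_id)

    def contacts(atom_id):
        """Atoms within CUTOFF of atom_id, searched in the 27 neighbouring cells."""
        ci, cj, ck = cell_of[atom_id]
        found = []
        for di, dj, dk in product((-1, 0, 1), repeat=3):
            for other in cells.get((ci + di, cj + dj, ck + dk), ()):
                if other != atom_id:
                    if np.linalg.norm(coords[other] - coords[atom_id]) <= CUTOFF:
                        found.append(other)
        return found

    print(len(contacts(0)))   # number of contacts of atom 0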

  3. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection

    PubMed Central

    2011-01-01

    Background Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular-sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Results Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time between 31 and 81% (between 1.4 and 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. Conclusions The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery problems. The speed up enables on-the-fly calculation and visualization of contacts and rapid cross simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008. PMID:21831299

  4. Mapping mHealth (mobile health) and mobile penetrations in sub-Saharan Africa for strategic regional collaboration in mHealth scale-up: an application of exploratory spatial data analysis.

    PubMed

    Lee, Seohyun; Cho, Yoon-Min; Kim, Sun-Young

    2017-08-22

    Mobile health (mHealth), a term used for healthcare delivery via mobile devices, has gained attention as an innovative technology for better access to healthcare and support for the performance of health workers in the global health context. Despite the large expansion of mHealth across sub-Saharan Africa, regional collaboration for scale-up has not made progress over the last decade. As groundwork for strategic planning for regional collaboration, the study attempted to identify spatial patterns of mHealth implementation in sub-Saharan Africa using an exploratory spatial data analysis. In order to obtain comprehensive data on the total number of mHealth programs implemented between 2006 and 2016 in each of the 48 sub-Saharan African countries, we performed a systematic data collection from various sources, including the WHO eHealth Database, the World Bank Projects & Operations Database, and the USAID mHealth Database. Additional spatial analysis was performed for mobile cellular subscriptions per 100 people to suggest strategic regional collaboration for improving mobile penetration rates along with the mHealth initiative. Global Moran's I and the Local Indicator of Spatial Association (LISA) were calculated for mHealth programs and mobile subscriptions per 100 population to investigate spatial autocorrelation, which indicates the presence of local clustering and spatial disparities. From our systematic data collection, the total number of mHealth programs implemented in sub-Saharan Africa between 2006 and 2016 was 487 (the same program implemented in multiple countries was counted separately). Of these, the eastern region with 17 countries and the western region with 16 countries had 287 and 145 mHealth programs, respectively. Despite low levels of global autocorrelation, LISA enabled us to detect meaningful local clusters. Overall, the eastern part of sub-Saharan Africa shows high-high association for mHealth programs. As for mobile subscription rates per 100 population, the northern area shows extensive low-low association. This study aimed to shed some light on the potential for strategic regional collaboration for scale-up of mHealth and mobile penetration. Firstly, countries in the eastern area with much experience can take the lead role in pursuing regional collaboration for mHealth programs in sub-Saharan Africa. Secondly, collective effort in improving mobile penetration rates for the northern area is recommended.
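
    The global statistic used in the study can be computed by hand, as sketched below for a synthetic variable on a toy lattice with rook-contiguity weights. The lattice, counts, and weight scheme are illustrative assumptions; the actual country geometries and LISA cluster maps are not reproduced.

    # Manual computation of global Moran's I on a toy 6x8 lattice of "countries".
    import numpy as np

    rng = np.random.default_rng(6)
    ROWS, COLS = 6, 8
    n = ROWS * COLS
    y = rng.poisson(10, size=n).astype(float)        # synthetic counts per unit

    # Rook-contiguity weights on the lattice, then row-standardized.
    W = np.zeros((n, n))
    for r in range(ROWS):
        for c in range(COLS):
            i = r * COLS + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < ROWS and 0 <= cc < COLS:
                    W[i, rr * COLS + cc] = 1.0
    W = W / W.sum(axis=1, keepdims=True)

    z = y - y.mean()
    moran_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
    print(round(moran_i, 3))    # near 0 for spatially random data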

  5. Review of Spatial-Database System Usability: Recommendations for the ADDNS Project

    DTIC Science & Technology

    2007-12-01

    basic GIS background information, with a closer look at spatial databases. A GIS is also a computer-based system designed to capture, manage... foundation for deploying enterprise-wide spatial information systems. According to Oracle® [18], it enables accurate delivery of location-based services... Toronto TR 2007-141 Lanter, D.P. (1991). Design of a lineage-based meta-data base for GIS. Cartography and Geographic Information Systems, 18

  6. Spatial working memory capacity predicts bias in estimates of location.

    PubMed

    Crawford, L Elizabeth; Landy, David; Salthouse, Timothy A

    2016-09-01

    Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on working memory capacity limits and spatial memory bias to generate the prediction that those with lower working memory capacity will show greater bias in memory of the location of a single item. Reanalyzing data from a large study of cognitive aging, we find support for this prediction. Fitting separate models to individuals' data revealed a surprising variety of strategies. Some were consistent with Bayesian models of spatial category use; however, roughly half of the participants biased estimates outward in a way not predicted by current models, and others seemed to combine these strategies. These analyses highlight the importance of studying individuals when developing general models of cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
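
    A one-dimensional toy version of the idea is sketched below: a stand-in "fine-scale model" is evaluated only where a simple leave-one-out interpolation check suggests the existing samples are too sparse, and the interpolant is used everywhere else. The model function, tolerance, and error indicator are illustrative assumptions, not the HMM elastodynamics implementation.

    # Toy 1-D spatial adaptive sampling: call the expensive model only where
    # interpolation between already-computed samples appears unreliable.
    import numpy as np

    def fine_scale_model(x):
        """Stand-in for a costly fine-scale simulation (constitutive response)."""
        return np.sin(3.0 * x) + 0.3 * x

    N = 200                                   # macro-solver grid points
    x_grid = np.linspace(0.0, 4.0, N)
    TOL = 0.01

    # Start from a coarse set of sampled points and refine adaptively.
    sampled_x = list(np.linspace(0.0, 4.0, 5))
    sampled_y = [fine_scale_model(x) for x in sampled_x]

    for x in x_grid:
        xs, ys = np.array(sampled_x), np.array(sampled_y)
        # error indicator: leave out the nearest stored sample and see how much
        # the interpolated value changes; a large change means the database is
        # too sparse around x and the fine-scale model must be run.
        nearest = np.argmin(np.abs(xs - x))
        full = np.interp(x, xs, ys)
        loo = np.interp(x, np.delete(xs, nearest), np.delete(ys, nearest))
        if abs(full - loo) > TOL:
            sampled_x.append(x)
            sampled_y.append(fine_scale_model(x))
            order = np.argsort(sampled_x)
            sampled_x = list(np.array(sampled_x)[order])
            sampled_y = list(np.array(sampled_y)[order])

    macro_field = np.interp(x_grid, sampled_x, sampled_y)
    print(len(sampled_x), "fine-scale calls instead of", N)
    # maximum deviation of the interpolated field from the true fine-scale field
    print(np.max(np.abs(macro_field - fine_scale_model(x_grid))))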

  8. Oceanography Information System of Spanish Institute of Oceanography (IEO)

    NASA Astrophysics Data System (ADS)

    Tello, Olvido; Gómez, María; González, Sonsoles

    2016-04-01

    Since 1914, the Spanish Institute of Oceanography (IEO) has performed multidisciplinary studies of the marine environment. Some are systematic studies and others are specific studies for special requirements (the El Hierro submarine volcanic episode, the Prestige spill, and others). Different methodologies and data acquisition techniques are used depending on the aims of the studies. The acquired data are stored and presented in different formats. The information is organized into different databases according to the subject and the variables represented (geology, fisheries, aquaculture, pollution, habitats, etc.). For physical and chemical oceanography data, the IEO Data Center (CEDO) was created in 1964 in order to organize the data on physical and chemical variables, to standardize this information, and to serve the international data network SeaDataNet (www.seadatanet.org). This database integrates temperature, salinity, nutrient, and tidal data, and CEDO allows users to consult and download the data (http://indamar.ieo.es). For data about marine species, the SIRENO database was developed in 1999. All data about species collected in oceanographic surveys carried out by IEO researchers, as well as data from observers on fishing vessels, are incorporated into the SIRENO database, which stores catch data, biomass, abundance, etc. This system is based on an ORACLE architecture. Due to the large amount of information collected over the 100 years of IEO history, there is a clear need to organize, standardize, integrate, and relate the different databases and information, and to provide interoperability and access to the information. Consequently, the first initiative to organize the IEO spatial information into an Oceanography Information System, based on a geographic information system (GIS), emerged in 2000. The GIS was consolidated as the IEO institutional GIS, and the Spatial Data Infrastructure of IEO (IDEO) was created following the INSPIRE trend. All data included in the GIS have corresponding metadata according to ISO 19115 and INSPIRE. IDEO is based on Web services, quality of service, open standards, ISO (OGC) and INSPIRE standards, and both the GIS and IDEO provide access to the geographical marine information of the IEO. The GIS allows the information to be organized, visualized, consulted, and analyzed. The data from different IEO databases are integrated into a corporate GIS geodatabase (Esri format). This tool is essential in decision making on aspects like: - Protection of the marine environment - Sustainable management of resources - Natural hazards - Marine spatial planning. Examples of the use of GIS as a spatial analysis tool are: - Mud volcanoes explored in the LIFE-INDEMARES project. - The cartographic series on the Spanish continental shelf, developed from data integrated in the IEO marine GIS and acquired from oceanographic surveys in the ESPACE project. - Cartography developed from the information gathered in the Initial Assessment of the Marine Strategy Framework Directive. - Studies of natural hazards related to submarine canyons in the southeastern Spanish marine region. Currently the IEO is participating in many European initiatives, especially in several lots of EMODNET. The IEO also works in consonance with INSPIRE, Blue Growth, Horizon 2020, etc., to contribute to the knowledge of the marine environment, its protection, and its spatial planning, which are extremely relevant issues. In order to facilitate access to the Spatial Data Infrastructure of IEO, the IEO Geoportal was developed in 2012.
It mainly involves a metadata catalog, access to the data viewers and Web Services of IDEO. http://www.geo-ideo.ieo.es/geoportalideo/catalog/main/home.page

  9. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    USGS Publications Warehouse

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  10. Fast Updating National Geo-Spatial Databases with High Resolution Imagery: China's Methodology and Experience

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.

    2014-04-01

    Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, referring to their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at the 1:50,000 scale is a massive and difficult task for larger countries spanning several million square kilometers. This paper presents the research and technological development to support the national map updating at the 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. Many data sources had to be used and integrated to form a high-accuracy, quality-checked product, which in turn required up-to-date techniques of image matching, semantic integration, generalization, database management, and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different geographic places in the country.

  11. Functional Nonlinear Mixed Effects Models For Longitudinal Image Data

    PubMed Central

    Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu

    2015-01-01

    Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453
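
    A generic form of such a model, with notation that is illustrative rather than the authors' exact specification, can be written as

    \[
      y_{ij}(v) \;=\; f\bigl(v,\, t_{ij},\, x_{ij};\, \beta,\, b_i\bigr)
      \;+\; \eta_i(v, t_{ij}) \;+\; \epsilon_{ij}(v),
      \qquad b_i \sim \mathcal{N}(0, \Sigma_b),
    \]

    where y_{ij}(v) is the image measurement for subject i at visit j and voxel/vertex v, f is a nonlinear growth function with fixed effects \beta and subject-specific random effects b_i, x_{ij} collects covariates such as time or diagnostic status, \eta_i(v, t) is a spatial-temporal stochastic process, and \epsilon_{ij}(v) is measurement error.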

  12. Developing a spatial-temporal method for the geographic investigation of shoeprint evidence.

    PubMed

    Lin, Ge; Elmes, Gregory; Walnoha, Mike; Chen, Xiannian

    2009-01-01

    This article examines the potential of a spatial-temporal method for analysis of forensic shoeprint data. The large volume of shoeprint evidence recovered at crime scenes results in varied success in matching a print to a known shoe type and subsequently linking sets of matched prints to suspected offenders. Unlike DNA and fingerprint data, a major challenge is to reduce the uncertainty in linking sets of matched shoeprints to a suspected serial offender. Shoeprint data for 2004 were imported from the Greater London Metropolitan Area Bigfoot database into a geographic information system, and a spatial-temporal algorithm developed for this project. The results show that by using distance and time constraints interactively, the number of candidate shoeprints that can implicate one or few suspects can be substantially reduced. It concludes that the use of space-time and other ancillary information within a geographic information system can be quite helpful for forensic investigation.

  13. Spatial Indexing for Data Searching in Mobile Sensing Environments.

    PubMed

    Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus; Palaniswami, Marimuthu S

    2017-06-18

    Data searching and retrieval is one of the fundamental functionalities in many Web of Things applications, which need to collect, process and analyze huge amounts of sensor stream data. The problem in fact has been well studied for data generated by sensors that are installed at fixed locations; however, challenges emerge along with the popularity of opportunistic sensing applications in which mobile sensors keep reporting observation and measurement data at variable intervals and changing geographical locations. To address these challenges, we develop the Geohash-Grid Tree, a spatial indexing technique specially designed for searching data integrated from heterogeneous sources in a mobile sensing environment. Results of the experiments on a real-world dataset collected from the SmartSantander smart city testbed show that the index structure allows efficient search based on spatial distance, range and time windows in a large time series database.
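
    A simplified grid-plus-time-slot index conveys the kind of lookup such a structure supports: observations are bucketed by a coarse lat/lon cell and an hourly slot, so a spatial-range-plus-time-window query scans only the relevant buckets. This is not the Geohash-Grid Tree itself; the cell size, slot length, and synthetic readings are assumptions.

    # Grid-plus-time bucketing for mobile sensor observations.
    from collections import defaultdict
    from datetime import datetime, timedelta
    import math, random

    CELL = 0.01                      # spatial cell size in degrees (~1 km)
    SLOT_S = 3600                    # temporal slot size in seconds (1 hour)
    index = defaultdict(list)        # (lat_cell, lon_cell, time_slot) -> observations

    def cell(x):
        return math.floor(x / CELL)

    def slot(ts):
        return math.floor(ts.timestamp() / SLOT_S)

    def insert(lat, lon, ts, value):
        index[(cell(lat), cell(lon), slot(ts))].append((lat, lon, ts, value))

    def query(lat_rng, lon_rng, t_rng):
        """Scan only buckets overlapping the spatial range and time window."""
        (lat0, lat1), (lon0, lon1), (t0, t1) = lat_rng, lon_rng, t_rng
        hits = []
        for la in range(cell(lat0), cell(lat1) + 1):
            for lo in range(cell(lon0), cell(lon1) + 1):
                for sl in range(slot(t0), slot(t1) + 1):
                    for lat, lon, ts, value in index.get((la, lo, sl), ()):
                        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1 and t0 <= ts <= t1:
                            hits.append((lat, lon, ts, value))
        return hits

    # Synthetic mobile-sensor readings scattered around Santander-like coordinates.
    random.seed(0)
    base = datetime(2017, 6, 1)
    for _ in range(10000):
        insert(43.46 + random.uniform(0, 0.05), -3.81 + random.uniform(0, 0.05),
               base + timedelta(seconds=random.randint(0, 86400)), random.random())

    print(len(query((43.47, 43.48), (-3.80, -3.79), (base, base + timedelta(hours=6)))))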

  14. Spatial Indexing for Data Searching in Mobile Sensing Environments

    PubMed Central

    Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus; Palaniswami, Marimuthu S.

    2017-01-01

    Data searching and retrieval is one of the fundamental functionalities in many Web of Things applications, which need to collect, process and analyze huge amounts of sensor stream data. The problem in fact has been well studied for data generated by sensors that are installed at fixed locations; however, challenges emerge along with the popularity of opportunistic sensing applications in which mobile sensors keep reporting observation and measurement data at variable intervals and changing geographical locations. To address these challenges, we develop the Geohash-Grid Tree, a spatial indexing technique specially designed for searching data integrated from heterogeneous sources in a mobile sensing environment. Results of the experiments on a real-world dataset collected from the SmartSantander smart city testbed show that the index structure allows efficient search based on spatial distance, range and time windows in a large time series database. PMID:28629156

  15. A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework

    EPA Science Inventory

    Managing the world’s largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a hierarchi...

  16. In-database processing of a large collection of remote sensing data: applications and implementation

    NASA Astrophysics Data System (ADS)

    Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina

    2016-04-01

    Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and has inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to Earth Sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability, and provide a high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating files into a relational database as foreign data sources and performing analytical processing inside the database engine. Thereby, a higher-level query language can efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or grid cell to complex aggregation over spatial or temporal extents spanning a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL - a widely used higher-level declarative query language - simplifies interoperability between desktop GIS, web applications, geographic web services, and interactive scientific applications (MATLAB, IPython). The system also automatically ingests direct readout data from meteorological and research satellites in near-real time, with distributed acquisition workflows managed by the Taverna workflow engine [2]. The system has demonstrated its utility in performing non-trivial analytic processing such as the computation of the Robust Satellite Technique (RST) indices [3]. It has been useful in tasks such as studying urban heat islands, analyzing patterns in the distribution of wildfire occurrences, and detecting phenomena related to seismic and earthquake activity. Initial experience has highlighted several limitations of the proposed approach, yet it has demonstrated the ability to facilitate the use of large archives of remote sensing data by geoscientists. 1. J.G. Acker, G. Leptoukh, Online analysis enhances use of NASA Earth science data. EOS Trans. AGU, 2007, 88(2), P. 14-17. 2. D. Hull, K. Wolstencroft, R. Stevens, C. Goble, M.R. Pocock, P. Li and T. Oinn, Taverna: a tool for building and running workflows of services. Nucleic Acids Research, 2006, V. 34, P. W729-W732. 3. V. Tramutoli, G. Di Bello, N. Pergola, S. Piscitelli, Robust satellite techniques for remote sensing of seismically active areas. Annals of Geophysics, 2001, 44(2), P. 295-312.
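
    The "query instead of script" style this enables can be conveyed with a toy example: once pixel values are visible to the engine as rows, an aggregation over an arbitrary spatial window and date range is a single declarative statement. SQLite and the invented table layout below are stand-ins for illustration only; the system described above integrates the original files into PostgreSQL as foreign data sources rather than copying them into ordinary tables.

    # Toy in-database aggregation over "pixel" rows.
    import sqlite3
    import random

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE pixel (
                       acq_date TEXT, sensor TEXT,
                       row INTEGER, col INTEGER, value REAL)""")

    random.seed(0)
    rows = [(f"2015-07-{d:02d}", "MODIS", r, c, 280 + random.random() * 30)
            for d in range(1, 11) for r in range(50) for c in range(50)]
    con.executemany("INSERT INTO pixel VALUES (?, ?, ?, ?, ?)", rows)

    # Mean value over a spatial window and a date range, entirely inside the engine.
    (mean_val,) = con.execute("""
        SELECT AVG(value) FROM pixel
        WHERE row BETWEEN 10 AND 20 AND col BETWEEN 30 AND 40
          AND acq_date BETWEEN '2015-07-03' AND '2015-07-07'
    """).fetchone()
    print(round(mean_val, 2))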

  17. A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework

    USGS Publications Warehouse

    Wang, Lizhu; Riseng, Catherine M.; Mason, Lacey; Werhrly, Kevin; Rutherford, Edward; McKenna, James E.; Castiglione, Chris; Johnson, Lucinda B.; Infante, Dana M.; Sowa, Scott P.; Robertson, Mike; Schaeffer, Jeff; Khoury, Mary; Gaiot, John; Hollenhurst, Tom; Brooks, Colin N.; Coscarelli, Mark

    2015-01-01

    Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a spatial classification framework and database — Great Lakes Aquatic Habitat Framework (GLAHF). GLAHF consists of catchments, coastal terrestrial, coastal margin, nearshore, and offshore zones that encompass the entire Great Lakes Basin. The catchments captured in the database as river pour points or coastline segments are attributed with data known to influence physicochemical and biological characteristics of the lakes from the catchments. The coastal terrestrial zone consists of 30-m grid cells attributed with data from the terrestrial region that has direct connection with the lakes. The coastal margin and nearshore zones consist of 30-m grid cells attributed with data describing the coastline conditions, coastal human disturbances, and moderately to highly variable physicochemical and biological characteristics. The offshore zone consists of 1.8-km grid cells attributed with data that are spatially less variable compared with the other aquatic zones. These spatial classification zones and their associated data are nested within lake sub-basins and political boundaries and allow the synthesis of information from grid cells to classification zones, within and among political boundaries, lake sub-basins, Great Lakes, or within the entire Great Lakes Basin. This spatially structured database could help the development of basin-wide management plans, prioritize locations for funding and specific management actions, track protection and restoration progress, and conduct research for science-based decision making.

  18. National Transportation Atlas Databases : 2002

    DOT National Transportation Integrated Search

    2002-01-01

    The National Transportation Atlas Databases 2002 (NTAD2002) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  19. National Transportation Atlas Databases : 2010

    DOT National Transportation Integrated Search

    2010-01-01

    The National Transportation Atlas Databases 2010 (NTAD2010) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  20. National Transportation Atlas Databases : 2006

    DOT National Transportation Integrated Search

    2006-01-01

    The National Transportation Atlas Databases 2006 (NTAD2006) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  1. National Transportation Atlas Databases : 2005

    DOT National Transportation Integrated Search

    2005-01-01

    The National Transportation Atlas Databases 2005 (NTAD2005) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  2. National Transportation Atlas Databases : 2008

    DOT National Transportation Integrated Search

    2008-01-01

    The National Transportation Atlas Databases 2008 (NTAD2008) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  3. National Transportation Atlas Databases : 2003

    DOT National Transportation Integrated Search

    2003-01-01

    The National Transportation Atlas Databases 2003 (NTAD2003) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  4. National Transportation Atlas Databases : 2004

    DOT National Transportation Integrated Search

    2004-01-01

    The National Transportation Atlas Databases 2004 (NTAD2004) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  5. National Transportation Atlas Databases : 2009

    DOT National Transportation Integrated Search

    2009-01-01

    The National Transportation Atlas Databases 2009 (NTAD2009) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  6. National Transportation Atlas Databases : 2007

    DOT National Transportation Integrated Search

    2007-01-01

    The National Transportation Atlas Databases 2007 (NTAD2007) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  7. National Transportation Atlas Databases : 2012

    DOT National Transportation Integrated Search

    2012-01-01

    The National Transportation Atlas Databases 2012 (NTAD2012) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  8. National Transportation Atlas Databases : 2011

    DOT National Transportation Integrated Search

    2011-01-01

    The National Transportation Atlas Databases 2011 (NTAD2011) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  9. A DBMS architecture for global change research

    NASA Astrophysics Data System (ADS)

    Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.

    1993-08-01

    The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend the database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.

  10. Extraction of land cover change information from ENVISAT-ASAR data in Chengdu Plain

    NASA Astrophysics Data System (ADS)

    Xu, Wenbo; Fan, Jinlong; Huang, Jianxi; Tian, Yichen; Zhang, Yong

    2006-10-01

    Land cover data are essential to most global change research objectives, including the assessment of current environmental conditions and the simulation of future environmental scenarios that ultimately lead to public policy development. The Chinese Academy of Sciences generated a nationwide land cover database in order to quantify and spatially characterize land use/cover changes (LUCC) in the 1990s. Keeping this database reliable requires regular updates, but it is difficult to obtain the remote sensing data needed to extract land cover change information at a large scale. Because optical remote sensing data are hard to acquire over the Chengdu Plain, the objective of this research was to evaluate multitemporal ENVISAT advanced synthetic aperture radar (ASAR) data for extracting land cover change information. Based on fieldwork and the nationwide 1:100,000 land cover database, the paper assesses several land cover changes in the Chengdu Plain, for example, crop to buildings, forest to buildings, and forest to bare land. The results show that ENVISAT ASAR data have great potential for extracting land cover change information.

  11. Mining moving object trajectories in location-based services for spatio-temporal database update

    NASA Astrophysics Data System (ADS)

    Guo, Danhuai; Cui, Weihong

    2008-10-01

    Advances in wireless transmission and mobile technology applied to LBS (Location-based Services) flood us with large amounts of moving object data. Vast amounts of data gathered from the position sensors of mobile phones, PDAs, or vehicles hide interesting and valuable knowledge and describe the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geographic features has largely been ignored, and the value of spatio-temporal trajectory data has not been fully exploited. Urban expansion and frequent changes to town plans produce a large amount of outdated or imprecise data in the spatial databases of LBS, and these data cannot be updated in a timely and efficient manner by manual processing. In this paper we introduce a data mining approach to movement pattern extraction for moving objects, build a model to describe the relationship between the movement patterns of LBS mobile objects and their environment, and put forward a spatio-temporal database update strategy for LBS databases based on spatio-temporal mining of trajectories. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of the interaction between a trajectory and its environment, the design of a spatio-temporal database update strategy based on moving object data mining, and the experimental application of spatio-temporal database updating by mining moving object trajectories.

  12. Ultrafast Target Recognition via Super-Parallel Holograph Based Correlator, RAM and Associative Memory

    DTIC Science & Technology

    2008-03-11

    ... (JTC) based on a dynamic material answers the challenge of fast correlation with large databases. Images retrieved from the SPHRAM and used as the ... transform (JTC) and matched spatial filter or VanderLugt (VLC) correlators, either of which can be implemented in real-time by degenerate four-wave mixing in ... proposed system, consisting of the SPHROM coupled with a shift-invariant real-time VLC. The correlation is performed in the VLC architecture to ...

  13. Coincident scales of forest feedback on climate and conservation in a diversity hot spot

    PubMed Central

    Webb, Thomas J; Gaston, Kevin J; Hannah, Lee; Ian Woodward, F

    2005-01-01

    The dynamic relationship between vegetation and climate is now widely acknowledged. Climate influences the distribution of vegetation; and through a number of feedback mechanisms vegetation affects climate. This implies that land-use changes such as deforestation will have climatic consequences. However, the spatial scales at which such feedbacks occur remain largely unknown. Here, we use a large database of precipitation and tree cover records for an area of the biodiversity-rich Atlantic forest region in south eastern Brazil to investigate the forest–rainfall feedback at a range of spatial scales from ca 10¹–10⁴ km². We show that the strength of the feedback increases up to scales of at least 10³ km², with the climate at a particular locality influenced by the pattern of landcover extending over a large area. Thus, smaller forest fragments, even if well protected, may suffer degradation due to the climate responding to land-use change in the surrounding area. Atlantic forest vertebrate taxa also require large areas of forest to support viable populations. Areas of forest of ca 10³ km² would be large enough to support such populations at the same time as minimizing the risk of climatic feedbacks resulting from deforestation. PMID:16608697

  14. Coincident scales of forest feedback on climate and conservation in a diversity hot spot.

    PubMed

    Webb, Thomas J; Gaston, Kevin J; Hannah, Lee; Ian Woodward, F

    2006-03-22

    The dynamic relationship between vegetation and climate is now widely acknowledged. Climate influences the distribution of vegetation; and through a number of feedback mechanisms vegetation affects climate. This implies that land-use changes such as deforestation will have climatic consequences. However, the spatial scales at which such feedbacks occur remain largely unknown. Here, we use a large database of precipitation and tree cover records for an area of the biodiversity-rich Atlantic forest region in south eastern Brazil to investigate the forest-rainfall feedback at a range of spatial scales from ca 10(1)-10(4) km2. We show that the strength of the feedback increases up to scales of at least 10(3) km2, with the climate at a particular locality influenced by the pattern of landcover extending over a large area. Thus, smaller forest fragments, even if well protected, may suffer degradation due to the climate responding to land-use change in the surrounding area. Atlantic forest vertebrate taxa also require large areas of forest to support viable populations. Areas of forest of ca 10(3) km2 would be large enough to support such populations at the same time as minimizing the risk of climatic feedbacks resulting from deforestation.

  15. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  16. An approach for mapping large-area impervious surfaces: Synergistic use of Landsat-7 ETM+ and high spatial resolution imagery

    USGS Publications Warehouse

    Yang, Limin; Huang, Chengquan; Homer, Collin G.; Wylie, Bruce K.; Coan, Michael

    2003-01-01

    A wide range of urban ecosystem studies, including urban hydrology, urban climate, land use planning, and resource management, require current and accurate geospatial data of urban impervious surfaces. We developed an approach to quantify urban impervious surfaces as a continuous variable by using multisensor and multisource datasets. Subpixel percent impervious surfaces at 30-m resolution were mapped using a regression tree model. The utility, practicality, and affordability of the proposed method for large-area imperviousness mapping were tested over three spatial scales (Sioux Falls, South Dakota, Richmond, Virginia, and the Chesapeake Bay areas of the United States). Average error of predicted versus actual percent impervious surface ranged from 8.8 to 11.4%, with correlation coefficients from 0.82 to 0.91. The approach is being implemented to map impervious surfaces for the entire United States as one of the major components of the circa 2000 national land cover database.
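
    The abstract above describes a regression tree model that predicts subpixel percent impervious surface from multisensor imagery. The sketch below shows the general idea with scikit-learn and synthetic predictors standing in for Landsat-7 ETM+ band values; it is not the authors' implementation.

    ```python
    # Regression-tree estimation of percent impervious surface from synthetic
    # multispectral predictors; data and model settings are invented.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n_pixels, n_bands = 5000, 6                            # hypothetical 30-m pixels, 6 bands
    X = rng.uniform(0.0, 0.6, size=(n_pixels, n_bands))    # synthetic surface reflectance
    # Synthetic "truth": imperviousness loosely tied to two bands, clipped to 0-100%.
    y = np.clip(120 * X[:, 2] - 60 * X[:, 3] + rng.normal(0, 5, n_pixels), 0, 100)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20).fit(X_train, y_train)
    print("MAE (% impervious):", round(mean_absolute_error(y_test, tree.predict(X_test)), 2))
    ```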

  17. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

    2015-01-01

    Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44, 2.28, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
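
    The abstract above ranks interpolation methods by three cross-validation error metrics. A minimal sketch of those metrics follows; the observed and predicted arrays are placeholders, not the study's station data.

    ```python
    # RMSE, MAE, and MAPE as reportedly used to compare the interpolators;
    # the observed/predicted values below are invented for illustration.
    import numpy as np

    def rmse(obs, pred):
        return np.sqrt(np.mean((obs - pred) ** 2))

    def mae(obs, pred):
        return np.mean(np.abs(obs - pred))

    def mape(obs, pred):
        return 100.0 * np.mean(np.abs((obs - pred) / obs))

    observed = np.array([12.3, 15.1, 9.8, 20.4])    # e.g., station maximum temperature (deg C)
    predicted = np.array([11.9, 15.8, 10.5, 19.6])  # e.g., thin-plate spline estimates
    print(rmse(observed, predicted), mae(observed, predicted), mape(observed, predicted))
    ```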

  18. Detecting Spatial Patterns of Natural Hazards from the Wikipedia Knowledge Base

    NASA Astrophysics Data System (ADS)

    Fan, J.; Stewart, K.

    2015-07-01

    The Wikipedia database is a data source of immense richness and variety. Included in this database are thousands of geotagged articles, including, for example, almost real-time updates on current and historic natural hazards. This includes user-contributed information about the location of natural hazards, the extent of the disasters, and many details relating to response, impact, and recovery. In this research, a computational framework is proposed to detect spatial patterns of natural hazards from the Wikipedia database by combining topic modeling methods with spatial analysis techniques. The computation is performed on the Neon Cluster, a high-performance computing cluster at the University of Iowa. This work uses wildfires as the exemplar hazard, but the framework is easily generalizable to other types of hazards, such as hurricanes or flooding. Latent Dirichlet Allocation (LDA) modeling is first applied to the entire English Wikipedia dump, transforming the database dump into a 500-dimension topic model. Over 230,000 geo-tagged articles are then extracted from the Wikipedia database, spatially covering the contiguous United States. The geo-tagged articles are converted into an LDA topic space based on the topic model, with each article being represented as a weighted multidimensional topic vector. By treating each article's topic vector as an observed point in geographic space, a probability surface is calculated for each of the topics. In this work, Wikipedia articles about wildfires are extracted from the Wikipedia database, forming a wildfire corpus and creating a basis for the topic vector analysis. The spatial distribution of wildfire outbreaks in the US is estimated by calculating the weighted sum of the topic probability surfaces using a map algebra approach, and mapped using GIS. To provide an evaluation of the approach, the estimation is compared to wildfire hazard potential maps created by the USDA Forest Service.
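
    As a rough illustration of the two core steps described above (fitting an LDA topic model, then representing geotagged articles as topic-weight vectors), the sketch below uses scikit-learn and a toy corpus; the paper itself trained a 500-topic model on the full English Wikipedia dump.

    ```python
    # Illustrative only: a toy corpus and scikit-learn LDA standing in for the
    # Wikipedia-scale pipeline described in the abstract.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "wildfire burned forest acres evacuation smoke",
        "hurricane landfall storm surge flooding coast",
        "wildfire containment firefighters drought heat",
        "river flooding levee rainfall storm damage",
    ]
    counts = CountVectorizer().fit_transform(docs)

    # 1. Train the topic model (3 topics here; 500 in the paper).
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

    # 2. Represent each (geotagged) article as a topic-weight vector; the paper
    #    then spreads these weights over geographic space to build per-topic
    #    probability surfaces and combines them with map algebra.
    topic_vectors = lda.transform(counts)
    print(topic_vectors.round(2))
    ```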

  19. CampusGIS of the University of Cologne: a tool for orientation, navigation, and management

    NASA Astrophysics Data System (ADS)

    Baaser, U.; Gnyp, M. L.; Hennig, S.; Hoffmeister, D.; Köhn, N.; Laudien, R.; Bareth, G.

    2006-10-01

    The working group for GIS and Remote Sensing at the Department of Geography at the University of Cologne has established a WebGIS called the CampusGIS of the University of Cologne. The overall task of the CampusGIS is to connect several existing databases at the University of Cologne with spatial data. These existing databases comprise data about staff, buildings, rooms, lectures, and general infrastructure such as bus stops. This information had not yet been linked to its spatial location. Therefore, a GIS-based method was developed to link all the different databases to spatial entities. In keeping with the philosophy of the CampusGIS, an online GUI was programmed that enables users to search for staff, buildings, or institutions. The query results are linked to the GIS database, which allows the visualization of the spatial location of the searched entity. The system was established in 2005 and has been operational since early 2006. In this contribution, the focus is on further developments. First results are presented for (i) routing services, (ii) GUIs for mobile devices, and (iii) infrastructure management tools within the CampusGIS. Consequently, the CampusGIS is not only available for spatial information retrieval and orientation; it also serves for on-campus navigation and administrative management.

  20. Spatial cyberinfrastructures, ontologies, and the humanities.

    PubMed

    Sieber, Renee E; Wellen, Christopher C; Jin, Yuan

    2011-04-05

    We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success.

  1. Estimating Biofuel Feedstock Water Footprints Using System Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inman, Daniel; Warner, Ethan; Stright, Dana

    Increased biofuel production has prompted concerns about the environmental tradeoffs of biofuels compared to petroleum-based fuels. Biofuel production in general, and feedstock production in particular, is under increased scrutiny. Water footprinting (measuring direct and indirect water use) has been proposed as one measure to evaluate water use in the context of concerns about depleting rural water supplies through activities such as irrigation for large-scale agriculture. Water footprinting literature has often been limited in one or more key aspects: complete assessment across multiple water stocks (e.g., vadose zone, surface, and ground water stocks), geographical resolution of data, consistent representation of many feedstocks, and flexibility to perform scenario analysis. We developed a model called BioSpatial H2O using a system dynamics modeling and database framework. BioSpatial H2O could be used to consistently evaluate the complete water footprints of multiple biomass feedstocks at high geospatial resolutions. BioSpatial H2O has the flexibility to perform simultaneous scenario analysis of current and potential future crops under alternative yield and climate conditions. In this proof-of-concept paper, we modeled corn grain (Zea mays L.) and soybeans (Glycine max) under current conditions as illustrative results. BioSpatial H2O links to a unique database that houses annual spatially explicit climate, soil, and plant physiological data. Parameters from the database are used as inputs to our system dynamics model for estimating annual crop water requirements using daily time steps. Based on our review of the literature, estimated green water footprints are comparable to other modeled results, suggesting that BioSpatial H2O is computationally sound for future scenario analysis. Our modeling framework builds on previous water use analyses to provide a platform for scenario-based assessment. BioSpatial H2O's system dynamics is a flexible and user-friendly interface for on-demand, spatially explicit, water use scenario analysis for many US agricultural crops. Built-in controls permit users to quickly make modifications to the model assumptions, such as those affecting yield, and to see the implications of those results in real time. BioSpatial H2O's dynamic capabilities and adjustable climate data allow for analyses of water use and management scenarios to inform current and potential future bioenergy policies. The model could also be adapted for scenario analysis of alternative climatic conditions and comparison of multiple crops. The results of such an analysis would help identify risks associated with water use competition among feedstocks in certain regions. Results could also inform research and development efforts that seek to reduce water-related risks of biofuel pathways.
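
    The abstract describes a system dynamics model that estimates crop water requirements at a daily time step. The toy stock-and-flow sketch below illustrates that general approach with a single soil-moisture stock; the parameter values and forcing series are invented and are not BioSpatial H2O inputs.

    ```python
    # Toy daily water-balance sketch in the spirit of a system dynamics crop-water
    # model: one soil-moisture stock, precipitation inflow, evapotranspiration
    # outflow, and unmet demand counted as an irrigation (blue water) requirement.
    def simulate_soil_water(precip_mm, et_demand_mm, capacity_mm=150.0, initial_mm=75.0):
        """Return daily (soil-water storage, irrigation requirement) pairs in mm."""
        storage, results = initial_mm, []
        for rain, demand in zip(precip_mm, et_demand_mm):
            storage += rain                      # inflow: precipitation
            et = min(demand, storage)            # outflow: evapotranspiration, limited by storage
            storage -= et
            irrigation = max(0.0, demand - et)   # unmet demand to be met by irrigation
            storage = min(storage, capacity_mm)  # excess above capacity drains away
            results.append((storage, irrigation))
        return results

    print(simulate_soil_water(precip_mm=[0, 12, 0, 0, 5], et_demand_mm=[4, 4, 5, 5, 4]))
    ```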

  2. Integrating GIS, Archeology, and the Internet.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sera White; Brenda Ringe Pace; Randy Lee

    2004-08-01

    At the Idaho National Engineering and Environmental Laboratory's (INEEL) Cultural Resource Management Office, a newly developed Data Management Tool (DMT) is improving management and long-term stewardship of cultural resources. The fully integrated system links an archaeological database, a historical database, and a research database to spatial data through a customized user interface using ArcIMS and Active Server Pages. Components of the new DMT are tailored specifically to the INEEL and include automated data entry forms for historic and prehistoric archaeological sites, specialized queries and reports that address both yearly and project-specific documentation requirements, and unique field recording forms. The predictive modeling component increases the DMT’s value for land use planning and long-term stewardship. The DMT enhances the efficiency of archive searches, improving customer service, oversight, and management of the large INEEL cultural resource inventory. In the future, the DMT will facilitate data sharing with regulatory agencies, tribal organizations, and the general public.

  3. Large perceptual distortions of locomotor action space occur in ground-based coordinates: Angular expansion and the large-scale horizontal-vertical illusion.

    PubMed

    Klein, Brennan J; Li, Zhi; Durgin, Frank H

    2016-04-01

    What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Incorporation of spatial interactions in location networks to identify critical geo-referenced routes for assessing disease control measures on a large-scale campus.

    PubMed

    Wen, Tzai-Hung; Chin, Wei Chien Benny

    2015-04-14

    Respiratory diseases mainly spread through interpersonal contact. Class suspension is the most direct strategy to prevent the spread of disease through elementary or secondary schools by blocking the contact network. However, as university students usually attend courses in different buildings, the daily contact patterns on a university campus are complicated, and once disease clusters have occurred, suspending classes is far from an efficient strategy to control disease spread. The purpose of this study is to propose a methodological framework for generating campus location networks from a routine administration database, analyzing the community structure of the network, and identifying the critical links and nodes for blocking respiratory disease transmission. The data comes from the student enrollment records of a major comprehensive university in Taiwan. We combined the social network analysis and spatial interaction model to establish a geo-referenced community structure among the classroom buildings. We also identified the critical links among the communities that were acting as contact bridges and explored the changes in the location network after the sequential removal of the high-risk buildings. Instead of conducting a questionnaire survey, the study established a standard procedure for constructing a location network on a large-scale campus from a routine curriculum database. We also present how a location network structure at a campus could function to target the high-risk buildings as the bridges connecting communities for blocking disease transmission.
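
    The framework described above builds a building-to-building location network from routine enrollment records and then analyzes its community structure. A hedged sketch of that construction is given below, using networkx and invented (student, building) records; it is not the authors' implementation.

    ```python
    # Project (student, building) enrollment pairs onto a building-building network
    # weighted by shared students, then detect communities by modularity.
    from collections import Counter
    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    records = [("s1", "BldgA"), ("s1", "BldgB"), ("s2", "BldgA"), ("s2", "BldgC"),
               ("s3", "BldgB"), ("s3", "BldgC"), ("s4", "BldgD"), ("s4", "BldgC")]

    buildings_by_student = {}
    for student, bldg in records:
        buildings_by_student.setdefault(student, set()).add(bldg)

    # Edge weight = number of students attending both buildings.
    weights = Counter()
    for bldgs in buildings_by_student.values():
        for a, b in combinations(sorted(bldgs), 2):
            weights[(a, b)] += 1

    G = nx.Graph()
    for (a, b), w in weights.items():
        G.add_edge(a, b, weight=w)

    print(list(greedy_modularity_communities(G, weight="weight")))
    ```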

  5. Big biology meets microclimatology: defining thermal niches of ectotherms at landscape scales for conservation planning.

    PubMed

    Isaak, Daniel J; Wenger, Seth J; Young, Michael K

    2017-04-01

    Temperature profoundly affects ecology, a fact ever more evident as the ability to measure thermal environments increases and global changes alter these environments. The spatial structure of thermalscapes is especially relevant to the distribution and abundance of ectothermic organisms, but the ability to describe biothermal relationships at extents and grains relevant to conservation planning has been limited by small or sparse data sets. Here, we combine a large occurrence database of >23 000 aquatic species surveys with stream microclimate scenarios supported by an equally large temperature database for a 149 000-km mountain stream network to describe thermal relationships for 14 fish and amphibian species. Species occurrence probabilities peaked across a wide range of temperatures (7.0-18.8°C) but distinct warm- or cold-edge distribution boundaries were apparent for all species and represented environments where populations may be most sensitive to thermal changes. Warm-edge boundary temperatures for a native species of conservation concern were used with geospatial data sets and a habitat occupancy model to highlight subsets of the network where conservation measures could benefit local populations by maintaining cool temperatures. Linking that strategic approach to local estimates of habitat impairment remains a key challenge but is also an opportunity to build relationships and develop synergies between the research, management, and regulatory communities. As with any data mining or species distribution modeling exercise, care is required in analysis and interpretation of results, but the use of large biological data sets with accurate microclimate scenarios can provide valuable information about the thermal ecology of many ectotherms and a spatially explicit way of guiding conservation investments. © 2017 by the Ecological Society of America.

  6. Spatial and symbolic queries for 3D image data

    NASA Astrophysics Data System (ADS)

    Benson, Daniel C.; Zick, Gregory L.

    1992-04-01

    We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.

  7. Using online database for landslide susceptibility assessment with an example from the Veneto Region (north-eastern Italy).

    NASA Astrophysics Data System (ADS)

    Floris, Mario; Squarzoni, Cristina; Zorzi, Luca; D'Alpaos, Andrea; Iafelice, Maria

    2010-05-01

    Landslide susceptibility maps describe landslide-prone areas by means of the spatial correlation between landslides and related factors, derived from different kinds of datasets: geological, geotechnical and geomechanical maps, hydrogeological maps, landslide maps, vector and raster terrain data, and real-time inclinometer and pore pressure data. In the last decade, thanks to the increasing use of web-based tools for the management, sharing and communication of territorial information, many Web-based Geographical Information Systems (WebGIS) have been created by local and national governments, universities and research centres. Nowadays there is a strong proliferation of geological WebGIS or GeoBrowsers allowing the free download of spatial information. There are global cartographical portals that provide free downloads of DTMs and other vector data covering the whole planet (http://www.webgis.com). At larger scales, there are WebGIS covering an entire nation (http://www.agiweb.org), a specific region of a country (http://www.mrt.tas.gov.au), or a single municipality (http://sitn.ne.ch/). Moreover, portals managed by local and academic government (http://turtle.ags.gov.ab.ca/Peace_River/Site/) or by private agencies (http://www.bbt-se.com) are noteworthy. In Italy, the first national projects for the creation of WebGIS and web-based databases began during the 1980s and have evolved, through the years, into the present set of WebGIS with different territorial extents: national (Italian National Cartographical Portal, http://www.pcn.minambiente.it; E-GEO Project, http://www.egeo.unisi.it), interregional (River Tiber Basin Authority, www.abtevere.it), and regional (Veneto Region, www.regione.veneto.it). We investigated most of the Italian WebGIS in order to verify their geographic range and the availability and quality of data useful for landslide hazard analyses. We noticed large variability in the information accessible through the different browsers. In particular, the GeoBrowsers of the Trento and Bolzano Provinces (http://www.provincia.bz.it; http://www.territorio.provincia.tn.it) provide a larger amount of data than the other regional and interregional WebGIS, which generally allow only the download of topographic data. Recently, the Italian Institute for Environmental Protection and Research, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale), has made the Italian Inventory of Landslides (IFFI Project) freely available. The inventory contains information derived from the census of all instability phenomena in Italy, offering a basic cognitive instrument for landslide hazard evaluation. For landslide hazard assessment it is essential to evaluate the real effectiveness of the available data. Hence, we tested the effectiveness of the web databases for evaluating landslide susceptibility in the Euganean Hills Regional Park (185.5 km2), located SE of Padua (Veneto Region, Italy). We used data available from three online spatial databases: the Veneto Region Cartographic Portal (http://www.regione.veneto.it), for vector terrain data at 1:5000 scale; the IFFI archive (http://www.sinanet.apat.it), for information concerning landslides; and the National Cartographic Portal of the Italian Ministry of Environment (http://www.pcn.minambiente.it), for multi-temporal orthophotos. The landslide susceptibility was evaluated using a simple probabilistic analysis considering the relationships between landslides and DEM-derived factors, such as slope, curvature and aspect. For the validation of the analysis, we performed a spatial test by subdividing the study area into two sectors: a training area and a test area. The obtained results show that the current incompleteness of the online spatial databases for the Veneto Region allows only regional and medium-scale (>1:25,000) susceptibility analyses. Data about lithology, land use, groundwater and other relevant factors are absent. In addition, the lack of data on the temporal evolution of the landslides permits only a spatial analysis, preventing a complete evaluation of the landslide hazard.
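
    The susceptibility calculation described above relates landslide occurrence to DEM-derived factor classes. A minimal frequency-ratio-style sketch of such a simple probabilistic analysis is shown below; it is not the authors' exact method, and the slope-class counts are invented rather than Euganean Hills data.

    ```python
    # Frequency ratio per factor class:
    #   FR(class) = (landslide cells in class / all landslide cells)
    #             / (cells in class / all cells)
    # FR > 1 indicates above-average susceptibility for that class.
    cells_per_class = {"0-10 deg": 50000, "10-20 deg": 30000, ">20 deg": 20000}
    landslides_per_class = {"0-10 deg": 40, "10-20 deg": 180, ">20 deg": 280}

    total_cells = sum(cells_per_class.values())
    total_slides = sum(landslides_per_class.values())

    for cls in cells_per_class:
        fr = (landslides_per_class[cls] / total_slides) / (cells_per_class[cls] / total_cells)
        print(cls, round(fr, 2))
    ```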

  8. Issues and prospects for the next generation of the spatial data transfer standard (SDTS)

    USGS Publications Warehouse

    Arctur, D.; Hair, D.; Timson, G.; Martin, E.P.; Fegeas, R.

    1998-01-01

    The Spatial Data Transfer Standard (SDTS) was designed to be capable of representing virtually any data model, rather than being a prescription for a single data model. It has fallen short of this ambitious goal for a number of reasons, which this paper investigates. In addition to issues that might have been anticipated in its design, a number of new issues have arisen since its initial development. These include the need to support explicit feature definitions, incremental update, value-added extensions, and change tracking within large, national databases. It is time to consider the next stage of evolution for SDTS. This paper suggests development of an Object Profile for SDTS that would integrate concepts for a dynamic schema structure, OpenGIS interface, and CORBA IDL.

  9. WikiPEATia - a web based platform for assembling peatland data through ‘crowd sourcing’

    NASA Astrophysics Data System (ADS)

    Wisser, D.; Glidden, S.; Fieseher, C.; Treat, C. C.; Routhier, M.; Frolking, S. E.

    2009-12-01

    The Earth System Science community is realizing that peatlands are an important and unique terrestrial ecosystem that has not yet been well integrated into large-scale earth system analyses. A major hurdle is the lack of accessible, geospatial data on peatland distribution, coupled with data on peatland properties (e.g., vegetation composition, peat depth, basal dates, soil chemistry, peatland class) at the global scale. These data, however, are available at the local scale. Although a comprehensive global database on peatlands probably lags behind similar data on more economically important ecosystems such as forests, grasslands, and croplands, a large amount of field data has been collected over the past several decades. A few efforts have been made to map peatlands at large scales, but the existing data either have not been assembled into a single, publicly accessible geospatial database or do not depict peatlands at the level of detail needed by the Earth System Science community. A global peatland database would contribute to advances in a number of research fields such as hydrology, vegetation and ecosystem modeling, permafrost modeling, and earth system modeling. We present a Web 2.0 approach that uses state-of-the-art webserver and innovative online mapping technologies and is designed to create such a global database through ‘crowd-sourcing’. Primary functions of the online system include form-driven textual user input of peatland research metadata, spatial data input of peatland areas via a mapping interface, database editing and querying capabilities, as well as advanced visualization and data analysis tools. WikiPEATia provides an integrated information technology platform for assembling, integrating, and posting peatland-related geospatial datasets, which facilitates and encourages research community involvement. A successful effort will make existing peatland data much more useful to the research community, and will help to identify significant data gaps.

  10. Cultural macroevolution on neighbor graphs: vertical and horizontal transmission among Western North American Indian societies.

    PubMed

    Towner, Mary C; Grote, Mark N; Venti, Jay; Borgerhoff Mulder, Monique

    2012-09-01

    What are the driving forces of cultural macroevolution, the evolution of cultural traits that characterize societies or populations? This question has engaged anthropologists for more than a century, with little consensus regarding the answer. We develop and fit autologistic models, built upon both spatial and linguistic neighbor graphs, for 44 cultural traits of 172 societies in the Western North American Indian (WNAI) database. For each trait, we compare models including or excluding one or both neighbor graphs, and for the majority of traits we find strong evidence in favor of a model which uses both spatial and linguistic neighbors to predict a trait's distribution. Our results run counter to the assertion that cultural trait distributions can be explained largely by the transmission of traits from parent to daughter populations and are thus best analyzed with phylogenies. In contrast, we show that vertical and horizontal transmission pathways can be incorporated in a single model, that both transmission modes may indeed operate on the same trait, and that for most traits in the WNAI database, accounting for only one mode of transmission would result in a loss of information.
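
    As a rough, hedged illustration of the modeling idea (not the authors' implementation), the sketch below approximates an autologistic model in pseudo-likelihood style: a logistic regression of a binary trait on two autocovariates, the mean trait value among spatial neighbors and among linguistic neighbors. The tiny society, trait, and neighbor data are invented.

    ```python
    # Pseudo-likelihood-style approximation of an autologistic model with two
    # neighbor graphs; positive fitted coefficients suggest neighbor influence.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    trait = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # trait presence per society (invented)
    spatial_nbrs = [[1, 2], [0, 2], [0, 1, 3], [2, 4], [3, 5], [4, 6], [5, 7], [6]]
    linguistic_nbrs = [[1], [0, 4], [3], [2], [1, 6], [7], [4], [5]]

    def autocov(neighbors):
        # Mean trait value among each society's neighbors (0 if no neighbors).
        return np.array([trait[nb].mean() if nb else 0.0 for nb in neighbors])

    X = np.column_stack([autocov(spatial_nbrs), autocov(linguistic_nbrs)])
    model = LogisticRegression().fit(X, trait)
    print(model.coef_)
    ```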

  11. Application of GIS in public health in India: A literature-based review, analysis, and recommendations.

    PubMed

    Ruiz, Marilyn O'Hara; Sharma, Arun Kumar

    2016-01-01

    The implementation of geospatial technologies and methods for improving health has become widespread in many nations, but India's adoption of these approaches has been fairly slow. With a large population, ongoing public health challenges, and a growing economy with an emphasis on innovative technologies, the adoption of spatial approaches to disease surveillance, spatial epidemiology, and implementation of health policies in India has great potential for both success and efficacy. Through our evaluation of scientific papers selected through a structured key-phrase search of the National Center for Biotechnology Information's PubMed database, we found that current spatial approaches to health research in India are fairly descriptive in nature, but the use of more complex models and statistics is increasing. The institutional home of the authors is skewed regionally, with Delhi and South India more likely to show evidence of use. The need for scientists engaged in spatial health analysis to first digitize basic data, such as maps of road networks, hydrological features, and land use, is a strong impediment to efficiency, and their work would certainly advance more quickly without this requirement.

  12. EMAP and EMAGE: a framework for understanding spatially organized data.

    PubMed

    Baldock, Richard A; Bard, Jonathan B L; Burger, Albert; Burton, Nicolas; Christiansen, Jeff; Feng, Guanjie; Hill, Bill; Houghton, Derek; Kaufman, Matthew; Rao, Jianguo; Sharpe, James; Ross, Allyson; Stevenson, Peter; Venkataraman, Shanmugasundaram; Waterhouse, Andrew; Yang, Yiya; Davidson, Duncan R

    2003-01-01

    The Edinburgh MouseAtlas Project (EMAP) is a time-series of mouse-embryo volumetric models. The models provide a context-free spatial framework onto which structural interpretations and experimental data can be mapped. This enables collation, comparison, and query of complex spatial patterns with respect to each other and with respect to known or hypothesized structure. The atlas also includes a time-dependent anatomical ontology and mapping between the ontology and the spatial models in the form of delineated anatomical regions or tissues. The models provide a natural, graphical context for browsing and visualizing complex data. The Edinburgh Mouse Atlas Gene-Expression Database (EMAGE) is one of the first applications of the EMAP framework and provides a spatially mapped gene-expression database with associated tools for data mapping, submission, and query. In this article, we describe the underlying principles of the Atlas and the gene-expression database, and provide a practical introduction to the use of the EMAP and EMAGE tools, including use of new techniques for whole body gene-expression data capture and mapping.

  13. Spatial Digital Database for the Geologic Map of Oregon

    USGS Publications Warehouse

    Walker, George W.; MacLeod, Norman S.; Miller, Robert J.; Raines, Gary L.; Connors, Katherine A.

    2003-01-01

    Introduction This report describes and makes available a geologic digital spatial database (orgeo) representing the geologic map of Oregon (Walker and MacLeod, 1991). The original paper publication was printed as a single map sheet at a scale of 1:500,000, accompanied by a second sheet containing map unit descriptions and ancillary data. A digital version of the Walker and MacLeod (1991) map was included in Raines and others (1996). The dataset provided by this open-file report supersedes the earlier published digital version (Raines and others, 1996). This digital spatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information for use in spatial analysis in a geographic information system (GIS). This database can be queried in many ways to produce a variety of geologic maps. This database is not meant to be used or displayed at any scale larger than 1:500,000 (for example, 1:100,000). This report describes the methods used to convert the geologic map data into a digital format, describes the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. Scanned images of the printed map (Walker and MacLeod, 1991), their correlation of map units, and their explanation of map symbols are also available for download.

  14. Spatial cyberinfrastructures, ontologies, and the humanities

    PubMed Central

    Sieber, Renee E.; Wellen, Christopher C.; Jin, Yuan

    2011-01-01

    We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success. PMID:21444819

  15. An Algorithm of Association Rule Mining for Microbial Energy Prospection

    PubMed Central

    Shaheen, Muhammad; Shahbaz, Muhammad

    2017-01-01

    The presence of hydrocarbons beneath the earth's surface produces some microbiological anomalies in soils and sediments. The detection of such microbial populations involves purely biochemical processes which are specialized, expensive and time consuming. This paper proposes a new algorithm for context-based association rule mining on non-spatial data. The algorithm is a modified form of an already developed algorithm that was designed for spatial databases only. The algorithm is applied to mine context-based association rules on a microbial database to extract interesting and useful associations of microbial attributes with the existence of hydrocarbon reserves. The surface and soil manifestations caused by the presence of hydrocarbon-oxidizing microbes are selected from the existing literature and stored in a shared database. The algorithm is applied to this database to generate direct and indirect associations among the stored microbial indicators. These associations are then correlated with the probability of hydrocarbon existence. The numerical evaluation shows better accuracy for non-spatial data compared to conventional algorithms at generating reliable and robust rules. PMID:28393846
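
    The generic core of the rule-mining step described above is the computation of support and confidence for candidate rules linking indicator sets to the presence of a reserve. The sketch below shows only that generic core, not the paper's context-based algorithm, and the indicator records are invented.

    ```python
    # Antecedent support and rule confidence for: antecedent -> hydrocarbon reserve.
    # Records pair a set of observed microbial indicators with a reserve flag.
    records = [
        ({"methanotrophs", "low_pH"}, True),
        ({"methanotrophs", "sulfate_reducers"}, True),
        ({"low_pH"}, False),
        ({"methanotrophs", "low_pH", "sulfate_reducers"}, True),
        ({"sulfate_reducers"}, False),
    ]

    def rule_stats(antecedent):
        """Return (antecedent support, rule confidence) for: antecedent -> reserve present."""
        matches = [reserve for indicators, reserve in records if antecedent <= indicators]
        support = len(matches) / len(records)
        confidence = sum(matches) / len(matches) if matches else 0.0
        return support, confidence

    for antecedent in [{"methanotrophs"}, {"methanotrophs", "low_pH"}]:
        print(sorted(antecedent), rule_stats(antecedent))
    ```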

  16. Managing Data in a GIS Environment

    NASA Technical Reports Server (NTRS)

    Beltran, Maria; Yiasemis, Haris

    1997-01-01

    A Geographic Information System (GIS) is a computer-based system that enables capture, modeling, manipulation, retrieval, analysis and presentation of geographically referenced data. A GIS operates in a dynamic environment of spatial and temporal information. This information is held in a database like any other information system, but performance is more of an issue for a geographic database than a traditional database due to the nature of the data. What distinguishes a GIS from other information systems is the spatial and temporal dimensions of the data and the volume of data (several gigabytes). Most traditional information systems are usually based around tables and textual reports, whereas a GIS requires the use of cartographic forms and other visualization techniques. Much of the data can be represented using computer graphics, but a GIS is not a graphics database. A graphical system is concerned with the manipulation and presentation of graphical objects, whereas a GIS handles geographic objects that have not only spatial dimensions but also non-visual (i.e., attribute) components. Furthermore, the nature of the data on which a GIS operates makes the traditional relational database approach inadequate for retrieving data and answering queries that reference spatial data. The purpose of this paper is to describe the efficiency issues behind storage and retrieval of data within a GIS database. Section 2 gives a general background on GIS, and describes the issues involved in custom vs. commercial and hybrid vs. integrated geographic information systems. Section 3 describes the efficiency issues concerning the management of data within a GIS environment. The paper ends with a summary of these main concerns.

  17. Fossil-Fuel CO2 Emissions Database and Exploration System

    NASA Astrophysics Data System (ADS)

    Krassovski, M.; Boden, T.; Andres, R. J.; Blasing, T. J.

    2012-12-01

    The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) quantifies the release of carbon from fossil-fuel use and cement production at global, regional, and national spatial scales. The CDIAC emission time series estimates are based largely on annual energy statistics published at the national level by the United Nations (UN). CDIAC has developed a relational database to house the collected data and information and a web-based interface to help users worldwide identify, explore and download desired emission data. The available information is divided into two major groups: time series and gridded data. The time series data are offered at global, regional and national scales. Publications containing historical energy statistics make it possible to estimate fossil fuel CO2 emissions back to 1751. Etemad et al. (1991) published a summary compilation that tabulates coal, brown coal, peat, and crude oil production by nation and year. Footnotes in the Etemad et al. (1991) publication extend the energy statistics time series back to 1751. Summary compilations of fossil fuel trade were published by Mitchell (1983, 1992, 1993, 1995). Mitchell's work tabulates solid and liquid fuel imports and exports by nation and year. These pre-1950 production and trade data were digitized and CO2 emission calculations were made following the procedures discussed in Marland and Rotty (1984) and Boden et al. (1995). The gridded data comprise annual and monthly estimates. The annual data are a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2008. The monthly fossil-fuel CO2 emissions estimates for 1950-2008 provided in this database are derived from time series of global, regional, and national fossil-fuel CO2 emissions (Boden et al. 2011), the references therein, and the methodology described in Andres et al. (2011). The data accessible here take these tabular, national, mass-emissions data and distribute them spatially on a one degree latitude by one degree longitude grid. The within-country spatial distribution is achieved through a fixed population distribution as reported in Andres et al. (1996). This presentation introduces the newly built database and web interface, reflecting the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System as well as future plans for expansion.
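
    The within-country distribution step described above spreads each national emissions total over one-degree grid cells in proportion to a fixed population field. The sketch below shows that proportional allocation on a tiny invented grid; the numbers are illustrative only.

    ```python
    # Distribute a national annual emissions total over grid cells in proportion
    # to population; the population grid and national total are invented.
    import numpy as np

    population = np.array([[10_000, 250_000, 5_000],
                           [90_000, 40_000, 0],
                           [1_000, 600_000, 12_000]])  # persons per grid cell
    national_emissions = 120.0                          # million metric tons C per year

    cell_emissions = national_emissions * population / population.sum()
    print(cell_emissions.round(2))
    print("check total:", cell_emissions.sum())         # equals the national total
    ```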

  18. The distribution of soil phosphorus for global biogeochemical modeling

    DOE PAGES

    Yang, Xiaojuan; Post, Wilfred M.; Thornton, Peter E.; ...

    2013-04-16

    Phosphorus (P) is a major element required for biological activity in terrestrial ecosystems. Although the total P content in most soils can be large, only a small fraction is available or in an organic form for biological utilization because it is bound either in incompletely weathered mineral particles, adsorbed on mineral surfaces, or, over the time of soil formation, made unavailable by secondary mineral formation (occluded). In order to adequately represent phosphorus availability in global biogeochemistry–climate models, a representation of the amount and form of P in soils globally is required. We develop an approach that builds on existing knowledge of soil P processes and databases of parent material and soil P measurements to provide spatially explicit estimates of different forms of naturally occurring soil P on the global scale. We assembled data on the various forms of phosphorus in soils globally, chronosequence information, and several global spatial databases to develop a map of total soil P and the distribution among mineral-bound, labile, organic, occluded, and secondary P forms in soils globally. The amount of P, to 50 cm soil depth, in soil labile, organic, occluded, and secondary pools is 3.6 ± 3, 8.6 ± 6, 12.2 ± 8, and 3.2 ± 2 Pg P (Petagrams of P, 1 Pg = 1 × 10¹⁵ g), respectively. The amount in soil mineral particles to the same depth is estimated at 13.0 ± 8 Pg P for a global soil total of 40.6 ± 18 Pg P. The large uncertainty in our estimates reflects our limited understanding of the processes controlling soil P transformations during pedogenesis and a deficiency in the number of soil P measurements. In spite of the large uncertainty, the estimated global spatial variation and distribution of different soil P forms presented in this study will be useful for global biogeochemistry models that include P as a limiting element in biological production by providing initial estimates of the available soil P for plant uptake and microbial utilization.

  19. On the morphometry of terrestrial shield volcanoes

    NASA Astrophysics Data System (ADS)

    Grosse, Pablo; Kervyn, Matthieu

    2016-04-01

    Shield volcanoes are described as low-angle edifices that have convex-up topographic profiles and are built primarily by the accumulation of lava flows. This generic view of shields' morphology is based on a limited number of monogenetic shields from Iceland and Mexico, and a small set of large oceanic islands (Hawaii, Galapagos). Here, the morphometry of over 150 monogenetic and polygenetic shield volcanoes, identified in the Global Volcanism Network database, is analysed quantitatively from 90-meter resolution DEMs using the MORVOLC algorithm. An additional set of 20 volcanoes identified as stratovolcanoes but having low slopes and being dominantly built up by the accumulation of lava flows is documented for comparison. Results show that there is a large variation in shield size (volumes range from 0.1 to >1000 km3), profile shape (height/basal width ratios range from 0.01 to 0.1), flank slope gradients, elongation and summit truncation. Correlation and principal component analysis of the obtained quantitative database enable the identification of four key morphometric descriptors: size, steepness, plan shape and truncation. Using these descriptors in a clustering analysis, a new classification scheme is proposed. It highlights the control of the magma feeding system - either central, along a linear structure, or spatially diffuse - on the resulting shield volcano morphology. Genetic relationships and evolutionary trends between contrasted morphological end-members can be highlighted within this new scheme. An additional finding is that the Galapagos-type morphology, with a deep central caldera and steep upper flanks, is also characteristic of other shields. A series of large oceanic shields have slopes systematically much steeper than the low gradients (<4-8°) generally attributed to large Hawaiian-type shields. Finally, the continuum of morphologies from flat shields to steeper complex volcanic constructs considered as stratovolcanoes calls for a revision of this oversimplified distinction, taking into account the lava/pyroclast ratio and the spatial distribution of eruptive vents.

  20. EPA Tribal Areas (4 of 4): Alaska Native Allotments

    EPA Pesticide Factsheets

    This dataset is a spatial representation of the Public Land Survey System (PLSS) in Alaska, generated from land survey records. The data represents a seamless spatial portrayal of native allotment land parcels, their legal descriptions, corner positioning and markings, and survey measurements. This data is intended for mapping purposes only and is not a substitute or replacement for the legal land survey records or other legal documents.Measurement and attribute data are collected from survey records using data entry screens into a relational database. The database design is based upon the FGDC Cadastral Content Data Standard. Corner positions are derived by geodetic calculations using measurement records. Closure and edgematching are applied to produce a seamless dataset. The resultant features do not preserve the original geometry of survey measurements, but the record measurements are reported as attributes. Additional boundary data are derived by spatial capture, protraction and GIS processing. The spatial features are stored and managed within the relational database, with active links to the represented measurement and attribute data.

  1. A category adjustment approach to memory for spatial location in natural scenes.

    PubMed

    Holden, Mark P; Curby, Kim M; Newcombe, Nora S; Shipley, Thomas F

    2010-05-01

    Memories for spatial locations often show systematic errors toward the central value of the surrounding region. This bias has been explained using a Bayesian model in which fine-grained and categorical information are combined (Huttenlocher, Hedges, & Duncan, 1991). However, experiments testing this model have largely used locations contained in simple geometric shapes. Use of this paradigm raises 2 issues. First, do results generalize to the complex natural world? Second, what types of information might be used to segment complex spaces into constituent categories? Experiment 1 addressed the 1st question by showing a bias toward prototypical values in memory for spatial locations in complex natural scenes. Experiment 2 addressed the 2nd question by manipulating the availability of basic visual cues (using color negatives) or of semantic information about the scene (using inverted images). Error patterns suggest that both perceptual and conceptual information are involved in segmentation. The possible neurological foundations of location memory of this kind are discussed. PsycINFO Database Record (c) 2010 APA, all rights reserved.
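
    The category adjustment model referred to above combines a fine-grained memory trace with the category prototype, weighting each by its reliability. A worked toy example of that weighting is sketched below; all numbers are illustrative and not drawn from the experiments.

    ```python
    # Precision-weighted blend of fine-grained memory and category prototype:
    # as memory noise grows, the estimate drifts toward the prototype.
    def adjusted_estimate(fine_grained, prototype, var_memory, var_category):
        w = var_category / (var_memory + var_category)  # weight on the memory trace
        return w * fine_grained + (1 - w) * prototype

    true_location = 30.0   # e.g., a location coordinate within a region
    prototype = 45.0       # central (prototypical) value of the region
    for var_memory in (1.0, 10.0, 100.0):
        est = adjusted_estimate(true_location, prototype, var_memory, var_category=25.0)
        print(var_memory, round(est, 1))
    ```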

  2. Pharmit: interactive exploration of chemical space.

    PubMed

    Sunseri, Jocelyn; Koes, David Ryan

    2016-07-08

    Pharmit (http://pharmit.csb.pitt.edu) provides an online, interactive environment for the virtual screening of large compound databases using pharmacophores, molecular shape and energy minimization. Users can import, create and edit virtual screening queries in an interactive browser-based interface. Queries are specified in terms of a pharmacophore, a spatial arrangement of the essential features of an interaction, and molecular shape. Search results can be further ranked and filtered using energy minimization. In addition to a number of pre-built databases of popular compound libraries, users may submit their own compound libraries for screening. Pharmit uses state-of-the-art sub-linear algorithms to provide interactive screening of millions of compounds. Queries typically take a few seconds to a few minutes depending on their complexity. This allows users to iteratively refine their search during a single session. The easy access to large chemical datasets provided by Pharmit simplifies and accelerates structure-based drug design. Pharmit is available under a dual BSD/GPL open-source license. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Terrestrial Sediments of the Earth: Development of a Global Unconsolidated Sediments Map Database (GUM)

    NASA Astrophysics Data System (ADS)

    Börker, J.; Hartmann, J.; Amann, T.; Romero-Mujalli, G.

    2018-04-01

    Mapped unconsolidated sediments cover half of the global land surface. They are of considerable importance for many Earth surface processes like weathering, hydrological fluxes or biogeochemical cycles. Ignoring their characteristics or spatial extent may lead to misinterpretations in Earth System studies. Therefore, a new Global Unconsolidated Sediments Map database (GUM) was compiled, using regional maps specifically representing unconsolidated and Quaternary sediments. The new GUM database provides insights into the regional distribution of unconsolidated sediments and their properties. The GUM comprises 911,551 polygons and describes not only sediment types and subtypes, but also parameters like grain size, mineralogy, age and thickness where available. Previous global lithological maps or databases lacked detail for reported unconsolidated sediment areas or missed large areas, and reported a global coverage of 25 to 30%, considering the ice-free land area. Here, alluvial sediments cover about 23% of the mapped total ice-free area, followed by aeolian sediments (~21%), glacial sediments (~20%), and colluvial sediments (~16%). A specific focus during the creation of the database was on the distribution of loess deposits, since loess is highly reactive and relevant to understanding geochemical cycles related to dust deposition and weathering processes. An additional layer compiling pyroclastic sediments is included, which merges consolidated and unconsolidated pyroclastic sediments. The compilation shows latitudinal abundances of sediment types related to the climate of the past. The GUM database is available at the PANGAEA database (https://doi.org/10.1594/PANGAEA.884822).

  4. NLCD - MODIS albedo data

    EPA Pesticide Factsheets

    The NLCD-MODIS land cover-albedo database integrates high-quality MODIS albedo observations with areas of homogeneous land cover from NLCD. The spatial resolution (pixel size) of the database is 480 m x 480 m, aligned to the standardized USGS Albers Equal-Area projection. The spatial extent of the database is the continental United States. This dataset is associated with the following publication: Wickham, J., C.A. Barnes, and T. Wade. Combining NLCD and MODIS to Create a Land Cover-Albedo Dataset for the Continental United States. REMOTE SENSING OF ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 170(0): 143-153, (2015).

  5. An Investigation of the Fine Spatial Structure of Meteor Streams Using the Relational Database "Meteor"

    NASA Astrophysics Data System (ADS)

    Karpov, A. V.; Yumagulov, E. Z.

    2003-05-01

    We have restored and ordered the archive of meteor observations carried out with a meteor radar complex "KGU-M5" since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.

  6. Identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis by using the Delphi Technique

    NASA Astrophysics Data System (ADS)

    Halim, N. Z. A.; Sulaiman, S. A.; Talib, K.; Ng, E. G.

    2018-02-01

    This paper explains the process carried out in identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis. The research was initially a part of a larger research exercise to identify the significance of NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of NDCDB from the technical relevance standpoint. Three statements describing the relevant features of NDCDB for spatial analysis were established after three rounds of consensus building. These highlight the NDCDB's characteristics, such as its spatial accuracy, functions, and criteria as a facilitating tool for spatial analysis. By recognising the relevant features of NDCDB for spatial analysis in this study, practical applications of NDCDB for various analyses and purposes can be more widely implemented.

  7. Object classification and outliers analysis in the forthcoming Gaia mission

    NASA Astrophysics Data System (ADS)

    Ordóñez-Blanco, D.; Arcay, B.; Dafonte, C.; Manteiga, M.; Ulla, A.

    2010-12-01

    Astrophysics is evolving towards the rational optimization of costly observational material through the intelligent exploitation of large astronomical databases from both ground-based telescopes and space mission archives. However, there has been relatively little advance in the development of the highly scalable data exploitation and analysis tools needed to generate scientific returns from these large and expensively obtained datasets. Among the upcoming projects of astronomical instrumentation, Gaia is the next cornerstone ESA mission. The Gaia survey foresees the creation of a data archive and its future exploitation with automated or semi-automated analysis tools. This contribution reviews some of the work being developed by the Gaia Data Processing and Analysis Consortium for object classification and the analysis of outliers in the forthcoming mission.

  8. Detecting natural occlusion boundaries using local cues

    PubMed Central

    DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.

    2012-01-01

    Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731
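
    A rough sketch of the kind of pipeline the abstract describes, rectified Gabor filter-bank responses fed to a linear classifier and to a small neural network, is given below. It uses synthetic step-edge versus texture patches rather than the authors' occlusion database, pools filter energy over the whole patch, and uses illustrative frequencies and network sizes, so it mirrors only the general approach, not the published models.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Rectified, patch-pooled responses of a small multiscale Gabor filter bank."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(patch, frequency=f, theta=theta)
            energy = np.sqrt(real ** 2 + imag ** 2)   # rectified filter output
            feats.append(energy.mean())               # pool over the whole patch
    return np.array(feats)

def make_patch(occlusion, size=32):
    """Synthetic stand-in: 'occlusion' patches get a luminance step at a random
    orientation, 'surface' patches are plain textured noise (illustration only)."""
    patch = rng.normal(0.0, 0.2, (size, size))
    if occlusion:
        theta = rng.uniform(0, np.pi)
        xx, yy = np.meshgrid(np.arange(size), np.arange(size))
        side = np.cos(theta) * (xx - size / 2) + np.sin(theta) * (yy - size / 2)
        patch[side > 0] += 1.0
    return patch

X = np.array([gabor_features(make_patch(occlusion=(label == 1)))
              for label in (0, 1) for _ in range(100)])
y = np.array([label for label in (0, 1) for _ in range(100)])

# A linear classifier versus a small network that combines the filter outputs.
linear = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X[::2], y[::2])
print("linear test accuracy :", linear.score(X[1::2], y[1::2]))
print("network test accuracy:", net.score(X[1::2], y[1::2]))
```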

  9. Wave-driven spatial and temporal variability in sea-floor sediment mobility in the Monterey Bay, Cordell Bank, and Gulf of the Farallones National Marine Sanctuaries

    USGS Publications Warehouse

    Storlazzi, Curt D.; Reid, Jane A.; Golden, Nadine E.

    2007-01-01

    Wind and wave patterns affect many aspects of the geomorphic evolution of continental shelves and shorelines. Although our understanding of the processes controlling sediment suspension on continental shelves has improved over the past decade, our ability to predict sediment mobility over large spatial and temporal scales remains limited. The deployment of robust operational buoys along the U.S. West Coast in the early 1980s has provided large quantities of high-resolution oceanographic and meteorologic data. By 2006, these data sets were long enough to clearly identify long-term trends and compute statistically significant probability estimates of wave and wind behavior during annual and interannual climatic cycles (that is, El Niño and La Niña). Wave-induced sediment mobility on the shelf and upper slope off central California was modeled using synthesized oceanographic and meteorologic data as boundary input for the Delft SWAN model, sea-floor grain-size data provided by the usSEABED database, and regional bathymetry. Differences in waves (heights, periods, and directions) and winds (speeds and directions) between El Niño and La Niña months cause temporal and spatial variations in peak wave-induced bed shear stresses. These variations, in conjunction with spatially heterogeneous unconsolidated sea-floor sedimentary cover, result in predicted sediment mobility that varies widely in both time and space. These findings indicate that these factors have significant consequences for both geological and biological processes.
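
    The quantity at the heart of such studies, the wave-induced bed shear stress, can be estimated from wave height, period and water depth using linear wave theory and a wave friction factor. The sketch below uses a Swart-type friction-factor parameterization and illustrative grain-size and wave values; it is a generic calculation, not the SWAN-based workflow used in the study.

```python
import numpy as np

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def wavenumber(T, h):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h) by iteration."""
    omega = 2 * np.pi / T
    k = omega ** 2 / G                        # deep-water first guess
    for _ in range(50):
        k = omega ** 2 / (G * np.tanh(k * h))
    return k

def bed_shear_stress(H, T, h, d50=2.5e-4):
    """Wave-induced bed shear stress (Pa) from wave height H (m), period T (s),
    depth h (m) and grain size d50 (m), using a Swart-type friction factor."""
    k = wavenumber(T, h)
    ub = np.pi * H / (T * np.sinh(k * h))     # near-bed orbital velocity amplitude
    ab = ub * T / (2 * np.pi)                 # orbital excursion amplitude
    ks = 2.5 * d50                            # Nikuradse roughness height
    # Swart friction factor, capped for very small excursions.
    fw = np.exp(5.213 * (ks / ab) ** 0.194 - 5.977) if ab > ks else 0.3
    return 0.5 * RHO * fw * ub ** 2

# Example: a 3 m, 12 s swell over 30 m of water on a fine-sand shelf.
print(round(bed_shear_stress(H=3.0, T=12.0, h=30.0), 2), "Pa")
```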

  10. The open black box: The role of the end-user in GIS integration

    USGS Publications Warehouse

    Poore, B.S.

    2003-01-01

    Formalist theories of knowledge that underpin GIS scholarship on integration neglect the importance and creativity of end-users in knowledge construction. This has practical consequences for the success of large distributed databases that contribute to spatial-data infrastructures. Spatial-data infrastructures depend on participation at local levels, such as counties and watersheds, and they must be developed to support feedback from local users. Looking carefully at the work of scientists in a watershed in Puget Sound, Washington, USA during the salmon crisis reveals that the work of these end-users articulates different worlds of knowledge. This view of the user is consonant with recent work in science and technology studies and research into computer-supported cooperative work. GIS theory will be enhanced when it makes room for these users and supports their practical work. © Canadian Association of Geographers.

  11. Geologic database for digital geology of California, Nevada, and Utah: an application of the North American Data Model

    USGS Publications Warehouse

    Bedford, David R.; Ludington, Steve; Nutt, Constance M.; Stone, Paul A.; Miller, David M.; Miller, Robert J.; Wagner, David L.; Saucedo, George J.

    2003-01-01

    The USGS is creating an integrated national database for digital state geologic maps that includes stratigraphic, age, and lithologic information. The majority of the conterminous 48 states have digital geologic base maps available, often at scales of 1:500,000. This product is a prototype, and is intended to demonstrate the types of derivative maps that will be possible with the national integrated database. This database permits the creation of a number of types of maps via simple or sophisticated queries, maps that may be useful in a number of areas, including mineral-resource assessment, environmental assessment, and regional tectonic evolution. This database is distributed with three main parts: a Microsoft Access 2000 database containing geologic map attribute data, an Arc/Info (Environmental Systems Research Institute, Redlands, California) Export format file containing points representing designation of stratigraphic regions for the Geologic Map of Utah, and an ArcView 3.2 (Environmental Systems Research Institute, Redlands, California) project containing scripts and dialogs for performing a series of generalization and mineral resource queries. IMPORTANT NOTE: Spatial data for the respective state geologic maps is not distributed with this report. The digital state geologic maps for the states involved in this report are separate products, and two of them are produced by individual state agencies, which may be legally and/or financially responsible for this data. However, the spatial datasets for maps discussed in this report are available to the public. Questions regarding the distribution, sale, and use of individual state geologic maps should be sent to the respective state agency. We do provide suggestions for obtaining and formatting the spatial data to make it compatible with data in this report. See section ‘Obtaining and Formatting Spatial Data’ in the PDF version of the report.

  12. New Constraints on Terrestrial Surface-Atmosphere Fluxes of Gaseous Elemental Mercury Using a Global Database.

    PubMed

    Agnan, Yannick; Le Dantec, Théo; Moore, Christopher W; Edwards, Grant C; Obrist, Daniel

    2016-01-19

    Despite 30 years of study, gaseous elemental mercury (Hg(0)) exchange magnitude and controls between terrestrial surfaces and the atmosphere still remain uncertain. We compiled data from 132 studies, including 1290 reported fluxes from more than 200,000 individual measurements, into a database to statistically examine flux magnitudes and controls. We found that fluxes were unevenly distributed, both spatially and temporally, with strong biases toward Hg-enriched sites, daytime and summertime measurements. Fluxes at Hg-enriched sites were positively correlated with substrate concentrations, but this was absent at background sites. Median fluxes over litter- and snow-covered soils were lower than over bare soils, and chamber measurements showed higher emission compared to micrometeorological measurements. Due to low spatial extent, estimated emissions from Hg-enriched areas (217 Mg a⁻¹) were lower than previous estimates. Globally, areas with enhanced atmospheric Hg(0) levels (particularly East Asia) showed an emerging importance of Hg(0) emissions accounting for half of the total global emissions estimated at 607 Mg a⁻¹, although with a large uncertainty range (-513 to 1353 Mg a⁻¹ [range of 37.5th and 62.5th percentiles]). The largest uncertainties in Hg(0) fluxes stem from forests (-513 to 1353 Mg a⁻¹ [range of 37.5th and 62.5th percentiles]), largely driven by a shortage of whole-ecosystem fluxes and uncertain contributions of leaf-atmosphere exchanges, questioning to what degree ecosystems are net sinks or sources of atmospheric Hg(0).

  13. GIS model for identifying urban areas vulnerable to noise pollution: case study

    NASA Astrophysics Data System (ADS)

    Bilaşco, Ştefan; Govor, Corina; Roşca, Sanda; Vescan, Iuliu; Filip, Sorin; Fodorean, Ioan

    2017-04-01

    The unprecedented expansion of national car ownership over the last few years has been driven by economic growth and the need for the population and economic agents to reduce travel time in progressively expanding large urban centres. This has led to an increase in the level of road noise and a stronger impact on the quality of the environment. Noise pollution generated by means of transport represents one of the most important types of pollution, with negative effects on a population's health in large urban areas. As a consequence, tolerable limits of sound intensity for the comfort of inhabitants have been determined worldwide, and the generation of sound maps has been made compulsory in order to identify vulnerable zones and to make recommendations on how to decrease the negative impact on humans. In this context, the present study aims at presenting a GIS spatial analysis model-based methodology for identifying and mapping zones vulnerable to noise pollution. The developed GIS model is based on the analysis of all the components influencing sound propagation, represented as vector databases (points of sound intensity measurements, buildings, land use, transport infrastructure), raster databases (DEM), and numerical databases (wind direction and speed, sound intensity). In addition, the hourly changes (for representative hours) were analysed to identify the hotspots characterised by the major traffic flows specific to rush hours. The validated results of the model are GIS databases and maps that the local public administration can use as a source of information and in the decision-making process.
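
    The propagation component in models of this kind typically starts from geometric spreading of a point source, which the other layers (buildings, land use, wind) then modify. The relation below is that generic free-field term, shown for orientation only; the paper's exact propagation formulation is not reproduced here.

```latex
% Free-field sound level at distance r from a point source, given a reference
% level L_0 measured at distance r_0 (barriers, ground and wind not included):
\[
  L(r) = L_0 - 20 \log_{10}\!\left(\frac{r}{r_0}\right)
\]
```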

  14. Geolocation of man-made reservoirs across terrains of varying complexity using GIS

    NASA Astrophysics Data System (ADS)

    Mixon, David M.; Kinner, David A.; Stallard, Robert F.; Syvitski, James P. M.

    2008-10-01

    The Reservoir Sedimentation Survey Information System (RESIS) is one of the world's most comprehensive databases of reservoir sedimentation rates, comprising nearly 6000 surveys for 1819 reservoirs across the continental United States. Sediment surveys in the database date from 1904 to 1999, though more than 95% of surveys were entered prior to 1980, making RESIS largely a historical database. The use of this database for large-scale studies has been limited by the lack of precise coordinates for the reservoirs. Many of the reservoirs are relatively small structures and do not appear on current USGS topographic maps. Others have been renamed or have only approximate (i.e. township and range) coordinates. This paper presents a method scripted in ESRI's ARC Macro Language (AML) to locate the reservoirs on digital elevation models using information available in RESIS. The script also delineates the contributing watersheds and compiles several hydrologically important parameters for each reservoir. Evaluation of the method indicates that, for watersheds larger than 5 km2, the correct outlet is identified over 80% of the time. The importance of identifying the watershed outlet correctly depends on the application. Our intent is to collect spatial data for watersheds across the continental United States and describe the land use, soils, and topography for each reservoir's watershed. Because of local landscape similarity in these properties, we show that choosing the incorrect watershed does not necessarily mean that the watershed characteristics will be misrepresented. We present a measure termed terrain complexity and examine its relationship to geolocation success rate and its influence on the similarity of nearby watersheds.
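
    The published workflow is an AML script, but one of its core steps, moving an approximate reservoir coordinate onto the modelled stream network, can be sketched in a few lines. The example below snaps a point to the nearest cell whose flow accumulation exceeds a threshold; the window size and threshold are illustrative, not the values used in the paper.

```python
import numpy as np

def snap_to_stream(row, col, flow_acc, search_radius=5, min_accum=100):
    """Snap an approximate outlet (row, col) to the nearest cell within a small
    window whose flow accumulation exceeds min_accum (illustrative thresholds)."""
    r0, r1 = max(0, row - search_radius), min(flow_acc.shape[0], row + search_radius + 1)
    c0, c1 = max(0, col - search_radius), min(flow_acc.shape[1], col + search_radius + 1)
    window = flow_acc[r0:r1, c0:c1]
    rows, cols = np.nonzero(window >= min_accum)
    if rows.size == 0:
        return row, col                          # no stream cell nearby: keep the point
    d2 = (rows + r0 - row) ** 2 + (cols + c0 - col) ** 2
    i = int(np.argmin(d2))                       # nearest qualifying cell
    return rows[i] + r0, cols[i] + c0

# Toy flow-accumulation grid with a "stream" running down one column.
acc = np.ones((50, 50))
acc[:, 30] = np.arange(50) * 50                  # accumulation grows downstream
print(snap_to_stream(24, 27, acc))               # approximate point snaps to (24, 30)
```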

  15. Very large hail occurrence in Poland from 2007 to 2015

    NASA Astrophysics Data System (ADS)

    Pilorz, Wojciech

    2015-10-01

    Very large hail is defined as hail with a stone diameter of 5 cm or greater. The phenomenon is rare, but its significant consequences, not only for agriculture but also for automobiles, households and people outdoors, make it essential to examine. Hail occurrence is closely connected with the frequency and type of storms; the storm type most likely to produce large hail is the supercell. The geographical distribution of hailstorms was compared with the geographical distribution of storms in Poland, and similarities were found. The area with the largest number of storms is southeastern Poland, and the analysed European Severe Weather Database (ESWD) data showed that most very large hail reports occurred in this part of the country, probably because tropical air masses persist longest over southeastern Poland. The spatial distribution analysis also shows more hail incidents over the Upper Silesia, Lesser Poland, Subcarpathia and Świętokrzyskie regions. The information source on hail occurrence was the ESWD, an open database where anyone can add a report and retrieve reports that meet given search criteria. In total, 69 hailstorms in the period 2007-2015 were examined; they produced 121 very large hail reports, with a large disproportion in the number of hailstorms and hail reports between individual years. The very large hail season in Poland begins in May and ends in September, peaking in July. Most hail occurs between 12:00 and 17:00 UTC, but there were some cases of very large (one extremely large) hail at night and in the early morning hours. Although very large hail is a spectacular phenomenon, its local character implies a potentially high rate of unreported events, which is the most significant problem in hail research.

  16. A database of georeferenced nutrient chemistry data for mountain lakes of the Western United States

    PubMed Central

    Williams, Jason; Labou, Stephanie G.

    2017-01-01

    Human activities have increased atmospheric nitrogen and phosphorus deposition rates relative to pre-industrial background. In the Western U.S., anthropogenic nutrient deposition has increased nutrient concentrations and stimulated algal growth in at least some remote mountain lakes. The Georeferenced Lake Nutrient Chemistry (GLNC) Database was constructed to create a spatially-extensive lake chemistry database needed to assess atmospheric nutrient deposition effects on Western U.S. mountain lakes. The database includes nitrogen and phosphorus water chemistry data spanning 1964–2015, with 148,336 chemistry results from 51,048 samples collected across 3,602 lakes in the Western U.S. Data were obtained from public databases, government agencies, scientific literature, and researchers, and were formatted into a consistent table structure. All data are georeferenced to a modified version of the National Hydrography Dataset Plus version 2. The database is transparent and reproducible; R code and input files used to format data are provided in an appendix. The database will likely be useful to those assessing spatial patterns of lake nutrient chemistry associated with atmospheric deposition or other environmental stressors. PMID:28509907

  17. A framework for cross-observatory volcanological database management

    NASA Astrophysics Data System (ADS)

    Aliotta, Marco Antonio; Amore, Mauro; Cannavò, Flavio; Cassisi, Carmelo; D'Agostino, Marcello; Dolce, Mario; Mastrolia, Andrea; Mangiagli, Salvatore; Messina, Giuseppe; Montalto, Placido; Fabio Pisciotta, Antonino; Prestifilippo, Michele; Rossi, Massimo; Scarpato, Giovanni; Torrisi, Orazio

    2017-04-01

    In recent years it has become clear that a multiparametric approach is the winning strategy for investigating the complex dynamics of volcanic systems. This involves the use of different sensor networks, each one dedicated to the acquisition of particular data useful for research and monitoring. The increasing interest in volcanological phenomena has led to the establishment of different research organizations or observatories, sometimes monitoring the same volcanoes, which acquire large amounts of data from sensor networks for multiparametric monitoring. At INGV we developed a framework, hereinafter called TSDSystem (Time Series Database System), which acquires data streams from several geophysical and geochemical permanent sensor networks (exposed through different data sources such as ASCII, ODBC, URL, etc.), located in the main volcanic areas of Southern Italy, and relates them within a relational database management system. Furthermore, spatial data related to the different datasets are managed using a GIS module for sharing and visualization purposes. The standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them on a common space and time scale. In order to share data between INGV observatories, and also with the Civil Protection authorities, whose activity covers the same volcanic districts, we designed a "Master View" system that, starting from a number of instances of the TSDSystem framework (one for each observatory), makes possible the joint interrogation of data, both temporal and spatial, on instances located in different observatories, through the use of web service technologies (RESTful, SOAP). Similarly, it provides metadata for equipment using standard schemas (such as FDSN StationXML). The "Master View" is also responsible for managing the data policy through a "who owns what" system, which associates viewing/download rights for particular spatial or temporal intervals with specific users or groups.

  18. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response-time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots and compare alternative mitigation strategies.
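
    The partitioning trade-off mentioned above can be made concrete with a small sketch: a large raster is cut into tiles and each tile is assigned to a storage node by round-robin, hash, or range placement. The node count, tile size and md5-based hash are illustrative choices, not a description of any particular DBMS or HDFS deployment.

```python
import hashlib
import numpy as np

def tile_raster(raster, tile=256):
    """Split a large 2-D array into (key, block) pairs; keys identify tiles."""
    for i in range(0, raster.shape[0], tile):
        for j in range(0, raster.shape[1], tile):
            yield (i // tile, j // tile), raster[i:i + tile, j:j + tile]

def round_robin_node(seq_no, n_nodes):
    # Even spread, but spatially adjacent tiles land on different nodes.
    return seq_no % n_nodes

def hash_node(key, n_nodes):
    # Deterministic placement from the tile key; lookups need no catalog scan.
    digest = hashlib.md5(repr(key).encode()).hexdigest()
    return int(digest, 16) % n_nodes

def range_node(key, tile_rows_per_node):
    # Range partitioning keeps neighbouring tiles together, which helps spatial
    # locality but can concentrate "hotspot" regions on a single node.
    return key[0] // tile_rows_per_node

raster = np.zeros((1024, 1024), dtype=np.float32)
for seq, (key, block) in enumerate(tile_raster(raster)):
    print(key, "round-robin:", round_robin_node(seq, 4),
          "hash:", hash_node(key, 4), "range:", range_node(key, 1))
```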

  19. Geologic map of Alaska

    USGS Publications Warehouse

    Wilson, Frederic H.; Hults, Chad P.; Mull, Charles G.; Karl, Susan M.

    2015-12-31

    This Alaska compilation is unique in that it is integrated with a rich database of information provided in the spatial datasets and standalone attribute databases. Within the spatial files every line and polygon is attributed to its original source; the references to these sources are contained in related tables, as well as in stand-alone tables. Additional attributes include typical lithology, geologic setting, and age range for the map units. Also included are tables of radiometric ages.

  20. Representing spatial information in a computational model for network management

    NASA Technical Reports Server (NTRS)

    Blaisdell, James H.; Brownfield, Thomas F.

    1994-01-01

    While currently available relational database management systems (RDBMS) allow inclusion of spatial information in a data model, they lack tools for presenting this information in an easily comprehensible form. Computer-aided design (CAD) software packages provide adequate functions to produce drawings, but still require manual placement of symbols and features. This project has demonstrated a bridge between the data model of an RDBMS and the graphic display of a CAD system. It is shown that the CAD system can be used to control the selection of data with spatial components from the database and then quickly plot that data on a map display. It is shown that the CAD system can be used to extract data from a drawing and then control the insertion of that data into the database. These demonstrations were successful in a test environment that incorporated many features of known working environments, suggesting that the techniques developed could be adapted for practical use.

  1. Octree-based indexing for 3D pointclouds within an Oracle Spatial DBMS

    NASA Astrophysics Data System (ADS)

    Schön, Bianca; Mosa, Abu Saleh Mohammad; Laefer, Debra F.; Bertolotto, Michela

    2013-02-01

    A large proportion of today's digital datasets have a spatial component, the effective storage and management of which pose particular challenges, especially with light detection and ranging (LiDAR), where datasets of even small geographic areas may contain several hundred million points. While in the last decade 2.5-dimensional data were prevalent, true 3-dimensional data are increasingly commonplace via LiDAR. They have gained particular popularity for urban applications including the generation of city-scale maps, baseline data for disaster management, and utility planning. Additionally, LiDAR is commonly used for flood-plain identification, coastal-erosion tracking, and forest biomass mapping. Despite growing data availability, current spatial information systems do not provide suitable full support for the data's true 3D nature. Consequently, one system is needed to store the data and another for its processing, thereby necessitating format transformations. The work presented herein aims at a more cost-effective way of managing 3D LiDAR data that allows for storage and manipulation within a single system by enabling a new index within existing spatial database management technology. An implementation of an octree index for 3D LiDAR data atop Oracle Spatial 11g is presented, along with an evaluation showing up to an eight-fold improvement compared to the native Oracle R-tree index.
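
    The indexing idea itself is easy to sketch: an octree recursively halves a cube in each dimension so that point insertions and box queries only touch a small part of the data. The minimal in-memory version below (hypothetical capacity, synthetic points) illustrates the structure only; it says nothing about how the paper embeds the index inside Oracle Spatial 11g.

```python
import numpy as np
from itertools import product

class OctreeNode:
    """Minimal in-memory octree over 3-D points: a node splits its cube into
    eight children once it holds more than `capacity` points (illustration only)."""
    def __init__(self, center, half, capacity=64):
        self.center = np.asarray(center, dtype=float)
        self.half = float(half)            # half the edge length of this node's cube
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, p):
        p = np.asarray(p, dtype=float)
        if self.children is not None:
            self._child_for(p).insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.capacity:
            self._subdivide()

    def _subdivide(self):
        h = self.half / 2.0
        self.children = [OctreeNode(self.center + h * np.array(sign), h, self.capacity)
                         for sign in product((-1, 1), repeat=3)]
        old, self.points = self.points, []
        for p in old:
            self._child_for(p).insert(p)

    def _child_for(self, p):
        i = 4 * (p[0] > self.center[0]) + 2 * (p[1] > self.center[1]) + (p[2] > self.center[2])
        return self.children[int(i)]

    def range_query(self, lo, hi, out=None):
        """Collect points inside the axis-aligned box [lo, hi]."""
        out = [] if out is None else out
        lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
        node_lo, node_hi = self.center - self.half, self.center + self.half
        if np.any(node_hi < lo) or np.any(node_lo > hi):
            return out                     # the query box misses this node entirely
        if self.children is None:
            out.extend(p for p in self.points if np.all(p >= lo) and np.all(p <= hi))
        else:
            for child in self.children:
                child.range_query(lo, hi, out)
        return out

# Index 100,000 synthetic LiDAR-like points and run a small box query.
rng = np.random.default_rng(1)
tree = OctreeNode(center=(0, 0, 0), half=100.0)
for p in rng.uniform(-100, 100, size=(100_000, 3)):
    tree.insert(p)
print(len(tree.range_query((-5, -5, -5), (5, 5, 5))), "points in the query box")
```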

  2. Meteoroid and debris special investigation group preliminary results: Size-frequency distribution and spatial density of large impact features on LDEF

    NASA Technical Reports Server (NTRS)

    See, Thomas H.; Hoerz, Friedrich; Zolensky, Michael E.; Allbrooks, Martha K.; Atkinson, Dale R.; Simon, Charles G.

    1992-01-01

    All craters greater than or equal to 500 microns and penetration holes greater than or equal to 300 microns in diameter on the entire Long Duration Exposure Facility (LDEF) were documented. Summarized here are the observations on the LDEF frame, which exposed aluminum 6061-T6 in 26 specific directions relative to LDEF's velocity vector. In addition, the opportunity arose to characterize the penetration holes in the A0178 thermal blankets, which pointed in nine directions. For each of the 26 directions, LDEF provided time-area products that approach those afforded by all previous space-retrieved materials combined. The objective here is to provide a factual database pertaining to the largest collisional events on the entire LDEF spacecraft with a minimum of interpretation. This database may serve to encourage and guide more interpretative efforts and modeling attempts.

  3. Pushing the limits of spatial resolution with the Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Lester, Daniel

    1994-01-01

    The limited spatial resolution available in the far-IR is one of the most serious limitations to the study of astronomical objects at these wavelengths, which carry information about the luminosity of dusty and obscured sources. At IR wavelengths shorter than 30 microns, ground-based telescopes with large apertures at superb sites achieve diffraction-limited performance close to the seeing limit in the optical. At millimeter wavelengths, ground-based interferometers achieve resolution that is close to this. The inaccessibility of the far-IR from the ground makes it difficult, however, to achieve complementary resolution in the far-IR. The 1983 IRAS survey, while extraordinarily sensitive, provides a sky map at a spatial resolution that is limited by detector size, far coarser than what is available at other wavelengths from the ground. The survey resolution is of order 4 arcmin in the 100 micron bandpass and 2 arcmin at 60 microns (IRAS Explanatory Supplement, 1988). Information on a scale of 1' is available for some sources from the CPC. Deconvolution and image resolution using this database is one of the subjects of this workshop.

  4. Change detection analysis of multi-temporal imagery to assess environmental development on AL Sammalyah Island, Abu-Dhabi

    NASA Astrophysics Data System (ADS)

    Essa, Salem M.; Loughland, R.; Khogali, Mohamed E.

    2005-10-01

    AL Sammalyah Island is considered an important protected area in Abu Dhabi Emirate. The island has witnessed high rates of land use change over the past few years, starting from the early 1990s. Change detection analysis was conducted to monitor the rate and spatial distribution of change occurring on the island. A three-phase research project has been implemented, centered on an integrated Geographic Information System (GIS) database for the island; the main objective of the current phase was to assess the rate and spatial distribution of change on the island using multi-date, large-scale aerial photographs. Results of the current study demonstrated that total vegetation cover increased from 3.742 km2 in 1994 to 5.101 km2 in 2005, an increase of 36.3% between 1994 and 2005. The study also showed that this increase in vegetation extent is mostly attributed to the increase in planted mangrove areas, which grew from 2.256 km2 in 1994 to 3.568 km2 in 2005, an increase of 58.2% in ten years. Remote sensing and GIS have been successfully used to quantify the extent, distribution and trajectories of change. The next step will be to complete the GIS database for AL Sammalyah Island.

  5. Altered spatial profile of distraction in people with schizophrenia.

    PubMed

    Leonard, Carly J; Robinson, Benjamin M; Hahn, Britta; Luck, Steven J; Gold, James M

    2017-11-01

    Attention is critical for effective processing of incoming information and has long been identified as a potential area of dysfunction in people with schizophrenia (PSZ). In the realm of visual processing, both spatial attention and feature-based attention are involved in biasing selection toward task-relevant stimuli and avoiding distraction. Evidence from multiple paradigms has suggested that PSZ may hyperfocus and have a narrower "spotlight" of spatial attention. In contrast, feature-based attention seems largely preserved, with some suggestion of increased processing of stimuli sharing the target-defining feature. In the current study, we examined the spatial profile of feature-based distraction using a task in which participants searched for a particular color target and attempted to ignore distractors that varied in distance from the target location and either matched or mismatched the target color. PSZ differed from healthy controls in terms of interference from peripheral distractors that shared the target color and were presented 200 ms before a central target. Specifically, PSZ showed an amplified gradient of spatial attention, with increased distraction by near distractors and less interference from far distractors. Moreover, consistent with hyperfocusing, individual differences in this spatial profile were correlated with positive symptoms, such that those with greater positive symptoms showed less distraction by target-colored distractors near the task-relevant location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Spatial Relation Predicates in Topographic Feature Semantics

    USGS Publications Warehouse

    Varanka, Dalia E.; Caro, Holly K.

    2013-01-01

    Topographic data are designed and widely used for base maps of diverse applications, yet the power of these information sources largely relies on the interpretive skills of map readers and relational database expert users once the data are in map or geographic information system (GIS) form. Advances in geospatial semantic technology offer data model alternatives for explicating concepts and articulating complex data queries and statements. To understand and enrich the vocabulary of topographic feature properties for semantic technology, English language spatial relation predicates were analyzed in three standard topographic feature glossaries. The analytical approach drew from disciplinary concepts in geography, linguistics, and information science. Five major classes of spatial relation predicates were identified from the analysis; representations for most of these are not widely available. The classes are: part-whole (which are commonly modeled throughout semantic and linked-data networks), geometric, processes, human intention, and spatial prepositions. These are commonly found in the ‘real world’ and support the environmental science basis for digital topographical mapping. The spatial relation concepts are based on sets of relation terms presented in this chapter, though these lists are not prescriptive or exhaustive. The results of this study make explicit the concepts forming a broad set of spatial relation expressions, which in turn form the basis for expanding the range of possible queries for topographical data analysis and mapping.

  7. A database and tool for boundary conditions for regional air quality modeling: description and evaluation

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.

    2013-09-01

    Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying Lateral Boundary Conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2000-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone vertical profiles. The results show performance is largely within uncertainty estimates for the Tropospheric Emission Spectrometer (TES) with some exceptions. The major difference shows a high bias in the upper troposphere along the southern boundary in January. This publication documents the global simulation database, the tool for conversion to LBC, and the fidelity of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.

  8. A global, open-source database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen; Jongman, Brenden; Bouwer, Laurens; Winsemius, Hessel; de Moel, Hans; Ward, Philip

    2016-04-01

    Accurate flood risk estimation is pivotal in that it enables risk-informed policies in disaster risk reduction, as emphasized in the recent Sendai Framework for Disaster Risk Reduction. To improve our understanding of flood risk, models are now capable of providing actionable risk information on the (sub)global scale. Still, the accuracy of their results is greatly limited by the lack of information on the standards of flood protection actually in place, and researchers must therefore make large assumptions about the extent of protection. With our work we propose a first global, open-source database of FLOod PROtection Standards, FLOPROS, covering a range of spatial scales. FLOPROS is structured in three layers of information and merges them into one consistent database: 1) the Design layer contains empirical information about the standard of protection presently in place; 2) the Policy layer contains intended protection standards from normative documents; 3) the Model layer uses a validated numerical approach to calculate protection standards for areas not covered by the other layers. The FLOPROS database can be used for more accurate risk assessment exercises across scales. As the database should be continually updated to reflect new interventions, we invite researchers and practitioners to contribute information. Further, we are looking for partners within the risk community to participate in additional strategies to increase the amount and accuracy of information contained in this first version of FLOPROS.
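
    The three-layer design lends itself to a simple precedence rule: use the Design-layer value where it exists, otherwise the Policy layer, otherwise the modelled estimate. The few lines below illustrate only that merging logic; the region names and return periods are invented for the example and are not values from FLOPROS.

```python
# Hypothetical protection standards (return periods in years) per region.
design = {"region_A": 100}                       # empirical, from existing defences
policy = {"region_A": 250, "region_B": 50}       # intended, from normative documents
model  = {"region_A": 80, "region_B": 30, "region_C": 10}   # modelled estimates

def merged_protection(region):
    """Return the protection standard with Design > Policy > Model precedence."""
    for layer in (design, policy, model):
        if region in layer:
            return layer[region]
    return None

for region in ("region_A", "region_B", "region_C"):
    print(region, "->", merged_protection(region), "year protection standard")
```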

  9. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs so as to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
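
    The temporal half of such a correction, matching the long-term mean and variance of the model series to the observations at a grid cell, reduces to a z-score rescaling. The sketch below shows only that step on synthetic daily series; the spatial random-field construction described in the abstract is not reproduced, and for precipitation a quantile-mapping or multiplicative form would usually be preferred to avoid negative values.

```python
import numpy as np

def bias_correct(gcm, obs):
    """Rescale a GCM time series so its long-term mean and variance match the
    observations at the same grid cell (z-score mapping; illustration only)."""
    gcm = np.asarray(gcm, dtype=float)
    z = (gcm - gcm.mean()) / gcm.std()
    return obs.mean() + z * obs.std()

rng = np.random.default_rng(2)
obs = rng.gamma(shape=2.0, scale=3.0, size=14600)    # ~40 years of daily "observations"
gcm = rng.gamma(shape=1.5, scale=5.0, size=14600)    # biased model series
corrected = bias_correct(gcm, obs)
print(f"obs       mean/std: {obs.mean():.2f} / {obs.std():.2f}")
print(f"corrected mean/std: {corrected.mean():.2f} / {corrected.std():.2f}")
```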

  10. A pilot GIS database of active faults of Mt. Etna (Sicily): A tool for integrated hazard evaluation

    NASA Astrophysics Data System (ADS)

    Barreca, Giovanni; Bonforte, Alessandro; Neri, Marco

    2013-02-01

    A pilot GIS-based system has been implemented for the assessment and analysis of hazard related to active faults affecting the eastern and southern flanks of Mt. Etna. The system structure was developed in the ArcGis® environment and consists of different thematic datasets that include spatially-referenced arc-features and an associated database. Arc-type features, georeferenced into the WGS84 Ellipsoid UTM zone 33 Projection, represent the five main fault systems that develop in the analysed region. The backbone of the GIS-based system is constituted by the large amount of information which was collected from the literature and then stored and properly geocoded in a digital database. This consists of thirty-five alpha-numeric fields which include all fault parameters available from the literature, such as location, kinematics, landform, slip rate, etc. Although the system has been implemented according to the most common procedures used by GIS developers, the architecture and content of the database represent a pilot backbone for the digital storage of fault parameters, providing a powerful tool for modelling hazard related to the active tectonics of Mt. Etna. The database collects, organises and shares all currently available scientific information about the active faults of the volcano. Furthermore, thanks to the strong effort spent on defining the fields of the database, the structure proposed in this paper is open to the collection of further data coming from future improvements in the knowledge of the fault systems. By layering additional user-specific geographic information and managing the proposed database (topological querying), a great diversity of hazard and vulnerability maps can be produced by the user. This is a proposal of a backbone for a comprehensive geographical database of fault systems, applicable to other sites as well.

  11. Morphological classification and spatial distribution of Philippine volcanoes

    NASA Astrophysics Data System (ADS)

    Paguican, E. M. R.; Kervyn, M.; Grosse, P.

    2016-12-01

    The Philippines is an island arc composed of two major blocks: the aseismic Palawan microcontinental block and the Philippine mobile belt. It is bounded by opposing subduction zones, with the left-lateral Philippine Fault running north-south. This setting is ideal for volcano formation and growth, making it one of the best places to study the controls on island arc volcano morphometry and evolution. In this study, we created a database of volcanic edifices and structures identified on the SRTM 30 m digital elevation models (DEM). We computed the morphometry of each edifice using MORVOLC, an IDL code for generating quantitative parameters based on a defined volcano base and DEM. Morphometric results illustrate the large range of sizes and volumes of Philippine volcanoes. Hierarchical classification by principal component analysis distinguishes between large massifs, large cones/sub-cones, small shields/sub-cones, and small cones, based mainly on size (volume, basal width) and steepness (height/basal width ratio, average slopes). Poisson Nearest Neighbor analysis was used to examine the spatial distribution of volcano centroids. Spatial distribution of the different types of volcanoes suggests that large volcanic massifs formed on thickened crust. Although all the volcanic fields and arcs are a response to tectonic activity such as subduction or rifting, only West Luzon, North and South Mindanao, and Eastern Philippines volcanic arcs and Basilan, Macolod, and Maramag volcanic fields present a statistical clustering of volcanic centers. Spatial distribution and preferential alignment of edifices in all volcanic fields confirm that regional structures had some control on their formation. Volcanoes start either as steep cones or as less steep sub-cones and shields. They then grow into large cones, sub-cones and eventually into massifs as eruption focus shifts within the volcano and new eruptive material is deposited on the slopes. Examination of the directions of volcano collapse scars and erosional amphitheater valleys suggests that, during their development, volcano growth is affected by movement of underlying tectonic structures, weight and stability of the growing edifice, structure and composition of the substrata, and intense erosion associated with tropical rainfall.
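
    One standard way to test whether volcano centroids cluster is a nearest-neighbour statistic such as the Clark-Evans ratio, shown below on synthetic point sets. It illustrates the type of test only; the study's Poisson nearest-neighbour analysis may differ in its exact formulation and edge corrections.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Clark-Evans ratio R: observed mean nearest-neighbour distance divided by
    the value expected for a random (Poisson) pattern of the same density.
    R < 1 indicates clustering, R > 1 regularity."""
    points = np.asarray(points, dtype=float)
    dists, _ = cKDTree(points).query(points, k=2)   # k=2: nearest neighbour besides self
    observed = dists[:, 1].mean()
    density = len(points) / area
    expected = 0.5 / np.sqrt(density)
    return observed / expected

rng = np.random.default_rng(3)
random_pts = rng.uniform(0, 100, size=(200, 2))
clustered = np.vstack([rng.normal(c, 2.0, size=(50, 2))
                       for c in ((20, 20), (70, 60), (40, 80), (80, 20))])
print("random pattern    R =", round(clark_evans(random_pts, 100 * 100), 2))
print("clustered pattern R =", round(clark_evans(clustered, 100 * 100), 2))
```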

  12. Ontology-based geospatial data query and integration

    USGS Publications Warehouse

    Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.

    2008-01-01

    Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessed through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as Web Feature Service (WFS). While these standards help client applications gain access to heterogeneous data stored in different formats from diverse sources, the usability of the access is limited due to the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten as WFS getFeature requests and SQL queries to the database. The method also has the benefit of being able to utilize existing tools for databases, WFS, and GML while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
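
    The flavour of such a rewrite can be shown with a toy example that turns a single "property equals literal" constraint into a standard WFS 1.1.0 GetFeature request with an OGC filter. The endpoint, feature type and property names are hypothetical, and the paper's actual rewriting rules (covering ontology classes, spatial operators and SQL targets) are far richer than this.

```python
from urllib.parse import urlencode

def to_getfeature_url(base_url, type_name, property_name, literal):
    """Rewrite a simple 'property = literal' constraint into a WFS 1.1.0
    GetFeature request carrying an OGC PropertyIsEqualTo filter."""
    ogc_filter = (
        '<Filter xmlns="http://www.opengis.net/ogc">'
        f"<PropertyIsEqualTo><PropertyName>{property_name}</PropertyName>"
        f"<Literal>{literal}</Literal></PropertyIsEqualTo></Filter>"
    )
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": type_name,
        "filter": ogc_filter,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and feature type, for illustration only.
print(to_getfeature_url("http://example.org/wfs", "topp:rivers", "name", "Yangtze"))
```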

  13. ADAM-M Data and Information

    Atmospheric Science Data Center

    2017-05-11

    Creating a Unified Airborne Database for Assessment and Validation of Global Models of Atmospheric ... (3) To generate a standardized in-situ observational database with best possible matching temporal and spatial scales to model ...

  14. Automated Topographic Change Detection via DEM Differencing at Large Scales Using the ArcticDEM Database

    NASA Astrophysics Data System (ADS)

    Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.

    2016-12-01

    In the last decade, high-resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropogenic processes. However, the trade-off between spatial coverage and spatial-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce an elevation model with nearly 100% coverage above 60°N, offering the first polar, high-resolution (2-8 m, by region) dataset with broad spatial coverage, often with multiple repeats in areas of particular interest to geoscientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.
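
    At its simplest, the differencing step amounts to removing the vertical offset between two DEMs over terrain assumed stable and then subtracting them; the numpy sketch below does exactly that on synthetic grids. Real co-registration of ArcticDEM strips also involves horizontal shifts and outlier handling, which are not shown, and the array names here are illustrative.

```python
import numpy as np

def difference_dems(dem_early, dem_late, stable_mask):
    """Vertically co-register two DEMs using the median offset over stable
    terrain, then return the elevation-change grid (late minus early)."""
    offset = np.nanmedian((dem_late - dem_early)[stable_mask])
    return (dem_late - offset) - dem_early

rng = np.random.default_rng(4)
dem_t0 = rng.normal(500, 50, (400, 400))
dem_t1 = dem_t0 + 1.8                       # a 1.8 m registration bias ...
dem_t1[100:150, 100:150] -= 12.0            # ... plus real surface lowering in one patch
stable = np.ones_like(dem_t0, dtype=bool)
stable[100:150, 100:150] = False            # exclude the changing area from registration

dh = difference_dems(dem_t0, dem_t1, stable)
print("median change over stable terrain:", round(float(np.median(dh[stable])), 2), "m")
print("mean change in the lowered patch :", round(float(dh[100:150, 100:150].mean()), 2), "m")
```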

  15. Water resources of the Black Sea Basin at high spatial and temporal resolution

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, Elham; Abbaspour, Karim C.; Srinivasan, Raghvan; Bacu, Victor; Lehmann, Anthony

    2014-07-01

    The pressure on water resources, deteriorating water quality, and the uncertainties associated with climate change create an environment of conflict in large and complex river systems. The Black Sea Basin (BSB), in particular, suffers from ecological unsustainability and inadequate resource management, leading to severe environmental, social, and economic problems. To better tackle future challenges, we used the Soil and Water Assessment Tool (SWAT) to model the hydrology of the BSB, coupling water quantity, water quality, and crop yield components. The hydrological model of the BSB was calibrated and validated considering sensitivity and uncertainty analysis. River discharges, nitrate loads, and crop yields were used to calibrate the model. Employing grid technology improved calibration computation time by more than an order of magnitude. We calculated components of water resources such as river discharge, infiltration, aquifer recharge, soil moisture, and actual and potential evapotranspiration. Furthermore, available water resources were calculated at the subbasin spatial and monthly temporal levels. Within this framework, a comprehensive database of the BSB was created to fill the existing gaps in water resources data in the region. In this paper, we discuss the challenges of building a large-scale model in fine spatial and temporal detail. This study provides the basis for further research on the impacts of climate and land use change on water resources in the BSB.

  16. Modernization and multiscale databases at the U.S. Geological Survey

    USGS Publications Warehouse

    Morrison, J.L.

    1992-01-01

    The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.

  17. Geologic map and map database of parts of Marin, San Francisco, Alameda, Contra Costa, and Sonoma counties, California

    USGS Publications Warehouse

    Blake, M.C.; Jones, D.L.; Graymer, R.W.; digital database by Soule, Adam

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (mageo.txt, mageo.pdf, or mageo.ps), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  18. A spatial-temporal system for dynamic cadastral management.

    PubMed

    Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie

    2006-03-01

    A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded model of Base State with Amendments (BSA), is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data, through a succinct engine, are stored in standard relational database management systems (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB, and the procedures-tracing DB. The efficiency of database operation is improved by the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies where financial resources are limited.
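
    The abstract does not give implementation details, but the Base State with Amendments idea can be illustrated with a minimal sketch: store a base state per parcel plus time-stamped amendments, and rebuild the state at a query date by replaying the amendments. All names and records below are hypothetical, not taken from the paper.

        # Minimal sketch of the BSA (Base State with Amendments) idea: a parcel's
        # state at time T is its stored base state plus all amendments recorded
        # up to T. Table and field names are illustrative only.
        from datetime import date

        base_state = {"parcel_42": {"owner": "A", "area_m2": 1200}}

        amendments = [  # (valid_from, parcel_id, changed fields)
            (date(2004, 3, 1), "parcel_42", {"owner": "B"}),
            (date(2005, 7, 15), "parcel_42", {"area_m2": 1150}),
        ]

        def state_at(parcel_id, when):
            """Replay amendments with valid_from <= when onto the base state."""
            state = dict(base_state[parcel_id])
            for valid_from, pid, change in sorted(amendments, key=lambda a: a[0]):
                if pid == parcel_id and valid_from <= when:
                    state.update(change)
            return state

        print(state_at("parcel_42", date(2004, 12, 31)))  # owner B, area 1200
        print(state_at("parcel_42", date(2006, 1, 1)))    # owner B, area 1150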

  19. Automated processing of shoeprint images based on the Fourier transform for use in forensic science.

    PubMed

    de Chazal, Philip; Flynn, John; Reilly, Richard B

    2005-03-01

    The development of a system for automatically sorting a database of shoeprint images based on the outsole pattern in response to a reference shoeprint image is presented. The database images are sorted so that those from the same pattern group as the reference shoeprint are likely to be at the start of the list. A database of 476 complete shoeprint images belonging to 140 pattern groups was established with each group containing two or more examples. A panel of human observers performed the grouping of the images into pattern categories. Tests of the system using the database showed that the first-ranked database image belongs to the same pattern category as the reference image 65 percent of the time and that a correct match appears within the first 5 percent of the sorted images 87 percent of the time. The system has translational and rotational invariance so that the spatial positioning of the reference shoeprint images does not have to correspond with the spatial positioning of the shoeprint images of the database. The performance of the system for matching partial-prints was also determined.
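
    The paper's exact pipeline is not reproduced here, but the translation-invariance property it relies on can be sketched: the magnitude of the 2-D Fourier transform is unchanged by a shift of the print, so correlating magnitude spectra compares outsole patterns regardless of positioning (rotation invariance would additionally require resampling the spectrum on a polar grid, which is omitted). The images below are synthetic.

        # Illustrative sketch (not the authors' algorithm): compare shoeprints via
        # their FFT magnitude spectra, which are invariant to translation.
        import numpy as np

        def spectrum_signature(image):
            """Standardized log-magnitude spectrum of a grayscale image."""
            mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
            sig = np.log1p(mag)
            return (sig - sig.mean()) / sig.std()

        def similarity(img_a, img_b):
            """Correlation between spectral signatures; higher means more alike."""
            a, b = spectrum_signature(img_a), spectrum_signature(img_b)
            return float((a * b).mean())

        rng = np.random.default_rng(0)
        ref = rng.random((128, 128))
        shifted = np.roll(ref, shift=(10, -7), axis=(0, 1))   # translated copy
        print(similarity(ref, shifted))                 # ~1.0, identical spectra
        print(similarity(ref, rng.random((128, 128))))  # near 0.0, unrelated print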

  20. A motional Stark effect diagnostic analysis routine for improved resolution of iota in the core of the large helical device.

    PubMed

    Dobbins, T J; Ida, K; Suzuki, C; Yoshinuma, M; Kobayashi, T; Suzuki, Y; Yoshida, M

    2017-09-01

    A new Motional Stark Effect (MSE) analysis routine has been developed for improved spatial resolution in the core of the Large Helical Device (LHD). The routine was developed to reduce the dependency of the analysis on the Pfirsch-Schlüter (PS) current in the core. The technique used the change in the polarization angle as a function of flux in order to find the value of diota/dflux at each measurement location. By integrating inwards from the edge, the iota profile can be recovered from this method. This reduces the results' dependency on the PS current because the effect of the PS current on the MSE measurement is almost constant as a function of flux in the core; therefore, the uncertainty in the PS current has a minimal effect on the calculation of the iota profile. In addition, the VMEC database was remapped from flux into r/a space by interpolating in mode space in order to improve the database core resolution. These changes resulted in a much smoother iota profile, conforming more to the physics expectations of standard discharge scenarios in the core of the LHD.
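
    A minimal numerical sketch of the inward-integration step, assuming synthetic values rather than LHD data: accumulate the measured d(iota)/d(flux) from a known edge value toward the core.

        # Recover an iota profile by integrating d(iota)/d(flux) inward from the
        # edge. The gradient and edge value below are placeholders.
        import numpy as np

        flux = np.linspace(1.0, 0.0, 21)          # normalized flux, edge -> core
        diota_dflux = 0.8 * np.ones_like(flux)    # assumed measured gradient
        iota_edge = 1.2

        # Trapezoidal accumulation from the edge toward the core.
        diota = np.concatenate(([0.0], np.cumsum(
            0.5 * (diota_dflux[1:] + diota_dflux[:-1]) * np.diff(flux))))
        iota = iota_edge + diota

        print(iota[0], iota[-1])   # edge value and reconstructed core value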

  1. Enhancement of Spatial Ability in Girls in a Single-Sex Environment through Spatial Experience and the Impact on Information Seeking

    ERIC Educational Resources Information Center

    Swarlis, Linda L.

    2008-01-01

    The test scores of spatial ability for women lag behind those of men in many spatial tests. On the Mental Rotations Test (MRT), a significant gender gap has existed for over 20 years and continues to exist. High spatial ability has been linked to efficiencies in typical computing tasks including Web and database searching, text editing, and…

  2. The methane distribution on Titan: high resolution spectroscopy in the near-IR with Keck NIRSPEC/AO

    NASA Astrophysics Data System (ADS)

    Adamkovics, Mate; Mitchell, Jonathan L.

    2014-11-01

    The distribution of methane on Titan is a diagnostic of regional scale meteorology and large scale atmospheric circulation. The observed formation of clouds and the transport of heat through the atmosphere both depend on spatial and temporal variations in methane humidity. We have performed observations to measure the distribution of methane on Titan using high spectral resolution near-IR (H-band) observations made with NIRSPEC, with adaptive optics, at Keck Observatory in July 2014. This work builds on previous attempts at this measurement with improvements in the observing protocol and data reduction, together with increased integration times. Radiative transfer models using line-by-line calculation of methane opacities from the HITRAN2012 database are used to retrieve methane abundances. We will describe analysis of the reduced observations, which show latitudinal spatial variation in the region of the spectrum that is thought to be sensitive to methane abundance. Quantifying the methane abundance variation requires models that include the spatial variation in surface albedo and the meridional haze gradient; we will describe (currently preliminary) analysis of the methane distribution and uncertainties in the retrieval.

  3. Conscious visual memory with minimal attention.

    PubMed

    Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F

    2017-02-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli

    2007-06-01

    Based on an in-depth analysis of the current status and existing problems of GIS applications in epidemiology, this paper proposes a method and process for establishing a spatial decision support system for AIDS epidemic prevention by integrating COM GIS, spatial database, GPS, remote sensing, and communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such a system is how to integrate AIDS spreading models with GIS. The capabilities of GIS applications in AIDS epidemic prevention are described first. Some mature epidemic spreading models are then discussed in order to extract the computation parameters. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and relevant geospatial technologies, in which the GIS and model-running platforms share a common spatial database and the computing results can be spatially visualized on desktop or web GIS clients. Finally, a complete solution for establishing the decision support system for AIDS epidemic prevention is offered based on the model integration methods and ESRI COM GIS software packages. The overall decision support system is composed of sub-systems for data acquisition, network communication, model integration, the AIDS epidemic information spatial database, epidemic information querying and statistical analysis, dynamic epidemic surveillance, spatial analysis and decision support, and Web GIS-based epidemic information publishing.

  5. Learning Efficient Spatial-Temporal Gait Features with Deep Learning for Human Identification.

    PubMed

    Liu, Wu; Zhang, Cheng; Ma, Huadong; Li, Shuangqun

    2018-02-06

    The integration of the latest breakthroughs in bioinformatics technology on one side and artificial intelligence on the other enables remarkable advances in the fields of intelligent security guard, computational biology, healthcare, and so on. Among them, biometrics-based automatic human identification is one of the most fundamental and significant research topics. Human gait, a biometric feature that can be acquired remotely and is robust and secure, has gained significant attention in biometrics-based human identification. However, existing methods cannot handle well the indistinctive inter-class differences and large intra-class variations of human gait in real-world situations. In this paper, we develop efficient spatial-temporal gait features with deep learning for human identification. First, we propose a gait energy image (GEI) based Siamese neural network to automatically extract robust and discriminative spatial gait features. Furthermore, we exploit deep 3-dimensional convolutional networks to learn convolutional 3D (C3D) representations as temporal gait features. Finally, the GEI and C3D gait features are embedded into the null space by the Null Foley-Sammon Transform (NFST). In the new space, the spatial-temporal features are combined with distance metric learning to drive the similarity metric to be small for pairs of gait from the same person and large for pairs from different persons. Experiments on the world's largest gait database show that our framework impressively outperforms state-of-the-art methods.

  6. Attempting to physically explain space-time correlation of extremes

    NASA Astrophysics Data System (ADS)

    Bernardara, Pietro; Gailhard, Joel

    2010-05-01

    Spatial and temporal clustering of hydro-meteorological extreme events is well documented. Moreover, the statistical parameters characterizing their local frequencies of occurrence show clear spatial patterns. Thus, in order to robustly assess the hydro-meteorological hazard, statistical models need to be able to take into account spatial and temporal dependencies. Statistical models considering long-term correlation for quantifying and qualifying temporal and spatial dependencies are available, such as the multifractal approach. Furthermore, the development of regional frequency analysis techniques allows estimating the frequency of occurrence of extreme events taking into account spatial patterns in the behaviour of extreme quantiles. However, in order to understand the origin of spatio-temporal clustering, an attempt should be made to find a physical explanation. Here, some statistical evidence of spatio-temporal correlation and spatial patterns of extreme behaviour is given on a large database of more than 400 rainfall and discharge series in France. In particular, the spatial distribution of multifractal and Generalized Pareto distribution parameters shows evident correlation patterns in the behaviour of the frequency of occurrence of extremes. It is then shown that the identification of atmospheric circulation patterns (weather types) can physically explain the temporal clustering of extreme rainfall events (seasonality) and the spatial pattern of the frequency of occurrence. Moreover, coupling this information with the hydrological modelling of a watershed (as in the Schadex approach), an explanation of the spatio-temporal distribution of extreme discharge can also be provided. We finally show that a hydro-meteorological approach (such as the Schadex approach) can explain and take into account space and time dependencies of hydro-meteorological extreme events.
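
    One statistical building block mentioned above, fitting a Generalized Pareto distribution to threshold exceedances at a single station, can be sketched as follows; the rainfall series is synthetic and the threshold choice is illustrative only.

        # Peaks-over-threshold fit of a Generalized Pareto distribution (GPD) to
        # exceedances of a synthetic daily rainfall series.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        daily_rain = rng.gamma(shape=0.8, scale=6.0, size=20 * 365)  # mm/day

        threshold = np.quantile(daily_rain, 0.98)
        excesses = daily_rain[daily_rain > threshold] - threshold

        # Fit the GPD to the excesses with the location fixed at zero.
        shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

        # Illustrative high quantile of the excess distribution, expressed as a
        # rainfall amount above the threshold.
        high_quantile = threshold + stats.genpareto.ppf(0.99, shape, loc=loc, scale=scale)
        print(shape, scale, high_quantile)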

  7. Geolocation of man-made reservoirs across terrains of varying complexity using GIS

    USGS Publications Warehouse

    Mixon, D.M.; Kinner, D.A.; Stallard, R.F.; Syvitski, J.P.M.

    2008-01-01

    The Reservoir Sedimentation Survey Information System (RESIS) is one of the world's most comprehensive databases of reservoir sedimentation rates, comprising nearly 6000 surveys for 1819 reservoirs across the continental United States. Sediment surveys in the database date from 1904 to 1999, though more than 95% of surveys were entered prior to 1980, making RESIS largely a historical database. The use of this database for large-scale studies has been limited by the lack of precise coordinates for the reservoirs. Many of the reservoirs are relatively small structures and do not appear on current USGS topographic maps. Others have been renamed or have only approximate (i.e. township and range) coordinates. This paper presents a method scripted in ESRI's ARC Macro Language (AML) to locate the reservoirs on digital elevation models using information available in RESIS. The script also delineates the contributing watersheds and compiles several hydrologically important parameters for each reservoir. Evaluation of the method indicates that, for watersheds larger than 5 km2, the correct outlet is identified over 80% of the time. The importance of identifying the watershed outlet correctly depends on the application. Our intent is to collect spatial data for watersheds across the continental United States and describe the land use, soils, and topography for each reservoir's watershed. Because of local landscape similarity in these properties, we show that choosing the incorrect watershed does not necessarily mean that the watershed characteristics will be misrepresented. We present a measure termed terrain complexity and examine its relationship to geolocation success rate and its influence on the similarity of nearby watersheds. © 2008 Elsevier Ltd. All rights reserved.

  8. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used in this study are based on tuberculosis (TB) cases registered in the city of Barcelona during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
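
    The paper's services themselves are not reproduced here; the sketch below only illustrates the kind of density surface such a geoprocessing service might compute from geocoded case locations, using a Gaussian kernel density estimate on synthetic coordinates.

        # Compute a density surface over a regular grid from geocoded points.
        # Coordinates are synthetic projected values, not the Barcelona dataset.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)
        x = rng.normal(431000, 1500, size=300)   # easting of cases (m), synthetic
        y = rng.normal(4583000, 1500, size=300)  # northing of cases (m), synthetic

        kde = gaussian_kde(np.vstack([x, y]))

        gx, gy = np.meshgrid(np.linspace(x.min(), x.max(), 200),
                             np.linspace(y.min(), y.max(), 200))
        density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
        print(density.shape, density.max())      # grid ready to render as a map layer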

  9. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods Data used in this study are based on tuberculosis (TB) cases registered in the city of Barcelona during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392

  10. High-resolution inventory of technologies, activities, and emissions of coal-fired power plants in China from 1990 to 2010

    NASA Astrophysics Data System (ADS)

    Liu, F.; Zhang, Q.; Tong, D.; Zheng, B.; Li, M.; Huo, H.; He, K. B.

    2015-07-01

    This paper, which focuses on emissions from China's coal-fired power plants during 1990-2010, is the second in a series of papers that aims to develop a high-resolution emission inventory for China. This is the first time that emissions from China's coal-fired power plants have been estimated at the unit level for a 20-year period. This inventory is constructed from a unit-based database compiled in this study, named the China coal-fired Power plant Emissions Database (CPED), which includes detailed information on the technologies, activity data, operating conditions, emission factors, and locations of individual units, and is supplemented with aggregated data where unit-based information is not available. Between 1990 and 2010, compared to a 479 % growth in coal consumption, emissions from China's coal-fired power plants increased by 56, 335 and 442 % for SO2, NOx and CO2, respectively, and decreased by 23 % for PM2.5. Driven by accelerated economic growth, large power plants were constructed throughout the country after 2000, resulting in dramatic growth in emissions. The growth trend of emissions has been effectively curbed since 2005 due to strengthened emission control measures, including the installation of flue-gas desulfurization (FGD) systems and the optimization of the generation fleet mix by promoting large units and decommissioning small ones. Compared to previous emission inventories, CPED significantly improves the spatial resolution and temporal profile of the power plant emission inventory in China by extensive use of underlying data at the unit level. The new inventory developed in this study will enable a close examination of temporal and spatial variations of power plant emissions in China and will help to improve the performance of chemical transport models by providing more accurate emission data.
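
    A unit-based estimate of this kind reduces to activity data multiplied by an emission factor and corrected for installed control devices; the sketch below illustrates the arithmetic with placeholder numbers, not CPED values.

        # Emission = coal use x emission factor x (1 - removal efficiency of the
        # installed control device). All figures are placeholders.
        UNITS = [
            # (unit id, coal use [t/yr], SO2 emission factor [kg/t coal], FGD fitted?)
            ("unit_A", 1_200_000, 15.0, True),
            ("unit_B",   400_000, 18.0, False),
        ]
        FGD_REMOVAL = 0.85  # assumed flue-gas desulfurization removal efficiency

        def so2_emissions(coal_t, ef_kg_per_t, has_fgd):
            removal = FGD_REMOVAL if has_fgd else 0.0
            return coal_t * ef_kg_per_t * (1.0 - removal) / 1000.0  # tonnes SO2

        total = sum(so2_emissions(c, ef, fgd) for _, c, ef, fgd in UNITS)
        print(f"total SO2: {total:.0f} t/yr")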

  11. A completely automated CAD system for mass detection in a large mammographic database.

    PubMed

    Bellotti, R; De Carlo, F; Tangaro, S; Gargano, G; Maggipinto, G; Castellano, M; Massafra, R; Cascio, D; Fauci, F; Magro, R; Raso, G; Lauria, A; Forni, G; Bagnasco, S; Cerello, P; Zanon, E; Cheran, S C; Lopez Torres, E; Bottigli, U; Masala, G L; Oliva, P; Retico, A; Fantacci, M E; Cataldo, R; De Mitri, I; De Nunzio, G

    2006-08-01

    Mass localization plays a crucial role in computer-aided detection (CAD) systems for the classification of suspicious regions in mammograms. In this article we present a completely automated classification system for the detection of masses in digitized mammographic images. The system we discuss consists of three processing levels: (a) Image segmentation for the localization of regions of interest (ROIs). This step relies on an iterative dynamical threshold algorithm able to select iso-intensity closed contours around gray level maxima of the mammogram. (b) ROI characterization by means of textural features computed from the gray tone spatial dependence matrix (GTSDM), containing second-order spatial statistics information on the pixel gray level intensity. As the images under study were recorded in different centers and with different machine settings, eight GTSDM features were selected so as to be invariant under monotonic transformation. In this way, the images do not need to be normalized, as the adopted features depend only on the texture rather than on the gray tone levels. (c) ROI classification by means of a neural network, with supervision provided by the radiologist's diagnosis. The CAD system was evaluated on a large database of 3369 mammographic images [2307 negative, 1062 pathological (or positive), containing at least one confirmed mass, as diagnosed by an expert radiologist]. To assess the performance of the system, receiver operating characteristic (ROC) and free-response ROC analysis were employed. The area under the ROC curve was found to be Az = 0.783 +/- 0.008 for the ROI-based classification. When evaluating the accuracy of the CAD against the radiologist-drawn boundaries, 4.23 false positives per image are found at 80% of mass sensitivity.
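
    The texture step can be illustrated with a small sketch: build a gray-tone co-occurrence matrix for one pixel offset and derive two classic second-order features from it. This is not the authors' exact eight-feature set; the ROI below is random data.

        # Gray-tone spatial dependence (co-occurrence) matrix and two second-order
        # texture features, in plain NumPy.
        import numpy as np

        def glcm(image, levels=8, offset=(0, 1)):
            """Symmetric, normalized co-occurrence matrix of a quantized image."""
            q = (image.astype(float) / image.max() * (levels - 1)).astype(int)
            dr, dc = offset
            a = q[:q.shape[0] - dr, :q.shape[1] - dc].ravel()
            b = q[dr:, dc:].ravel()
            m = np.zeros((levels, levels))
            np.add.at(m, (a, b), 1)
            m = m + m.T                      # make the matrix symmetric
            return m / m.sum()

        def contrast(p):
            i, j = np.indices(p.shape)
            return float(((i - j) ** 2 * p).sum())

        def homogeneity(p):
            i, j = np.indices(p.shape)
            return float((p / (1.0 + np.abs(i - j))).sum())

        roi = np.random.default_rng(7).integers(0, 256, size=(64, 64))
        p = glcm(roi)
        print(contrast(p), homogeneity(p))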

  12. Surficial geologic map of the Amboy 30' x 60' quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2010-01-01

    The surficial geologic map of the Amboy 30' x 60' quadrangle presents characteristics of surficial materials for an area of approximately 5,000 km2 in the eastern Mojave Desert of southern California. This map consists of new surficial mapping conducted between 2000 and 2007, as well as compilations from previous surficial mapping. Surficial geologic units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects following deposition, and, where appropriate, the lithologic nature of the material. Many physical properties were noted and measured during the geologic mapping. This information was used to classify surficial deposits and to understand their ecological importance. We focus on physical properties that drive hydrologic, biologic, and physical processes such as particle-size distribution (PSD) and bulk density. The database contains point data representing locations of samples for both laboratory-determined physical properties and semiquantitative field-based information. We include the locations of all field observations and note the type of information collected in the field to assist in assessing the quality of the mapping. The publication is separated into three parts: documentation, spatial data, and printable map graphics of the database. Documentation includes this pamphlet, which provides a discussion of the surficial geology and units and the map. Spatial data are distributed as an ArcGIS Geodatabase in Microsoft Access format and are accompanied by a readme file, which describes the database contents, and FGDC metadata for the spatial map information. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files that provide a view of the spatial database at the mapped scale.

  13. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time with different technical equipment, methods, and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform bringing spatial and non-spatial databases together and providing visualization and analysis tools. The 3D components of the platform in particular use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. It is further implemented in a spatial database which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As the service for delivering the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and to deliver a web-friendly format for WebGL rendering. Finally a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).

  14. Assessment and mapping of water pollution indices in zone-III of municipal corporation of hyderabad using remote sensing and geographic information system.

    PubMed

    Asadi, S S; Vuppala, Padmaja; Reddy, M Anji

    2005-01-01

    A preliminary survey of the area under Zone-III of the MCH was undertaken to assess the ground water quality, demonstrate its spatial distribution, and correlate it with land use patterns using advanced techniques of remote sensing and geographic information systems (GIS). Twenty-seven ground water samples were collected and their chemical analysis was done to form the attribute database. A water quality index was calculated from the measured parameters, based on which the study area was classified into five groups with respect to the suitability of water for drinking purposes. Thematic maps, viz. base map, road network, drainage and land use/land cover, were prepared from IRS-1D PAN + LISS III merged satellite imagery, forming the spatial database. The attribute database was integrated with the spatial sampling locations map in Arc/Info, and maps showing the spatial distribution of water quality parameters were prepared in ArcView. Results indicated that high concentrations of total dissolved solids (TDS), nitrates, fluorides and total hardness were observed in a few industrial and densely populated areas, indicating deteriorated water quality, while the other areas exhibited moderate to good water quality.
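
    The abstract does not state the index formula used; the sketch below shows a generic weighted-arithmetic water quality index of the type common in such studies, with illustrative standards, weights, and sample values only.

        # Weighted-arithmetic water quality index: weights are taken inversely
        # proportional to the permissible limits. Limits and sample are illustrative.
        STANDARDS = {          # permissible limit (mg/L)
            "TDS": 500.0, "nitrate": 45.0, "fluoride": 1.0, "total_hardness": 300.0,
        }

        def wqi(sample):
            weights = {p: 1.0 / s for p, s in STANDARDS.items()}
            wsum = sum(weights.values())
            quality = {p: 100.0 * sample[p] / STANDARDS[p] for p in STANDARDS}
            return sum(weights[p] * quality[p] for p in STANDARDS) / wsum

        sample = {"TDS": 780.0, "nitrate": 52.0, "fluoride": 1.4, "total_hardness": 410.0}
        print(round(wqi(sample), 1))   # values above 100 suggest unsuitability for drinking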

  15. In need of combined topography and bathymetry DEM

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Hilde, T.

    2003-04-01

    In many geoscience applications, digital elevation models (DEMs) are now more commonly used at different scales and greater resolution due to the great advancement in computer technology. Increasing the accuracy/resolution of the model and the coverage of the terrain (global model) has been the goal of users as mapping technology has improved and computers have become faster and cheaper. The ETOPO5 (5 arc minutes spatial resolution land and seafloor model), initially developed in 1988 by Margo Edwards, then at Washington University, St. Louis, MO, has been the only global terrain model for a long time, and it is now being replaced by three new topographic and bathymetric DEMs, i.e. the ETOPO2 (2 arc minutes spatial resolution land and seafloor model), the GTOPO30 land model with a spatial resolution of 30 arc seconds (ca. 1 km at the equator) and the 'GEBCO 1-MINUTE GLOBAL BATHYMETRIC GRID' ocean floor model with a spatial resolution of 1 arc minute (ca. 2 km at the equator). These DEMs are products of projects through which compilation and reprocessing of existing and/or new datasets were made to meet users' new requirements. These ongoing efforts are valuable and support should be continued to refine and update these DEMs. On the other hand, a different approach to create a global bathymetric (seafloor) database exists. A method to estimate the seafloor topography from satellite altimetry combined with existing ships' conventional sounding data was devised and a beautiful global seafloor database created and made public by W.H. Smith and D.T. Sandwell in 1997. The big advantage of this database is the uniformity of coverage, i.e. there is no large area where depths are missing. It has a spatial resolution of 2 arc minutes. Another important effort is found in making regional, not global, seafloor databases with much finer resolutions in many countries. The Japan Hydrographic Department has compiled and released a 500m-grid topography database around Japan, J-EGG500, in 1999. Although the coverage of this database is only a small portion of the Earth, the database has been highly appreciated in the academic community, and received with surprise by the general public when the database was displayed in 3D imagery to show its quality. This database could be rather smoothly combined with the finer land DEM of 250m spatial resolution (Japan250m.grd, K. Kisimoto, 2000). One of the most important applications of this combined DEM of topography and bathymetry is tsunami modeling. Understanding of the coastal environment, management and development of the coastal region are other fields in need of these data. There is, however, an important issue to consider when we create a combined DEM of topography and bathymetry at finer resolutions. The problem arises from the discrepancy of the standard datum planes or reference levels used for topographic leveling and bathymetric sounding. Land topography (altitude) is defined by leveling from the single reference point determined by average mean sea level, in other words, land height is measured from the geoid. On the other hand, depth charts are made based on depth measured from locally determined reference sea surface level, and this value of sea surface level is taken from the long term average of the lowest tidal height. So, to create a combined DEM of topography and bathymetry at very fine scale, we need to avoid this inconsistency between height and depth across the coastal region.
Height and depth should be physically continuous relative to a single reference datum across the coast within such new high-resolution DEMs. (N.B. The coastline is equal to neither the 'altitude-zero line' nor the 'depth-zero line'; it is defined locally as the long-term average of the highest tide level.) All of this said, we still need a lot of work on the ocean side. Global coverage with detailed bathymetric mapping is still poor. Seafloor imaging and other geophysical measurements and experiments should be organized and conducted in international and interdisciplinary ways more than ever. We always need greater technological advancement and application of this technology in the marine sciences, and more enthusiastic seagoing researchers as well. Recent seafloor mapping technology and quality, both in bathymetry and imagery, are very promising and even compare favorably with terrain mapping. At the poster session we discuss recent achievements and needs in seafloor mapping using several of the most up-to-date global and regional DEMs available to the science community.

  16. A global organism detection and monitoring system for non-native species

    USGS Publications Warehouse

    Graham, J.; Newman, G.; Jarnevich, C.; Shory, R.; Stohlgren, T.J.

    2007-01-01

    Harmful invasive non-native species are a significant threat to native species and ecosystems, and the costs associated with non-native species in the United States are estimated at over $120 billion per year. While some local or regional databases exist for some taxonomic groups, there are no effective geographic databases designed to detect and monitor all species of non-native plants, animals, and pathogens. We developed a web-based solution called the Global Organism Detection and Monitoring (GODM) system to provide real-time data from a broad spectrum of users on the distribution and abundance of non-native species, including attributes of their habitats for predictive spatial modeling of current and potential distributions. The four major subsystems of GODM provide dynamic links between the organism data, web pages, spatial data, and modeling capabilities. The core survey database tables for recording invasive species survey data are organized into three categories: "Where, Who & When, and What." Organisms are identified with Taxonomic Serial Numbers from the Integrated Taxonomic Information System. To allow users to immediately see a map of their data combined with other users' data, a custom geographic information system (GIS) Internet solution was required. The GIS solution provides an unprecedented level of flexibility in database access, allowing users to display maps of invasive species distributions or abundances based on various criteria including taxonomic classification (i.e., phylum or division, order, class, family, genus, species, subspecies, and variety), a specific project, a range of dates, and a range of attributes (percent cover, age, height, sex, weight). This is a significant paradigm shift from "map servers" to true Internet-based GIS solutions. The remainder of the system was created with a mix of commercial products, open source software, and custom software. Custom GIS libraries were created where required for processing large datasets, accessing the operating system, and using existing libraries in C++, R, and other languages to develop the tools to track harmful species in space and time. The GODM database and system are crucial for early detection and rapid containment of invasive species. © 2007 Elsevier B.V. All rights reserved.

  17. Large ensemble and large-domain hydrologic modeling: Insights from SUMMA applications in the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Ou, G.; Nijssen, B.; Nearing, G. S.; Newman, A. J.; Mizukami, N.; Clark, M. P.

    2016-12-01

    The Structure for Unifying Multiple Modeling Alternatives (SUMMA) provides a unifying modeling framework for process-based hydrologic modeling by defining a general set of conservation equations for mass and energy, with the capability to incorporate multiple choices for spatial discretizations and flux parameterizations. In this study, we provide a first demonstration of large-scale hydrologic simulations using SUMMA through an application to the Columbia River Basin (CRB) in the northwestern United States and Canada for a multi-decadal simulation period. The CRB is discretized into 11,723 hydrologic response units (HRUs) according to the U.S. Geological Survey Geospatial Fabric. The soil parameters are derived from the Natural Resources Conservation Service Soil Survey Geographic (SSURGO) Database. The land cover parameters are based on the National Land Cover Database from the year 2001 created by the Multi-Resolution Land Characteristics (MRLC) Consortium. The forcing data, including hourly air pressure, temperature, specific humidity, wind speed, precipitation, and shortwave and longwave radiation, are based on Phase 2 of the North American Land Data Assimilation System (NLDAS-2) and averaged for each HRU. The simulation results are compared to simulations with the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS). We are particularly interested in SUMMA's capability to mimic the behavior of the other two models through the selection of appropriate model parameterizations in SUMMA.

  18. Soil organic carbon stocks in Alaska estimated with spatial and pedon data

    USGS Publications Warehouse

    Bliss, Norman B.; Maursetter, J.

    2010-01-01

    Temperatures in high-latitude ecosystems are increasing faster than the average rate of global warming, which may lead to a positive feedback for climate change by increasing the respiration rates of soil organic C. If a positive feedback is confirmed, soil C will represent a source of greenhouse gases that is not currently considered in international protocols to regulate C emissions. We present new estimates of the stocks of soil organic C in Alaska, calculated by linking spatial and field data developed by the USDA NRCS. The spatial data are from the State Soil Geographic database (STATSGO), and the field and laboratory data are from the National Soil Characterization Database, also known as the pedon database. The new estimates range from 32 to 53 Pg of soil organic C for Alaska, formed by linking the spatial and field data using the attributes of Soil Taxonomy. For modelers, we recommend an estimation method based on taxonomic subgroups with interpolation for missing areas, which yields an estimate of 48 Pg. This is a substantial increase over a magnitude of 13 Pg estimated from only the STATSGO data as originally distributed in 1994, but the increase reflects different estimation methods and is not a measure of the change in C on the landscape. Pedon samples were collected between 1952 and 2002, so the results do not represent a single point in time. The linked databases provide an improved basis for modeling the impacts of climate change on net ecosystem exchange.
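
    The linkage described above amounts to multiplying a mean carbon density per taxonomic key (from pedon records) by the mapped area of that key (from the spatial data); the sketch below illustrates the aggregation with placeholder subgroups and values, not the study's numbers.

        # Area-weighted soil organic carbon stock from per-subgroup carbon
        # densities and mapped areas. All values are placeholders.
        pedon_c_density = {          # kg C per m2, averaged from pedon lab data
            "Typic Histoturbels": 95.0,
            "Typic Haplorthels": 35.0,
        }
        mapped_area_km2 = {          # area of each subgroup from spatial polygons
            "Typic Histoturbels": 120_000.0,
            "Typic Haplorthels": 310_000.0,
        }

        stock_pg = sum(
            pedon_c_density[sg] * mapped_area_km2[sg] * 1e6   # km2 -> m2
            for sg in pedon_c_density
        ) / 1e12                                              # kg -> Pg
        print(f"{stock_pg:.1f} Pg C")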

  19. Wildlife tracking data management: a new vision.

    PubMed

    Urbano, Ferdinando; Cagnacci, Francesca; Calenge, Clément; Dettki, Holger; Cameron, Alison; Neteler, Markus

    2010-07-27

    To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the following spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming importing/exporting phases. Recent technological advancements in tracking systems have made large, continuous, high-frequency datasets of wildlife behavioural data available, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals' environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management. We suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling.

  20. Wildlife tracking data management: a new vision

    PubMed Central

    Urbano, Ferdinando; Cagnacci, Francesca; Calenge, Clément; Dettki, Holger; Cameron, Alison; Neteler, Markus

    2010-01-01

    To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the following spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming importing/exporting phases. Recent technological advancements in tracking systems have made large, continuous, high-frequency datasets of wildlife behavioural data available, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals' environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management. We suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling. PMID:20566495

  1. Mining Claim Activity on Federal Land for the Period 1976 through 2003

    USGS Publications Warehouse

    Causey, J. Douglas

    2005-01-01

    Previous reports on mining claim records provided information and statistics (number of claims) using data from the U.S. Bureau of Land Management's (BLM) Mining Claim Recordation System. Since that time, BLM converted their mining claim data to the Legacy Rehost 2000 system (LR2000). This report describes a process to extract similar statistical data about mining claims from LR2000 data using different software and procedures than were used in the earlier work. A major difference between this process and the previous work is that every section that has a mining claim record is assigned a value. This is done by proportioning a claim among the sections in which it is recorded. Also, the mining claim data in this report include all BLM records, not just those for the western states. LR2000 mining claim database tables for the United States were provided by BLM in text format and imported into a Microsoft Access 2000 database in January 2004. Data from two tables in the BLM LR2000 database were summarized through a series of database queries to determine a number that represents active mining claims in each Public Land Survey (PLS) section for each of the years from 1976 to 2002. For most of the area, spatial databases are also provided. The spatial databases are only configured to work with the statistics provided in the non-spatial data files. They are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller (for example, 1:250,000).
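
    The proportioning rule described above can be sketched in a few lines: each claim contributes an equal share to every PLS section in which it is recorded, so per-section totals sum back to the number of claims. The records below are invented.

        # Proportion each claim among the sections it is recorded in.
        from collections import defaultdict

        claims = {                      # claim id -> sections it is recorded in
            "CLM001": ["T1N R2E sec 12"],
            "CLM002": ["T1N R2E sec 12", "T1N R2E sec 13"],
        }

        per_section = defaultdict(float)
        for sections in claims.values():
            share = 1.0 / len(sections)
            for sec in sections:
                per_section[sec] += share

        print(dict(per_section))   # {'...sec 12': 1.5, '...sec 13': 0.5}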

  2. Simultaneous detection of landmarks and key-frame in cardiac perfusion MRI using a joint spatial-temporal context model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Xue, Hui; Jolly, Marie-Pierre; Guetter, Christoph; Kellman, Peter; Hsu, Li-Yueh; Arai, Andrew; Zuehlsdorff, Sven; Littmann, Arne; Georgescu, Bogdan; Guehring, Jens

    2011-03-01

    Cardiac perfusion magnetic resonance imaging (MRI) has proven clinical significance in diagnosis of heart diseases. However, analysis of perfusion data is time-consuming, where automatic detection of anatomic landmarks and key-frames from perfusion MR sequences is helpful for anchoring structures and functional analysis of the heart, leading toward fully automated perfusion analysis. Learning-based object detection methods have demonstrated their capabilities to handle large variations of the object by exploring a local region, i.e., context. Conventional 2D approaches take into account spatial context only. Temporal signals in perfusion data present a strong cue for anchoring. We propose a joint context model to encode both spatial and temporal evidence. In addition, our spatial context is constructed not only based on the landmark of interest, but also the landmarks that are correlated in the neighboring anatomies. A discriminative model is learned through a probabilistic boosting tree. A marginal space learning strategy is applied to efficiently learn and search in a high dimensional parameter space. A fully automatic system is developed to simultaneously detect anatomic landmarks and key frames in both RV and LV from perfusion sequences. The proposed approach was evaluated on a database of 373 cardiac perfusion MRI sequences from 77 patients. Experimental results of a 4-fold cross validation show superior landmark detection accuracies of the proposed joint spatial-temporal approach to the 2D approach that is based on spatial context only. The key-frame identification results are promising.

  3. Some practicable applications of quadtree data structures/representation in astronomy

    NASA Technical Reports Server (NTRS)

    Pasztor, L.

    1992-01-01

    Development of the quadtree as a hierarchical data structuring technique for representing spatial data (points, regions, surfaces, lines, curves, volumes, etc.) has been motivated to a large extent by the storage requirements of images, maps, and other multidimensional (spatially structured) data. For many spatial algorithms, the time efficiency of quadtrees in execution may be as important as their space efficiency in storage. Briefly, the quadtree is a class of hierarchical data structures based on the recursive partition of a square region into quadrants and sub-quadrants down to a predefined limit. Beyond the wide applicability of quadtrees in image processing, spatial information analysis, and building digital databases (processes becoming ordinary for the astronomical community), there may be numerous further applications in astronomy. Some of these practicable applications based on quadtree representation of astronomical data are presented and suggested for further consideration. Examples are shown for the use of point as well as region quadtrees. Statistics of different leaf and non-leaf nodes (homogeneous and heterogeneous sub-quadrants, respectively) at different levels may provide useful information on the spatial structure of the astronomical data in question. By altering the principle guiding the decomposition process, different types of spatial data may be focused on. Finally, a sampling method based on quadtree representation of an image is proposed which may prove efficient in the elaboration of a sampling strategy in a region where observations were previously carried out with different resolution and/or in different bands.
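
    A minimal point-region quadtree illustrating the recursive partition of a square into quadrants until a predefined limit (here, a node capacity) might look like the following sketch; the inserted coordinates are random.

        # Point-region quadtree: each node splits into four children once it
        # holds more than `capacity` points.
        import random

        class QuadTree:
            def __init__(self, x, y, size, capacity=4):
                self.x, self.y, self.size, self.capacity = x, y, size, capacity
                self.points, self.children = [], None

            def insert(self, px, py):
                if not (self.x <= px < self.x + self.size and
                        self.y <= py < self.y + self.size):
                    return False                       # outside this quadrant
                if self.children is None:
                    if len(self.points) < self.capacity:
                        self.points.append((px, py))
                        return True
                    self._subdivide()
                return any(c.insert(px, py) for c in self.children)

            def _subdivide(self):
                h = self.size / 2
                self.children = [QuadTree(self.x + dx, self.y + dy, h, self.capacity)
                                 for dx in (0, h) for dy in (0, h)]
                for p in self.points:                  # push stored points down
                    any(c.insert(*p) for c in self.children)
                self.points = []

        tree = QuadTree(0.0, 0.0, 1.0)                 # unit square domain
        random.seed(0)
        for _ in range(100):
            tree.insert(random.random(), random.random())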

  4. Validation of multi-mission satellite altimetry for the Baltic Sea region

    NASA Astrophysics Data System (ADS)

    Kudryavtseva, Nadia; Soomere, Tarmo; Giudici, Andrea

    2016-04-01

    Currently, three sources of wave data are available for the research community, namely, buoys, modelling, and satellite altimetry. The buoy measurements provide high-quality time series of wave properties but they are deployed only in a few locations. Wave modelling covers large domains and provides good results for the open sea conditions. However, the limitation of modelling is that the results are dependent on wind quality and assumptions put into the model. Satellite altimetry in many occasions provides homogeneous data over large sea areas with an appreciable spatial and temporal resolution. The use of satellite altimetry is problematic in coastal areas and partially ice-covered water bodies. These limitations can be circumvented by careful analysis of the geometry of the basin, ice conditions and spatial coverage of each altimetry snapshot. In this poster, for the first time, we discuss a validation of 30 years of multi-mission altimetry covering the whole Baltic Sea. We analysed data from RADS database (Scharroo et al. 2013) which span from 1985 to 2015. To assess the limitations of the satellite altimeter data quality, the data were cross-matched with available wave measurements from buoys of the Swedish Meteorological and Hydrological Institute and Finnish Meteorological Institute. The altimeter-measured significant wave heights showed a very good correspondence with the wave buoys. We show that the data with backscatter coefficients more than 13.5 and high errors in significant wave heights and range should be excluded. We also examined the effect of ice cover and distance from the land on satellite altimetry measurements. The analysis of cross-matches between the satellite altimetry data and buoys' measurements shows that the data are only corrupted in the nearshore domain within 0.2 degrees from the coast. The statistical analysis showed a significant decrease in wave heights for sea areas with ice concentration more than 30 percent. We also checked and corrected the data for biases between different missions. This analysis provides a unique uniform database of satellite altimetry measurements over the whole Baltic Sea, which can be further used for finding biases in wave modelling and studies of wave climatology. The database is available upon request.
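
    The screening rules reported above can be expressed as a simple filter over a table of altimeter records; the field names and units below are illustrative rather than the actual RADS columns.

        # Drop records with backscatter above 13.5, ice concentration above 30 %,
        # or within 0.2 degrees of the coast (illustrative thresholds from the text).
        import numpy as np

        records = np.array([
            # swh_m, backscatter, ice_frac, coast_dist_deg
            [1.8, 11.0, 0.00, 1.20],
            [2.4, 14.2, 0.00, 0.90],   # rejected: high backscatter
            [0.9, 10.5, 0.45, 2.10],   # rejected: ice-covered sea area
            [1.1, 12.0, 0.00, 0.15],   # rejected: too close to the coast
        ])

        swh, sigma0, ice, dist = records.T
        keep = (sigma0 <= 13.5) & (ice <= 0.30) & (dist >= 0.2)
        print(records[keep])           # only the first record survives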

  5. gPhoton: The GALEX Photon Data Archive

    NASA Astrophysics Data System (ADS)

    Million, Chase; Fleming, Scott W.; Shiao, Bernie; Seibert, Mark; Loyd, Parke; Tucker, Michael; Smith, Myron; Thompson, Randy; White, Richard L.

    2016-12-01

    gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive at Space Telescope. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.
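
    The gPhoton API itself is not reproduced here; the sketch below only illustrates the underlying operation of turning time-tagged photon events into a light curve at a user-defined temporal scale, using synthetic timestamps.

        # Bin time-tagged photon events into a light curve (counts per second).
        import numpy as np

        rng = np.random.default_rng(5)
        t0, duration = 0.0, 1600.0                  # seconds of exposure
        photon_times = np.sort(rng.uniform(t0, t0 + duration, size=50_000))

        bin_width = 10.0                            # seconds, user-defined scale
        edges = np.arange(t0, t0 + duration + bin_width, bin_width)
        counts, _ = np.histogram(photon_times, bins=edges)
        rate = counts / bin_width                   # count rate per bin

        print(edges[:-1][rate.argmax()], rate.max())  # bin with the highest rate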

  6. A Spatially-Registered, Massively Parallelised Data Structure for Interacting with Large, Integrated Geodatasets

    NASA Astrophysics Data System (ADS)

    Irving, D. H.; Rasheed, M.; O'Doherty, N.

    2010-12-01

    The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; scalability to large and complex datasets; and weak implementation and integration of metadata. High-performance hardware such as parallelised storage and Relational Database Management Systems (RDBMS) has long been exploited in many solutions, but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-realtime data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on a Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008-compliant RDBMS. We propose an LDM comprising a 3D Earth model that is decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point and at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating point arithmetic. Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network interconnect. This is a shared-nothing architecture where resources are managed within each cluster, unlike most other RDBMSs. Data are accessed on this architecture by their primary index values, which utilise the hashing algorithm for point-to-point access. The hashing algorithm’s main role is the efficient distribution of data across the clusters based on the primary index. In this study we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT) and (x,y,z) space. In the SRDS the primary index is a composite column index of (x,y) to avoid invoking time-consuming full table scans, as is the case in tree-based systems. This means that data access is isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD, yielding true linear performance up to the petabyte scale, with hardware scaling presenting the primary limiting factor. Our architecture and LDM promote: realtime interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as an HPC cloud service.
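
    A hedged sketch of the level-of-detail scheme described above: halve the bin size until it falls below the positional error of a data point, then derive integer (x, y) bin indices at that level. This illustrates the power-of-two idea only, not the SRDS implementation; the extent and error values are invented.

        # Power-of-two level-of-detail binning for a spatially registered point.
        def lod_for_error(extent, error):
            """Number of halvings of `extent` needed so the bin size < `error`."""
            lod, bin_size = 0, float(extent)
            while bin_size >= error:
                bin_size /= 2.0
                lod += 1
            return lod, bin_size

        def bin_index(x, y, origin, extent, lod):
            """Integer bin coordinates of (x, y) at the given level of detail."""
            n = 2 ** lod
            cell = extent / n
            return int((x - origin[0]) // cell), int((y - origin[1]) // cell)

        extent = 1024.0                    # model extent in metres (illustrative)
        lod, cell = lod_for_error(extent, error=0.5)
        print(lod, cell)                   # 12 halvings -> 0.25 m cells (< 0.5 m error)
        print(bin_index(300.25, 17.8, origin=(0.0, 0.0), extent=extent, lod=lod))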

  7. Remote science support during MARS2013: testing a map-based system of data processing and utilization for future long-duration planetary missions.

    PubMed

    Losiak, Anna; Gołębiowska, Izabela; Orgel, Csilla; Moser, Linda; MacArthur, Jane; Boyd, Andrea; Hettrich, Sebastian; Jones, Natalie; Groemer, Gernot

    2014-05-01

    MARS2013 was an integrated Mars analog field simulation in eastern Morocco performed by the Austrian Space Forum between February 1 and 28, 2013. The purpose of this paper is to discuss the system of data processing and utilization adopted by the Remote Science Support (RSS) team during this mission. The RSS team procedures were designed to optimize operational efficiency of the Flightplan, field crew, and RSS teams during a long-term analog mission with an introduced 10 min time delay in communication between "Mars" and Earth. The RSS workflow was centered on a single-file, easy-to-use, spatially referenced database that included all the basic information about the conditions at the site of study, as well as all previous and planned activities. This database was prepared in Google Earth software. The lessons learned from MARS2013 RSS team operations are as follows: (1) using a spatially referenced database is an efficient way of data processing and data utilization in a long-term analog mission with a large amount of data to be handled, (2) mission planning based on iterations can be efficiently supported by preparing suitability maps, (3) the process of designing cartographical products should start early in the planning stages of a mission and involve representatives of all teams, (4) all team members should be trained in usage of cartographical products, (5) technical problems (e.g., usage of a geological map while wearing a space suit) should be taken into account when planning a work flow for geological exploration, (6) a system that helps the astronauts to efficiently orient themselves in the field should be designed as part of future analog studies.

  8. Development of an Independent Global Land Cover Validation Dataset

    NASA Astrophysics Data System (ADS)

    Sulla-Menashe, D. J.; Olofsson, P.; Woodcock, C. E.; Holden, C.; Metcalfe, M.; Friedl, M. A.; Stehman, S. V.; Herold, M.; Giri, C.

    2012-12-01

    Accurate information on the global distribution and dynamics of land cover is critical for a large number of global change science questions. A growing number of land cover products have been produced at regional to global scales, but the uncertainty in these products and the relative strengths and weaknesses among available products are poorly characterized. To address this limitation we are compiling a database of high spatial resolution imagery to support international land cover validation studies. Validation sites were selected based on a probability sample, and may therefore be used to estimate statistically defensible accuracy statistics and associated standard errors. Validation site locations were identified using a stratified random design based on 21 strata derived from an intersection of Köppen climate classes and a population density layer. In this way, the two major sources of global variation in land cover (climate and human activity) are explicitly included in the stratification scheme. At each site we are acquiring high spatial resolution (< 1 m) satellite imagery for 5-km x 5-km blocks. The response design uses an object-oriented hierarchical legend that is compatible with the UN FAO Land Cover Classification System. Using this response design, we are classifying each site using a semi-automated algorithm that blends image segmentation with a supervised RandomForest classification algorithm. In the long run, the validation site database is designed to support international efforts to validate land cover products. To illustrate, we use the site database to validate the MODIS Collection 4 Land Cover product, providing a prototype for validating the VIIRS Surface Type Intermediate Product scheduled to start operational production early in 2013. As part of our analysis we evaluate sources of error in coarse resolution products, including semantic issues related to the class definitions, mixed pixels, and poor spectral separation between classes.
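    The semi-automated classification step described above pairs image segmentation with a supervised RandomForest classifier. The sketch below shows only the classifier stage, using scikit-learn on synthetic per-segment features; the feature set, class count, and data split are assumptions, not the project's actual response design.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic per-segment features (e.g. mean reflectance per band) and
    # legend labels; these stand in for the labelled image segments.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 6))            # 500 segments, 6 spectral features
    y = rng.integers(0, 4, size=500)         # 4 land-cover classes (illustrative)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```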

  9. Development of a database for the verification of trans-ionospheric remote sensing systems

    NASA Astrophysics Data System (ADS)

    Leitinger, R.

    2005-08-01

    Remote sensing systems need verification by means of in-situ data or by means of model data. In the case of ionospheric occultation inversion, ionosphere tomography and other imaging methods on the basis of satellite-to-ground or satellite-to-satellite electron content, the availability of in-situ data with adequate spatial and temporal co-location is very rare indeed. Therefore the method of choice for verification is to produce artificial electron content data with realistic properties, subject these data to the inversion/retrieval method, compare the results with model data and apply a suitable type of “goodness of fit” classification. Inter-comparison of inversion/retrieval methods should be done with sets of artificial electron contents in a “blind” (or even “double blind”) way. The setup of a relevant database for the COST 271 Action is described. One part of the database will be made available to everyone interested in testing inversion/retrieval methods. The artificial electron content data are calculated by means of large-scale models that are “modulated” in a realistic way to include smaller scale and dynamic structures, like troughs and traveling ionospheric disturbances.
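    A minimal sketch of the "modulation" idea described above: a smooth, large-scale background electron-content model perturbed by a travelling ionospheric disturbance. All parameter values and the functional forms are illustrative assumptions, not values taken from the COST 271 database.

    ```python
    import numpy as np

    def artificial_vtec(lat, t, base=30.0, amp=0.05, wavelength=300.0, period=1800.0):
        """Background vertical TEC (in TECU) modulated by a travelling
        ionospheric disturbance: a plane wave moving along latitude (deg)
        in time (s). Parameters are illustrative only."""
        background = base * (1.0 + 0.3 * np.cos(np.radians(lat)))   # smooth large-scale model
        tid = amp * background * np.sin(2 * np.pi * (lat * 111.0 / wavelength - t / period))
        return background + tid

    lats = np.linspace(30.0, 70.0, 81)          # degrees north
    vtec_snapshot = artificial_vtec(lats, t=600.0)
    print(vtec_snapshot[:5])
    ```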

  10. Scaling Semantic Graph Databases in Size and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Castellana, Vito G.; Villa, Oreste

    In this paper we present SGEM, a full software system for accelerating large-scale semantic graph databases on commodity clusters. Unlike current approaches, SGEM addresses semantic graph databases by employing graph methods at all levels of the stack. On one hand, this allows exploiting the space efficiency of graph data structures and the inherent parallelism of graph algorithms. These features adapt well to the increasing system memory and core counts of modern commodity clusters. On the other hand, commodity systems are optimized for regular computation and batched data transfers, while graph methods usually are irregular and generate fine-grained data accesses with poor spatial and temporal locality. Our framework comprises a SPARQL-to-data-parallel-C compiler, a library of parallel graph methods and a custom, multithreaded runtime system. We introduce our stack, motivate its advantages with respect to other solutions and show how we solved the challenges posed by irregular behaviors. We present the results of our software stack on the Berlin SPARQL benchmarks with datasets up to 10 billion triples (a triple corresponds to a graph edge), demonstrating scaling in dataset size and in performance as more nodes are added to the cluster.

  11. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images with a resolution up to 4,800×2,400 pixels. Because of the size of the database, high reliability as well as high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
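    A minimal sketch of a colour-based area-limitation step of the kind described for the first stage: restrict the search to pixels whose red channel clearly dominates the others. The thresholds and the pure-NumPy formulation are assumptions for illustration; the published architecture combines colour with additional spatial cues.

    ```python
    import numpy as np

    def red_candidate_mask(image, ratio=1.4, min_value=60):
        """Restrict the search area to pixels whose red channel clearly
        dominates the other channels, a crude colour-based area-limitation
        step. `image` is an (H, W, 3) uint8 RGB array; thresholds are
        illustrative."""
        img = image.astype(np.float32)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return (r > min_value) & (r > ratio * g) & (r > ratio * b)

    # Fraction of the frame surviving area limitation on a random test image:
    frame = np.random.randint(0, 256, size=(600, 800, 3), dtype=np.uint8)
    print(red_candidate_mask(frame).mean())
    ```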

  12. HodDB: Design and Analysis of a Query Processor for Brick.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fierro, Gabriel; Culler, David

    Brick is a recently proposed metadata schema and ontology for describing building components and the relationships between them. It represents buildings as directed labeled graphs using the RDF data model. Using the SPARQL query language, building-agnostic applications query a Brick graph to discover the set of resources and relationships they require to operate. Latency-sensitive applications, such as user interfaces, demand response and model-predictive control, require fast queries, conventionally less than 100 ms. We benchmark a set of popular open-source and commercial SPARQL databases against three real Brick models using seven application queries and find that none of them meet this performance target. This lack of performance can be attributed to design decisions that optimize for queries over large graphs consisting of billions of triples, but give poor spatial locality and join performance on the small, dense graphs typical of Brick. We present the design and evaluation of HodDB, an RDF/SPARQL database for Brick built over a node-based index structure. HodDB performs Brick queries 3-700x faster than leading SPARQL databases and consistently meets the 100 ms threshold, enabling the portability of important latency-sensitive building applications.
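    A simple way to reproduce the kind of latency check described above is to time a Brick-style SPARQL query against a local model. The sketch uses rdflib as a stand-in SPARQL engine (HodDB itself is a separate system); the Turtle file name, the namespace version, and the query are assumptions.

    ```python
    import time
    import rdflib

    # Hypothetical Brick model file; rdflib stands in for the SPARQL engine
    # being benchmarked.
    g = rdflib.Graph()
    g.parse("building.ttl", format="turtle")   # assumed local Brick model

    query = """
    PREFIX brick: <https://brickschema.org/schema/1.0.3/Brick#>
    SELECT ?sensor WHERE { ?sensor a brick:Temperature_Sensor . }
    """

    start = time.perf_counter()
    rows = list(g.query(query))
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"{len(rows)} results in {elapsed_ms:.1f} ms (target: < 100 ms)")
    ```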

  13. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping 'cells' by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce 'kernels' that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 billion rows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
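    A minimal sketch of the horizontal partitioning idea described above: assigning catalog rows to (lon, lat, t) cells and storing each cell as compressed HDF5 datasets with h5py. The cell sizes, column names, and flat group layout are illustrative assumptions, not LSD's actual on-disk format (which also includes overlap margins and column groups).

    ```python
    import numpy as np
    import h5py

    def cell_id(lon, lat, t, dlon=10.0, dlat=10.0, dt=30.0):
        """Assign each row to a (lon, lat, t) cell; cell sizes in degrees and
        days are illustrative, not LSD's actual choices."""
        return (np.floor(lon / dlon).astype(int),
                np.floor(lat / dlat).astype(int),
                np.floor(t / dt).astype(int))

    # Synthetic catalog rows: positions and observation epochs.
    rng = np.random.default_rng(1)
    lon, lat = rng.uniform(0, 360, 10_000), rng.uniform(-90, 90, 10_000)
    t = rng.uniform(0, 365, 10_000)
    ix, iy, it = cell_id(lon, lat, t)

    with h5py.File("catalog_cells.h5", "w") as f:
        for key in set(zip(ix, iy, it)):
            sel = (ix == key[0]) & (iy == key[1]) & (it == key[2])
            grp = f.create_group(f"cell_{key[0]}_{key[1]}_{key[2]}")
            grp.create_dataset("lon", data=lon[sel], compression="gzip")
            grp.create_dataset("lat", data=lat[sel], compression="gzip")
            grp.create_dataset("t", data=t[sel], compression="gzip")
    ```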

  14. Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies

    NASA Astrophysics Data System (ADS)

    Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu

    2015-09-01

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations of Mars collected by the Mars Express orbiter demonstrate the effectiveness of the proposed model.

  15. Wildfire exposure analysis on the national forests in the Pacific Northwest, USA.

    PubMed

    Ager, Alan A; Buonopane, Michelle; Reger, Allison; Finney, Mark A

    2013-06-01

    We analyzed wildfire exposure for key social and ecological features on the national forests in Oregon and Washington. The forests contain numerous urban interfaces, old growth forests, recreational sites, and habitat for rare and endangered species. Many of these resources are threatened by wildfire, especially in the east Cascade Mountains fire-prone forests. The study illustrates the application of wildfire simulation for risk assessment where the major threat is from large and rare naturally ignited fires, versus many previous studies that have focused on risk driven by frequent and small fires from anthropogenic ignitions. Wildfire simulation modeling was used to characterize potential wildfire behavior in terms of annual burn probability and flame length. Spatial data on selected social and ecological features were obtained from Forest Service GIS databases and elsewhere. The potential wildfire behavior was then summarized for each spatial location of each resource. The analysis suggested strong spatial variation in both burn probability and conditional flame length for many of the features examined, including biodiversity, urban interfaces, and infrastructure. We propose that the spatial patterns in modeled wildfire behavior could be used to improve existing prioritization of fuel management and wildfire preparedness activities within the Pacific Northwest region. © 2012 Society for Risk Analysis.
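    A minimal sketch of the summarisation step described above: looking up modeled annual burn probability and conditional flame length at the grid cells occupied by a resource and computing summary statistics. The rasters, indices, and statistics below are synthetic and illustrative, not the study's actual simulation outputs.

    ```python
    import numpy as np

    def summarize_exposure(burn_prob, flame_len, rows, cols):
        """Look up modeled annual burn probability and conditional flame
        length at the grid cells occupied by a resource (e.g. an
        urban-interface polygon rasterised to row/col indices)."""
        bp = burn_prob[rows, cols]
        fl = flame_len[rows, cols]
        return {"mean_burn_prob": float(bp.mean()),
                "mean_flame_length_m": float(fl.mean()),
                "p95_flame_length_m": float(np.percentile(fl, 95))}

    # Synthetic rasters of simulation outputs and a toy resource footprint:
    rng = np.random.default_rng(7)
    burn_prob = rng.uniform(0, 0.05, size=(500, 500))
    flame_len = rng.gamma(2.0, 1.5, size=(500, 500))
    rows, cols = rng.integers(0, 500, 200), rng.integers(0, 500, 200)
    print(summarize_exposure(burn_prob, flame_len, rows, cols))
    ```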

  16. Spatial and Temporal Patterns of Unburned Areas within Fire Perimeters in the Northwestern United States from 1984 to 2014

    NASA Astrophysics Data System (ADS)

    Meddens, A. J.; Kolden, C.; Lutz, J. A.; Abatzoglou, J. T.; Hudak, A. T.

    2016-12-01

    Recently, there has been concern about the increasing extent and severity of wildfires across the globe given rapid climate change. Areas that do not burn within fire perimeters can act as fire refugia, providing (1) protection from the detrimental effects of the fire, (2) seed sources, and (3) post-fire habitat on the landscape. However, recent studies have mainly focused on the higher end of the burn severity spectrum whereas the lower end of the burn severity spectrum has been largely ignored. We developed a spatially explicit database for 2,200 fires across the inland northwestern USA, delineating unburned areas within fire perimeters from 1984 to 2014. We used 1,600 Landsat scenes with one or two scenes before and one or two scenes after each fire to capture the unburned proportion of the fire. Subsequently, we characterized the spatial and temporal patterns of unburned areas and related the unburned proportion to interannual climate variability. The overall classification accuracy for detecting unburned locations was 89.2%, using a 10-fold cross-validation classification tree approach in combination with 719 randomly located field plots. The unburned proportion ranged from 2% to 58% with an average of 19% for a select number of fires. We find that using both an immediate post-fire image and a one-year post-fire image improves classification accuracy of unburned islands over using just a single post-fire image. The spatial characteristics of the unburned islands differ between forested and non-forested regions, with a larger amount of unburned area within non-forest. In addition, we show that trends in the unburned proportion are related primarily to concurrent climatic drought conditions across the entire region. This database is important for subsequent analyses of fire refugia prioritization, vegetation recovery studies, ecosystem resilience, and forest management to facilitate unburned islands through fuel breaks, prescribed burning, and fire suppression strategies.
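    The accuracy assessment described above rests on a 10-fold cross-validated classification tree against 719 field plots. A minimal scikit-learn sketch of that procedure on synthetic spectral predictors (stand-ins for the Landsat-derived metrics) is shown below.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the 719 field plots: spectral change metrics
    # (e.g. from pre/post-fire Landsat pairs) and burned/unburned labels.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(719, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=719) > 0).astype(int)

    tree = DecisionTreeClassifier(max_depth=5, random_state=0)
    scores = cross_val_score(tree, X, y, cv=10, scoring="accuracy")
    print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```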

  17. Application of 3D Spatio-Temporal Data Modeling, Management, and Analysis in DB4GEO

    NASA Astrophysics Data System (ADS)

    Kuper, P. V.; Breunig, M.; Al-Doori, M.; Thomsen, A.

    2016-10-01

    Many of today's worldwide challenges such as climate change, water supply and transport systems in cities or movements of crowds need spatio-temporal data to be examined in detail. Thus the number of studies in 3D space dealing with geospatial objects moving in space and time or even changing their shapes in time will rapidly increase in the future. Prominent spatio-temporal applications are subsurface reservoir modeling, water supply after seawater desalination and the development of transport systems in mega cities. All of these applications generate large spatio-temporal data sets. However, the modeling, management and analysis of 3D geo-objects with changing shape and attributes in time is still a challenge for geospatial database architectures. In this article we describe the application of concepts for the modeling, management and analysis of 2.5D and 3D spatial plus 1D temporal objects implemented in DB4GeO, our service-oriented geospatial database architecture. An example application with spatio-temporal data of a landfill near the city of Osnabrück in Germany demonstrates the usage of the concepts. Finally, an outlook on our future research focusing on new applications with big data analysis in three spatial plus one temporal dimension in the United Arab Emirates, especially the Dubai area, is given.

  18. 3D WebGIS and Visualization Issues for Architectures and Large Sites

    NASA Astrophysics Data System (ADS)

    De Amicis, R.; Conti, G.; Girardi, G.; Andreolli, M.

    2011-09-01

    Traditionally, within the field of archaeology and, more generally, within the cultural heritage domain, Geographical Information Systems (GIS) have been mostly used as support to cataloguing activities, essentially operating as gateways to large geo-referenced archives of specialised cultural heritage information. Additionally, GIS have proved essential in helping cultural heritage institutions improve the management of their historical information, providing the means for detection of otherwise hard-to-discover spatial patterns and the computational tools necessary to perform spatial clustering, proximity and orientation analysis. This paper presents a platform developed to address both of the aforementioned issues by allowing geo-referenced cataloguing of multimedia resources of cultural relevance as well as user-friendly access through an interactive 3D geobrowser which operates as a single point of access to the available digital repositories. The solution has been showcased in the context of "Festival dell'economia" (the Fair of Economics), a major event that recently took place in Trento, Italy, where it allowed visitors to interactively access an extremely large repository of information, as well as its metadata, available across the area of the Autonomous Province of Trento, in Italy. Within the event, this extremely large repository was made accessible, via the network, through web services, from a 3D interactive geobrowser developed by the authors. The 3D scene was enriched with a number of Points of Interest (POIs) linking to information available within various databases. The software package was deployed with a complex hardware set-up composed of a large composite panoramic screen covering a horizontal field of view of 240 degrees.

  19. Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information

    EPA Science Inventory

    The National Land Cover Database (NLCD) provides nationwide data on land cover and land cover change at the native 30-m spatial resolution of the Landsat Thematic Mapper (TM). The database is designed to provide five-year cyclical updating of United States land cover and associat...

  20. Regional spatial-temporal spread of citrus huanglongbing is affected by rain in Florida.

    PubMed

    Shimwela, Mpoki; Schubert, Timothy S; Albritton, Matthew; Halbert, Susan E; Jones, Debra J; Sun, Xiaoan; Roberts, Pamela; Singer, Burton; Lee, Wen Suk; Jones, Jeffrey B; Ploetz, Randy; van Bruggen, Ariena H C

    2018-06-06

    Citrus huanglongbing (HLB), associated with Candidatus Liberibacter asiaticus (Las), disseminated by Asian Citrus Psyllid (ACP), has devastated citrus in Florida since 2005. Data on HLB occurrence were stored in databases (2005-2012). Cumulative HLB-positive citrus blocks were subjected to kernel density analysis and kriging. Relative disease incidence per county was calculated by dividing HLB numbers by relative tree numbers and maximum incidence. Spatio-temporal HLB distributions were correlated with weather. Relative HLB incidence correlated positively with rainfall. The focus expansion rate was 1626 m month-1, similar to that in Brazil. Relative HLB incidence in counties with primarily large groves increased at a lower rate (0.24 year-1) than in counties with smaller groves in hotspot areas (0.67 year-1), confirming reports that large-scale HLB management may slow epidemic progress.

  1. The spatial distribution and evolution characteristics of North Atlantic cyclones

    NASA Astrophysics Data System (ADS)

    Dacre, H.; Gray, S.

    2009-09-01

    Mid-latitude cyclones play a large role in determining the day-to-day weather conditions in western Europe through their associated wind and precipitation patterns. Thus, their typical spatial and evolution characteristics are of great interest to meteorologists, insurance and risk management companies. In this study a feature tracking algorithm is applied to a cyclone database produced using the Hewson-method of cyclone identification, based on low-level gradients of wet-bulb potential temperature, to produce a climatology of mid-latitude cyclones. The aim of this work is to compare the cyclone track and density statistics found in this study with previous climatologies and to determine reasons for any differences. This method is found to compare well with other cyclone identification methods; the north Atlantic storm track is reproduced along with the major regions of genesis. Differences are attributed to cyclone lifetime and strength thresholds, dataset resolution and cyclone identification and tracking methods. Previous work on cyclone development has been largely limited to case studies as opposed to analysis of climatological data, and does not distinguish between the different stages of cyclone evolution. The cyclone database used in this study allows cyclone characteristics to be tracked throughout the cyclone lifecycle. This enables the evaluation of the characteristics of cyclone evolution for systems forming in different genesis regions and a calculation of the spatial distribution and evolution of these characteristics in composite cyclones. It was found that most of the cyclones that cross western Europe originate in the east Atlantic where the baroclinicity and sea surface temperature gradients are weak compared to the west Atlantic. East Atlantic cyclones also have higher low-level relative vorticity and lower mean sea level pressure at their genesis point than west Atlantic cyclones. This is consistent with the hypothesis that they are secondary cyclones developing on the trailing fronts of pre-existing 'parent' cyclones. Furthermore, it was found that a higher proportion of east Atlantic cyclones are type C cyclones with strong upper-level forcing but weak low-level forcing suggesting that latent energy plays a more important role in their intensification than for west Atlantic cyclones.

  2. The spatial distribution and evolution characteristics of North Atlantic cyclones

    NASA Astrophysics Data System (ADS)

    Dacre, H.; Gray, S.

    2009-04-01

    Mid-latitude cyclones play a large role in determining the day-to-day weather conditions in western Europe through their associated wind and precipitation patterns. Thus, their typical spatial and evolution characteristics are of great interest to meteorologists, insurance and risk management companies. In this study a feature tracking algorithm is applied to a cyclone database produced using the Hewson-method of cyclone identification, based on low-level gradients of wet-bulb potential temperature, to produce a climatology of mid-latitude cyclones. The aim of this work is to compare the cyclone track and density statistics found in this study with previous climatologies. This method is found to compare well with other cyclone identification methods; the north Atlantic storm track is reproduced along with the major regions of genesis. Differences are attributed to cyclone lifetime and strength thresholds, dataset resolution and cyclone identification and tracking methods. Previous work on cyclone development has been largely limited to case studies as opposed to analysis of climatological data, and does not distinguish between the different stages of cyclone evolution. The cyclone database used in this study allows cyclone characteristics to be tracked throughout the cyclone lifecycle. This enables the evaluation of the characteristics of cyclone evolution for systems forming in different genesis regions and a calculation of the spatial distribution and evolution of these characteristics in composite cyclones. It was found that most of the cyclones that cross western Europe originate in the east Atlantic where the baroclinicity and sea surface temperature gradients are weak compared to the west Atlantic. East Atlantic cyclones also have higher low-level relative vorticity and lower mean sea level pressure at their genesis point than west Atlantic cyclones. This is consistent with the hypothesis that they are secondary cyclones developing on the trailing fronts of pre-existing 'parent' cyclones. Furthermore, it was found that a higher proportion of east Atlantic cyclones are type C cyclones with strong upper-level forcing but weak low-level forcing suggesting that latent energy plays a more important role in their intensification than for west Atlantic cyclones.

  3. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the fact that many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web service. The method applies asynchronous invocation of web service and dynamic invocation of web service to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web service and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image will be returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
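    A minimal sketch of the dispatcher pattern described above: invoking several web map services in parallel and overlaying the returned transparent images into one composite. The endpoint URLs are hypothetical, and requests/Pillow stand in for whatever HTTP and imaging stack the original system used.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from io import BytesIO

    import requests
    from PIL import Image

    # Hypothetical WMS GetMap endpoints registered with the dispatcher; each
    # is expected to return a transparent PNG for the same bounding box/size.
    SERVICE_URLS = [
        "http://gis1.example.org/wms?REQUEST=GetMap&LAYERS=roads&FORMAT=image/png&TRANSPARENT=true",
        "http://gis2.example.org/wms?REQUEST=GetMap&LAYERS=rivers&FORMAT=image/png&TRANSPARENT=true",
    ]

    def fetch_map(url):
        """Invoke one web map service and decode the returned image."""
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        return Image.open(BytesIO(resp.content)).convert("RGBA")

    def dispatch_and_overlay(urls):
        """Invoke all registered services in parallel (asynchronous dispatch)
        and overlay the returned transparent images into one composite."""
        with ThreadPoolExecutor(max_workers=len(urls)) as pool:
            images = list(pool.map(fetch_map, urls))
        composite = images[0]
        for img in images[1:]:
            composite = Image.alpha_composite(composite, img)
        return composite

    # composite = dispatch_and_overlay(SERVICE_URLS)  # then serve to the client
    ```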

  4. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2009-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the fact that many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web service. The method applies asynchronous invocation of web service and dynamic invocation of web service to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web service and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image will be returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.

  5. Application of China's National Forest Continuous Inventory database.

    PubMed

    Xie, Xiaokui; Wang, Qingli; Dai, Limin; Su, Dongkai; Wang, Xinchuang; Qi, Guang; Ye, Yujing

    2011-12-01

    The maintenance of a timely, reliable and accurate spatial database on current forest ecosystem conditions and changes is essential to characterize and assess forest resources and support sustainable forest management. Information for such a database can be obtained only through a continuous forest inventory. The National Forest Continuous Inventory (NFCI) is the first level of China's three-tiered inventory system. The NFCI is administered by the State Forestry Administration; data are acquired by five inventory institutions around the country. Several important components of the database include land type, forest classification and age-class/age-group. The NFCI database in China is constructed based on 5-year inventory periods, resulting in some of the data not being timely when reports are issued. To address this problem, a forest growth simulation model has been developed to update the database for years between the periodic inventories. In order to aid in forest plan design and management, a three-dimensional virtual reality system of forest landscapes for selected units in the database (compartment or sub-compartment) has also been developed based on Virtual Reality Modeling Language. In addition, a transparent internet publishing system for a spatial database based on open source WebGIS (UMN MapServer) has been designed and utilized to enhance public understanding and encourage free participation of interested parties in the development, implementation, and planning of sustainable forest management.

  6. A global database of ant species abundances

    USGS Publications Warehouse

    Gibb, Heloise; Dunn, Rob R.; Sanders, Nathan J.; Grossman, Blair F.; Photakis, Manoli; Abril, Silvia; Agosti, Donat; Andersen, Alan N.; Angulo, Elena; Armbrecht, Ingre; Arnan, Xavier; Baccaro, Fabricio B.; Bishop, Tom R.; Boulay, Raphael; Bruhl, Carsten; Castracani, Cristina; Cerda, Xim; Del Toro, Israel; Delsinne, Thibaut; Diaz, Mireia; Donoso, David A.; Ellison, Aaron M.; Enriquez, Martha L.; Fayle, Tom M.; Feener Jr., Donald H.; Fisher, Brian L.; Fisher, Robert N.; Fitpatrick, Matthew C.; Gomez, Cristanto; Gotelli, Nicholas J.; Gove, Aaron; Grasso, Donato A.; Groc, Sarah; Guenard, Benoit; Gunawardene, Nihara; Heterick, Brian; Hoffmann, Benjamin; Janda, Milan; Jenkins, Clinton; Kaspari, Michael; Klimes, Petr; Lach, Lori; Laeger, Thomas; Lattke, John; Leponce, Maurice; Lessard, Jean-Philippe; Longino, John; Lucky, Andrea; Luke, Sarah H.; Majer, Jonathan; McGlynn, Terrence P.; Menke, Sean; Mezger, Dirk; Mori, Alessandra; Moses, Jimmy; Munyai, Thinandavha Caswell; Pacheco, Renata; Paknia, Omid; Pearce-Duvet, Jessica; Pfeiffer, Martin; Philpott, Stacy M.; Resasco, Julian; Retana, Javier; Silva, Rogerio R.; Sorger, Magdalena D.; Souza, Jorge; Suarez, Andrew V.; Tista, Melanie; Vasconcelos, Heraldo L.; Vonshak, Merav; Weiser, Michael D.; Yates, Michelle; Parr, Catherine L.

    2017-01-01

    What forces structure ecological assemblages? A key limitation to general insights about assemblage structure is the availability of data that are collected at a small spatial grain (local assemblages) and a large spatial extent (global coverage). Here, we present published and unpublished data from 51,388 ant abundance and occurrence records of more than 2693 species and 7953 morphospecies from local assemblages collected at 4212 locations around the world. Ants were selected because they are diverse and abundant globally, comprise a large fraction of animal biomass in most terrestrial communities, and are key contributors to a range of ecosystem functions. Data were collected between 1949 and 2014, and include, for each geo-referenced sampling site, both the identity of the ants collected and details of sampling design, habitat type and degree of disturbance. The aim of compiling this dataset was to provide comprehensive species abundance data in order to test relationships between assemblage structure and environmental and biogeographic factors. Data were collected using a variety of standardised methods, such as pitfall and Winkler traps, and will be valuable for studies investigating large-scale forces structuring local assemblages. Understanding such relationships is particularly critical under current rates of global change. We encourage authors holding additional data on systematically collected ant assemblages, especially those in dry and cold, and remote areas, to contact us and contribute their data to this growing dataset.

  7. Patterns of Spatial Variation of Assemblages Associated with Intertidal Rocky Shores: A Global Perspective

    PubMed Central

    Cruz-Motta, Juan José; Miloslavich, Patricia; Palomo, Gabriela; Iken, Katrin; Konar, Brenda; Pohle, Gerhard; Trott, Tom; Benedetti-Cecchi, Lisandro; Herrera, César; Hernández, Alejandra; Sardi, Adriana; Bueno, Andrea; Castillo, Julio; Klein, Eduardo; Guerra-Castro, Edlin; Gobin, Judith; Gómez, Diana Isabel; Riosmena-Rodríguez, Rafael; Mead, Angela; Bigatti, Gregorio; Knowlton, Ann; Shirayama, Yoshihisa

    2010-01-01

    Assemblages associated with intertidal rocky shores were examined for large scale distribution patterns with specific emphasis on identifying latitudinal trends of species richness and taxonomic distinctiveness. Seventy-two sites distributed around the globe were evaluated following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). There were no clear patterns of standardized estimators of species richness along latitudinal gradients or among Large Marine Ecosystems (LMEs); however, a strong latitudinal gradient in taxonomic composition (i.e., proportion of different taxonomic groups in a given sample) was observed. Environmental variables related to natural influences were strongly related to the distribution patterns of the assemblages on the LME scale, particularly photoperiod, sea surface temperature (SST) and rainfall. In contrast, no environmental variables directly associated with human influences (with the exception of the inorganic pollution index) were related to assemblage patterns among LMEs. Correlations of the natural assemblages with either latitudinal gradients or environmental variables were equally strong suggesting that neither neutral models nor models based solely on environmental variables sufficiently explain spatial variation of these assemblages at a global scale. Despite the data shortcomings in this study (e.g., unbalanced sample distribution), we show the importance of generating biological global databases for the use in large-scale diversity comparisons of rocky intertidal assemblages to stimulate continued sampling and analyses. PMID:21179546

  8. Land-use change, deforestation, and peasant farm systems: A case study of Mexico's Southern Yucatan Peninsular Region

    NASA Astrophysics Data System (ADS)

    Vance, Colin James

    This dissertation develops spatially explicit econometric models by linking Thematic Mapper (TM) satellite imagery with household survey data to test behavioral propositions of semi-subsistence farmers in the Southern Yucatan Peninsular Region (SYPR) of Mexico. Covering 22,000 km2, this agricultural frontier contains one of the largest and oldest expanses of tropical forests in the Americas outside of Amazonia. Over the past 30 years, the SYPR has undergone significant land-use change largely owing to the construction of a highway through the region's center in 1967. These landscape dynamics are modeled by exploiting a spatial database linking a time series of TM imagery with socio-economic and geo-referenced land-use data collected from a random sample of 188 farm households. The dissertation moves beyond the existing literature on deforestation in three principal respects. Theoretically, the study develops a non-separable model of land-use that relaxes the assumption of profit maximization almost exclusively invoked in studies of the deforestation issue. The model is derived from a utility-maximizing framework that explicitly incorporates the interdependency of the household's production and consumption choices as these affect the allocation of resources. Methodologically, the study assembles a spatial database that couples satellite imagery with household-level socio-economic data. The field survey protocol recorded geo-referenced land-use data through the use of a geographic positioning system and the creation of sketch maps detailing the location of different uses observed within individual plots. Empirically, the study estimates spatially explicit econometric models of land-use change using switching regressions and duration analysis. A distinguishing feature of these models is that they link the dependent and independent variables at the level of the decision unit, the land manager, thereby capturing spatial and temporal heterogeneity that is otherwise obscured in studies using data aggregated to higher scales of analysis. The empirical findings suggest the potential of various policy initiatives to impede or otherwise alter the pattern of land-cover conversions. In this regard, the study reveals that consideration of missing or thin markets is critical to understanding how farmers in the SYPR reach subsistence and commercial cropping decisions.

  9. Statistical and Conceptual Model Testing Geomorphic Principles through Quantification in the Middle Rio Grande River, NM.

    NASA Astrophysics Data System (ADS)

    Posner, A. J.

    2017-12-01

    The Middle Rio Grande River (MRG) traverses New Mexico from Cochiti to Elephant Butte reservoirs. Since the 1100s, cultivating and inhabiting the valley of this alluvial river has required various river training works. The mid-20th century saw a concerted effort to tame the river through channelization, jetty jacks, and dam construction. A challenge for river managers is to better understand the interactions between river training works, dam construction, and the geomorphic adjustments of a desert river driven by spring snowmelt and summer thunderstorms carrying water and large sediment inputs from upstream and ephemeral tributaries. Due to its importance to the region, a vast wealth of data exists on conditions along the MRG. The investigation presented herein builds upon previous efforts by combining hydraulic model results, digitized planforms, and stream gage records in various statistical and conceptual models in order to test our understanding of this complex system. Spatially continuous variables were clipped by a set of river cross-section data that has been collected at decadal intervals since the early 1960s, creating a spatially homogeneous database upon which various statistical tests were implemented. Conceptual models relate forcing variables and response variables to estimate river planform changes. The developed database represents a unique opportunity to quantify and test geomorphic conceptual models given the unique characteristics of the MRG. The results of this investigation provide a spatially distributed characterization of planform variable changes, permitting managers to predict planform at a much higher resolution than previously available, and a better understanding of the relationship between flow regime and planform changes such as changes to longitudinal slope, sinuosity, and width. Lastly, data analysis and model interpretation led to the development of a new conceptual model for the impact of ephemeral tributaries in alluvial rivers.

  10. A Mediterranean coastal database for assessing the impacts of sea-level rise and associated hazards

    NASA Astrophysics Data System (ADS)

    Wolff, Claudia; Vafeidis, Athanasios T.; Muis, Sanne; Lincke, Daniel; Satta, Alessio; Lionello, Piero; Jimenez, Jose A.; Conte, Dario; Hinkel, Jochen

    2018-03-01

    We have developed a new coastal database for the Mediterranean basin that is intended for coastal impact and adaptation assessment to sea-level rise and associated hazards on a regional scale. The data structure of the database relies on a linear representation of the coast with associated spatial assessment units. Using information on coastal morphology, human settlements and administrative boundaries, we have divided the Mediterranean coast into 13 900 coastal assessment units. To these units we have spatially attributed 160 parameters on the characteristics of the natural and socio-economic subsystems, such as extreme sea levels, vertical land movement and number of people exposed to sea-level rise and extreme sea levels. The database contains information on current conditions and on plausible future changes that are essential drivers for future impacts, such as sea-level rise rates and socio-economic development. Besides its intended use in risk and impact assessment, we anticipate that the Mediterranean Coastal Database (MCD) constitutes a useful source of information for a wide range of coastal applications.

  11. A Mediterranean coastal database for assessing the impacts of sea-level rise and associated hazards

    PubMed Central

    Wolff, Claudia; Vafeidis, Athanasios T.; Muis, Sanne; Lincke, Daniel; Satta, Alessio; Lionello, Piero; Jimenez, Jose A.; Conte, Dario; Hinkel, Jochen

    2018-01-01

    We have developed a new coastal database for the Mediterranean basin that is intended for coastal impact and adaptation assessment to sea-level rise and associated hazards on a regional scale. The data structure of the database relies on a linear representation of the coast with associated spatial assessment units. Using information on coastal morphology, human settlements and administrative boundaries, we have divided the Mediterranean coast into 13 900 coastal assessment units. To these units we have spatially attributed 160 parameters on the characteristics of the natural and socio-economic subsystems, such as extreme sea levels, vertical land movement and number of people exposed to sea-level rise and extreme sea levels. The database contains information on current conditions and on plausible future changes that are essential drivers for future impacts, such as sea-level rise rates and socio-economic development. Besides its intended use in risk and impact assessment, we anticipate that the Mediterranean Coastal Database (MCD) constitutes a useful source of information for a wide range of coastal applications. PMID:29583140

  12. Spatial disaggregation of complex soil map units at regional scale based on soil-landscape relationships

    NASA Astrophysics Data System (ADS)

    Vincent, Sébastien; Lemercier, Blandine; Berthier, Lionel; Walter, Christian

    2015-04-01

    Accurate soil information over large extents is essential to manage agronomic and environmental issues. Where it exists, information on soil is often sparse or available at coarser resolution than required. Typically, the spatial distribution of soil at regional scale is represented as a set of polygons defining soil map units (SMU), each one describing several soil types that are not spatially delineated, together with a semantic database describing these objects. Delineation of soil types within SMU, i.e. spatial disaggregation of SMU, allows the accuracy of soil information to be improved using legacy data. The aim of this study was to predict soil types by spatial disaggregation of SMU through a decision tree approach, considering expert knowledge on soil-landscape relationships embedded in soil databases. The DSMART (Disaggregation and Harmonization of Soil Map Units Through Resampled Classification Trees) algorithm developed by Odgers et al. (2014) was used. It requires soil information, environmental covariates, and calibration samples to build and then extrapolate decision trees. To assign a soil type to a particular spatial position, a weighted random allocation approach is applied: each soil type in the SMU is weighted according to its assumed proportion of occurrence in the SMU. Thus soil-landscape relationships are not considered in the current version of DSMART. Expert rules on soil distribution considering the relief, parent material and wetland location were proposed to drive the procedure of allocating soil types to sampled positions, in order to integrate the soil-landscape relationships. Semantic information about the spatial organization of soil types within SMU and exhaustive landscape descriptors were used. In the eastern part of Brittany (NW France), 171 soil types were described; their relative areas in the SMU were estimated, and geomorphological and geological contexts were recorded. The model predicted 144 soil types. An external validation was performed by comparing predicted soil types with those observed on available soil maps at scales of 1:25,000 or 1:50,000. Overall accuracies were 63.1% and 36.2%, with and without considering adjacent pixels, respectively. The introduction of expert rules based on soil-landscape relationships to allocate soil types to calibration samples dramatically enhanced the results in comparison with a simple weighted random allocation procedure. It also enabled the production of a comprehensive soil map, retrieving the expected spatial organization of soils. Estimation of soil properties for various depths is planned using disaggregated soil types, according to the GlobalSoilmap.net specifications. Odgers, N.P., Sun, W., McBratney, A.B., Minasny, B., Clifford, D., 2014. Disaggregating and harmonising soil map units through resampled classification trees. Geoderma 214, 91-100.
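    A minimal sketch of DSMART-style weighted random allocation as described above, extended with one illustrative expert rule that masks candidate soil types by landform before drawing. The soil types, proportions, and rule are hypothetical, not those of the Brittany study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Soil types described for one SMU and their assumed within-SMU proportions.
    soil_types = ["Luvisol", "Cambisol", "Gleysol"]
    proportions = np.array([0.6, 0.3, 0.1])

    def allocate(n_samples, landform=None):
        """Weighted random allocation of soil types to sampled positions.
        An optional expert rule (here: Gleysols only in valley bottoms) masks
        the candidates before drawing, mimicking soil-landscape constraints."""
        p = proportions.copy()
        if landform is not None and landform != "valley_bottom":
            p[soil_types.index("Gleysol")] = 0.0   # expert rule: no Gleysol on slopes
        p = p / p.sum()
        return rng.choice(soil_types, size=n_samples, p=p)

    print(allocate(5))                      # plain weighted allocation
    print(allocate(5, landform="plateau"))  # allocation constrained by expert rule
    ```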

  13. Components of spatial information management in wildlife ecology: Software for statistical and modeling analysis [Chapter 14

    Treesearch

    Hawthorne L. Beyer; Jeff Jenness; Samuel A. Cushman

    2010-01-01

    Spatial information systems (SIS) is a term that describes a wide diversity of concepts, techniques, and technologies related to the capture, management, display and analysis of spatial information. It encompasses technologies such as geographic information systems (GIS), global positioning systems (GPS), remote sensing, and relational database management systems (...

  14. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, spatial data have become so crucial for various organizations that many of them have begun to produce such data themselves. In some circumstances, the need for integrated real-time data requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the main challenges in this situation is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, better access to information is also provided by our proposed method. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and subsequently the ontology of each database is created. In the second step, the corresponding ontology is inserted into the database and the relationship of each ontology class is recorded in a newly created column in the database tables. The last step consists of a platform based on a service-oriented architecture, which allows integration of data; this is done by using the concept of ontology mapping. The proposed approach, in addition to being fast and low-cost, makes the process of data integration easy, leaves the data unchanged and thus takes advantage of the legacy applications provided.

  15. Monitoring Earth's reservoir and lake dynamics from space

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Eilander, D.; Schellekens, J.; Winsemius, H.; Gorelick, N.; Erickson, T.; Van De Giesen, N.

    2016-12-01

    Reservoirs and lakes constitute about 90% of the Earth's fresh surface water. They play a major role in the water cycle and are critical for the ever-increasing demands of the world's growing population. Water from reservoirs is used for agricultural, industrial, domestic, and other purposes. Current digital databases of lakes and reservoirs are scarce, mainly providing only descriptive and static properties of the reservoirs. The Global Reservoir and Dam (GRanD) database contains almost 7,000 entries, while OpenStreetMap counts more than 500,000 entries tagged as a reservoir. In the last decade several research efforts have focused on accurate estimates of surface water dynamics, mainly using satellite altimetry; however, these are currently limited to fewer than 1,000 (mostly large) water bodies. Our approach is based on three main components. Firstly, a novel method allowing automated and accurate estimation of surface area from (partially) cloud-free optical multispectral or radar satellite imagery; the algorithm uses satellite imagery acquired by the Landsat, Sentinel and MODIS missions. Secondly, a database to store static and dynamic reservoir parameters. Thirdly, a web-based tool built on top of the Google Earth Engine infrastructure. The tool allows estimation of surface area for lakes and reservoirs at planetary scale at high spatial and temporal resolution. A prototype version of the method, database, and tool will be presented, as well as a validation using in-situ measurements.
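    A minimal sketch of one way to estimate reservoir surface area from a cloud-free multispectral scene: thresholding a normalized difference water index computed from the green and near-infrared bands. The bands are synthetic and the fixed threshold is an assumption; the method described above handles partial cloud cover and is not reproduced here.

    ```python
    import numpy as np

    def water_surface_area(green, nir, pixel_area_m2=900.0, threshold=0.0):
        """Estimate water surface area from green and NIR reflectance bands
        using a simple NDWI threshold (illustrative only)."""
        ndwi = (green - nir) / (green + nir + 1e-9)
        water = ndwi > threshold
        return water.sum() * pixel_area_m2 / 1e6      # km^2

    # Synthetic 30 m resolution bands for a small scene:
    rng = np.random.default_rng(5)
    green = rng.uniform(0.02, 0.3, size=(1000, 1000))
    nir = rng.uniform(0.02, 0.4, size=(1000, 1000))
    print(f"estimated water area: {water_surface_area(green, nir):.1f} km^2")
    ```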

  16. Nationwide incidence of motor neuron disease using the French health insurance information system database.

    PubMed

    Kab, Sofiane; Moisan, Frédéric; Preux, Pierre-Marie; Marin, Benoît; Elbaz, Alexis

    2017-08-01

    There are no estimates of the nationwide incidence of motor neuron disease (MND) in France. We used the French health insurance information system to identify incident MND cases (2012-2014), and compared incidence figures to those from three external sources. We identified incident MND cases (2012-2014) based on three data sources (riluzole claims, hospitalisation records, long-term chronic disease benefits), and computed MND incidence by age, gender, and geographic region. We used French mortality statistics, Limousin ALS registry data, and previous European studies based on administrative databases to perform external comparisons. We identified 6553 MND incident cases. After standardisation to the United States 2010 population, the age/gender-standardised incidence was 2.72/100,000 person-years (males, 3.37; females, 2.17; male:female ratio = 1.53, 95% CI = 1.46-1.61). There was no major spatial difference in MND distribution. Our data were in agreement with the French death database (standardised mortality ratio = 1.01, 95% CI = 0.96-1.06) and the Limousin ALS registry (standardised incidence ratio = 0.92, 95% CI = 0.72-1.15). Incidence estimates were in the same range as those from previous studies. We report French nationwide incidence estimates of MND. Administrative databases including hospital discharge data and riluzole claims offer an interesting approach to identify large population-based samples of patients with MND for epidemiologic studies and surveillance.

  17. Geography and macroeconomics: new data and new findings.

    PubMed

    Nordhaus, William D

    2006-03-07

    The linkage between economic activity and geography is obvious: Populations cluster mainly on coasts and rarely on ice sheets. Past studies of the relationships between economic activity and geography have been hampered by limited spatial data on economic activity. The present study introduces data on global economic activity, the G-Econ database, which measures economic activity for all large countries, measured at a 1 degree latitude by 1 degree longitude scale. The methodologies for the study are described. Three applications of the data are investigated. First, the puzzling "climate-output reversal" is detected, whereby the relationship between temperature and output is negative when measured on a per capita basis and strongly positive on a per area basis. Second, the database allows better resolution of the impact of geographic attributes on African poverty, finding geography is an important source of income differences relative to high-income regions. Finally, we use the G-Econ data to provide estimates of the economic impact of greenhouse warming, with larger estimates of warming damages than past studies.

  18. Geography and macroeconomics: New data and new findings

    PubMed Central

    Nordhaus, William D.

    2006-01-01

    The linkage between economic activity and geography is obvious: Populations cluster mainly on coasts and rarely on ice sheets. Past studies of the relationships between economic activity and geography have been hampered by limited spatial data on economic activity. The present study introduces data on global economic activity, the G-Econ database, which measures economic activity for all large countries, measured at a 1° latitude by 1° longitude scale. The methodologies for the study are described. Three applications of the data are investigated. First, the puzzling “climate-output reversal” is detected, whereby the relationship between temperature and output is negative when measured on a per capita basis and strongly positive on a per area basis. Second, the database allows better resolution of the impact of geographic attributes on African poverty, finding geography is an important source of income differences relative to high-income regions. Finally, we use the G-Econ data to provide estimates of the economic impact of greenhouse warming, with larger estimates of warming damages than past studies. PMID:16473945

  19. Digital Geologic Map Database of Medicine Lake Volcano, Northern California

    NASA Astrophysics Data System (ADS)

    Ramsey, D. W.; Donnelly-Nolan, J. M.; Felger, T. J.

    2010-12-01

    Medicine Lake volcano, located in the southern Cascades ~55 km east-northeast of Mount Shasta, is a large rear-arc, shield-shaped volcano with an eruptive history spanning nearly 500 k.y. Geologic mapping of Medicine Lake volcano has been digitally compiled as a spatial database in ArcGIS. Within the database, coverage feature classes have been created representing geologic lines (contacts, faults, lava tubes, etc.), geologic unit polygons, and volcanic vent location points. The database can be queried to determine the spatial distributions of different rock types, geologic units, and other geologic and geomorphic features. These data, in turn, can be used to better understand the evolution, growth, and potential hazards of this large, rear-arc Cascades volcano. Queries of the database reveal that the total area covered by lavas of Medicine Lake volcano, which range in composition from basalt through rhyolite, is about 2,200 km2, encompassing all or parts of 27 U.S. Geological Survey 1:24,000-scale topographic quadrangles. The maximum extent of these lavas is about 80 km north-south by 45 km east-west. Occupying the center of Medicine Lake volcano is a 7 km by 12 km summit caldera in which nestles its namesake, Medicine Lake. The flanks of the volcano, which are dotted with cinder cones, slope gently upward to the caldera rim, which reaches an elevation of nearly 2,440 m. Approximately 250 geologic units have been mapped, only half a dozen of which are thin surficial units such as alluvium. These volcanic units mostly represent eruptive events, each commonly including a vent (dome, cinder cone, spatter cone, etc.) and its associated lava flow. Some cinder cones have not been matched to lava flows, as the corresponding flows are probably buried, and some flows cannot be correlated with vents. The largest individual units on the map are all basaltic in composition, including the late Pleistocene basalt of Yellowjacket Butte (296 km2 exposed), the largest unit on the map, whose area is partly covered by a late Holocene andesite flow. Silicic lava flows are mostly confined to the main edifice of the volcano, with the youngest rhyolite flows found in and near the summit caldera, including the rhyolitic Little Glass Mountain (~1,000 yr B.P.) and Glass Mountain (~950 yr B.P.) flows, which are the youngest eruptions at Medicine Lake volcano. In postglacial time, 17 eruptions have added approximately 7.5 km3 to the volcano’s total estimated volume of 600 km3, which may be the largest by volume among Cascade Range volcanoes. The volcano has erupted nine times in the past 5,200 years, a rate more frequent than has been documented at all other Cascade volcanoes except Mount St. Helens.

  20. Landslide database dominated by rainfall triggered events

    NASA Astrophysics Data System (ADS)

    Devoli, G.; Strauch, W.; Álvarez, A.

    2009-04-01

    A digital landslide database has been created for Nicaragua to provide the scientific community and national authorities with a tool for landslide hazard assessment. Valuable information on landslide events has been obtained from a great variety of sources. On the basis of the data stored in the database, preliminary analyses performed at the national scale aimed at characterizing landslides in terms of spatial and temporal distribution, types of slope movements, triggering mechanisms, number of casualties and damage to infrastructure. A total of about 17000 events spatially distributed in mountainous and volcanic terrains have been collected in the database. The events are temporally distributed between 1826 and 2003, but a large number of the records (62% of the total number) occurred during the disastrous Hurricane Mitch in October 1998. The results showed that debris flows are the most common type of landslide recorded in the database (66% of the total amount), but other types, including rockfalls and slides, have also been identified. Rainfall, also associated with tropical cyclones, is the most frequent triggering mechanism of landslides in Nicaragua, but seismic and volcanic activity are also important triggers, especially in combination with rainfall. Rainfall has caused all types of failures, but debris flows and translational shallow slides are the most frequent types. Earthquakes have most frequently triggered rockfalls and slides, while volcanic eruptions have most frequently triggered rockfalls and debris flows. Landslides triggered by rainfall were limited in time to the wet season, which lasts from May to October, with an increase in the number of events during September and October; this is in accord with the rainy season in the Pacific, Northern and Central regions and with the period when the country has the highest probability of being impacted by hurricanes. Both Atlantic and Pacific tropical cyclones have triggered landslides. At the medium scale, the influence of topographic and lithologic parameters on the occurrence of landslides was also analyzed and the physical characterization of landslides was done to better understand the landslide dynamics and run-out distances in both volcanic and non-volcanic areas. Data from fairly well documented events in Nicaragua were compared with other similar events in Central America and elsewhere and treated statistically to search for possible correlations and empirical relationships to predict run-out distances for different types of landslides, knowing the height of fall or the volume. The empirical relationships showed that debris flows and debris avalanches at volcanoes have the highest mobility and reach longer distances compared to other types of landslides. Because of their characteristics and downstream behaviour (long run-out distances and large volumes) both types of landslides have produced the highest number of victims in the country and are the most dangerous to life and property.

  1. Unified Access Architecture for Large-Scale Scientific Datasets

    NASA Astrophysics Data System (ADS)

    Karna, Risav

    2014-05-01

    Data-intensive sciences have to deploy diverse large scale database technologies for data analytics as scientists have now been dealing with much larger volumes than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invocation of other big data tools accompanying these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user to synchronize the results from these external data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big data tools. Such an invocation of foreign data manipulation routines inside a query into a database can be made possible through a user-defined function (UDF). UDFs that allow such levels of freedom as to call modules from another language and interface back and forth between the query body and the side-loaded functions would be needed for this purpose. For the purpose of this research we attempt coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1) and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, for investigating this concept. The native array data model used by an array-based data manager provides compact data storage and high performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising from the coupling of tools with different paradigms, niche functionalities, separate processes and output data formats have been anticipated and considered during the design of the unified architecture. The research focuses on the feasibility of the designed coupling mechanism and the evaluation of the efficiency and benefits of our proposed unified access architecture. Zhang 2011: Zhang, Ying and Kersten, Martin and Ivanova, Milena and Nes, Niels, SciQL: Bridging the Gap Between Science and Relational DBMS, Proceedings of the 15th Symposium on International Database Engineering Applications, 2011. Baumann 98: Baumann, P., Dehmel, A., Furtado, P., Ritsch, R., Widmann, N., "The Multidimensional Database System RasDaMan", SIGMOD 1998, Proceedings ACM SIGMOD International Conference on Management of Data, June 2-4, 1998, Seattle, Washington, 1998. hadoop1: hadoop.apache.org, "Hadoop", http://hadoop.apache.org/, [Online; accessed 12-Jan-2014]. scalapack1: netlib.org/scalapack, "ScaLAPACK", http://www.netlib.org/scalapack,[Online; accessed 12-Jan-2014]. r1: r-project.org, "R", http://www.r-project.org/,[Online; accessed 12-Jan-2014]. matlab1: mathworks.com, "Matlab Documentation", http://www.mathworks.de/de/help/matlab/,[Online; accessed 12-Jan-2014]. scidbusr1: scidb.org, "SciDB User's Guide", http://scidb.org/HTMLmanual/13.6/scidb_ug,[Online; accessed 01-Dec-2013].
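
    The coupling described above relies on user-defined functions that let a query call out to external code. rasdaman's own UDF syntax is not given in the abstract, so the following sketch uses SQLite plus NumPy purely as a language-neutral analogue of registering a foreign routine and invoking it from inside a query; the table and function names are hypothetical.

      # Analogue of embedding a foreign routine in a database query via a UDF.
      # SQLite + NumPy stand in for an array DBMS such as rasdaman; names are made up.
      import sqlite3
      import numpy as np

      def np_stddev(*values):
          """Foreign routine: delegate the computation to NumPy."""
          return float(np.std(np.array(values, dtype=float)))

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE samples (region TEXT, v1 REAL, v2 REAL, v3 REAL)")
      conn.execute("INSERT INTO samples VALUES ('A', 1.0, 2.0, 4.0), ('B', 3.0, 3.0, 3.0)")

      # Register the foreign routine so it can be side-loaded inside a query.
      conn.create_function("np_stddev", 3, np_stddev)

      for row in conn.execute("SELECT region, np_stddev(v1, v2, v3) FROM samples"):
          print(row)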

  2. Acquisition and analysis of a spectral and bidirectional database of urban materials over Toulouse (France)

    NASA Astrophysics Data System (ADS)

    Briottet, X.; Lachérade, S.; Pallotta, S.; Miesch, C.; Tanguy, B.; Le Men, H.

    2006-05-01

    This paper presents an experiment carried out in Toulouse in 2004. This campaign aims to create a specific library which simultaneously gives information in three domains: a list of the main materials present in the city, the optical properties of each of them (spectral and directional) and their spatial variability in a given class. The spectral domain covers the entire optical domain from the visible to the Long Wave InfraRed range. Measurements have been carried out in the visible and near infrared spectral region (400-2500 nm) with an ASD spectroradiometer at a 20 cm resolution for outdoor measurements, and with a goniometer for laboratory measurements at the same spatial resolution. A database of about 550 individual spectra has been created. These spectra can be divided into four classical urban classes such as road (red asphalt, tar), pavement (red asphalt, tar), square (granite slab) and wall (brick, concrete). In addition to these "in situ" experiments, the bi-directional behaviours of urban material samples have been studied in the laboratory with the Onera goniometer. Two material types have been distinguished: flat materials, which are isotropic, and textured materials, whose study is more complex. Whereas road and sidewalk materials are quite Lambertian with a slight backscattering effect typical of rough surfaces, square materials like granite or concrete present a specular peak at large zenith angles. A specific study on tiles demonstrates their important anisotropic directional properties. In the infrared domain (3 μm - 14 μm), a SOC 400 spectroradiometer was used at a 1.27 cm spatial resolution. A database of about 100 individual spectra has been created. These spectra can be divided into four classical urban classes such as road (red asphalt, tar), pavement (red asphalt, tar), square (granite slab) and wall (bricks, painted walls). In each spectral domain, three variability types are considered: a physical variability, which is intrinsic to the material, a contextual variability, depending on the material's use, and a theoretical variability, which is the variability observed within a chosen class.

  3. The development of a new database of gas emissions: MAGA, a collaborative web environment for collecting data

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.

    2013-12-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for the deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE in developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nysiros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and Azores. For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers with expertise on the site. Data can be accessed on the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux measurements of volcanic plume degassing from Etna (4 summit craters and bulk degassing) and Stromboli volcanoes, with time-averaged CO2 fluxes of ~ 18000 and 766 t/d, respectively; ii) data from ~ 30 sites of diffuse soil degassing from Neapolitan volcanoes, Azores, Canary, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d) and iii) several datasets on fumarolic emissions (~ 7 sites) with CO2 fluxes up to 1340 t/day (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed at exciting, inspiring, and encouraging participation among researchers. In addition, the possibility of archiving location and qualitative information for gas emission sites not yet investigated could stimulate future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.

  4. Spatially detailed water footprint assessment using the U.S. National Water-Economy Database

    NASA Astrophysics Data System (ADS)

    Ruddell, B. L.

    2015-12-01

    The new U.S. National Water-Economy Database (NWED) provides a complete picture of water use and trade in water-derived goods and services in the U.S. economy, by economic sector, at the county and metropolitan area scale. This data product provides for the first time a basis for spatially detailed calculations of water footprints and virtual water trade in the entire U.S. This talk reviews the general patterns of U.S. water footprint and virtual water trade at the county scale, and provides an opportunity for the community to discuss applications of this database for water resource policy and economics. The water footprints of irrigated agriculture and energy are specifically addressed, as well as overall patterns of water use in the economy.

  5. The National Land Cover Database

    USGS Publications Warehouse

    Homer, Collin G.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  6. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    There are several geochemical databases available on the INTERNET now. One of the main peculiarities of the stored geochemical information is that each sample in these databases carries geographical coordinates. As a rule, the software of these databases uses the spatial information only in the search procedures of the user interface. On the other side, GIS software (Geographical Information System software), for example the ARC/INFO software used for creating and analysing special geological, geochemical and geophysical e-maps, is deeply involved with the geographical coordinates of samples. We join the peculiarities of GIS systems and a relational geochemical database through special software. Our geochemical information system was created at the Vernadsky Geological State Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10000 chemical analyses. The GIS information content consists of e-map covers of the World Globe. Parts of these maps cover the Atlantic ocean: a gravimetric map (with a 2'' grid), ocean-bottom heat flow, altimetric maps, seismic activity, a tectonic map and a geological map. Combining this information content makes it possible to create new geochemical maps and to combine spatial analysis with numerical geochemical modelling of volcanic processes in an ocean segment. We have tested the information system on thick-client technology. The interface between the GIS system Arc/View and the database resides in a special sequence of multiple SQL queries. The result of these queries is a simple DBF file with geographical coordinates, which is used at the moment of creating geochemical and other special e-maps of the oceanic region. We used a more complex method for geophysical data: from ARC\View we created a grid cover for the polygon spatial geophysical information.

  7. Geospatial Data Management Platform for Urban Groundwater

    NASA Astrophysics Data System (ADS)

    Gaitanaru, D.; Priceputu, A.; Gogu, C. R.

    2012-04-01

    Due to the large number of civil work projects and research studies, large quantities of geo-data are produced for urban environments. These data are usually redundant and are spread across different institutions and private companies. Time-consuming operations like data processing and information harmonisation are the main reason the re-use of data is systematically avoided. Urban groundwater data show the same complex situation. The underground structures (subway lines, deep foundations, underground car parks, and others), the urban facility networks (sewer systems, water supply networks, heating conduits, etc.), the drainage systems, the surface water works and many others change continuously. As a consequence, their influence on groundwater changes systematically. However, because these activities provide a large quantity of data, aquifer modelling and behaviour prediction can be done using monitored quantitative and qualitative parameters. Due to the rapid evolution of technology in the past few years, transferring large amounts of information through the internet has now become a feasible solution for sharing geoscience data. Furthermore, standard platform-independent means to do this have been developed (specific mark-up languages like GML, GeoSciML, WaterML, GWML, CityML). They allow large geospatial databases to be easily updated and shared through the internet, even between different companies or between research centres that do not necessarily use the same database structures. For Bucharest City (Romania) an integrated platform for groundwater geospatial data management is being developed under the framework of a national research project - "Sedimentary media modeling platform for groundwater management in urban areas" (SIMPA) - financed by the National Authority for Scientific Research of Romania. The platform architecture is based on three components: a geospatial database, a desktop application (a complex set of hydrogeological and geological analysis tools) and a front-end geoportal service. The SIMPA platform makes use of mark-up transfer standards to provide a user-friendly application that can be accessed through the internet to query, analyse, and visualise geospatial data related to urban groundwater. The platform holds the information within the local groundwater geospatial databases and the user is able to access this data through a geoportal service. The database architecture allows storing accurate and very detailed geological, hydrogeological, and infrastructure information that can be straightforwardly generalized and further upscaled. The geoportal service offers the possibility of querying a dataset from the spatial database. The query is coded in a standard mark-up language and sent to the server through the standard Hyper Text Transfer Protocol (HTTP) to be processed by the local application. After validation of the query, the results are sent back to the user to be displayed by the geoportal application. The main advantage of the SIMPA platform is that it offers the user the possibility to make a primary multi-criteria query, which results in a smaller set of data to be analysed afterwards. This improves both the transfer process parameters and the user's means of creating the desired query.

  8. Accessibility, searchability, transparency and engagement of soil carbon data: The International Soil Carbon Network

    NASA Astrophysics Data System (ADS)

    Harden, Jennifer W.; Hugelius, Gustaf; Koven, Charlie; Sulman, Ben; O'Donnell, Jon; He, Yujie

    2016-04-01

    Soils are capacitors for carbon and water entering and exiting through land-atmosphere exchange. Capturing the spatiotemporal variations in soil C exchange through monitoring and modeling is difficult in part because data are reported unevenly across spatial, temporal, and management scales and in part because the unit of measure generally involves destructive harvest or non-recurrent measurements. In order to improve our fundamental basis for understanding soil C exchange, a multi-user, open source, searchable database and network of scientists has been formed. The International Soil Carbon Network (ISCN) is a self-chartered, member-based and member-owned network of scientists dedicated to soil carbon science. Attributes of the ISCN include: 1) targeted ISCN Action Groups, which represent teams of motivated researchers that propose and pursue specific soil C research questions with the aim of synthesizing seminal articles regarding soil C fate; 2) datasets contributed to date by institutions and individuals to a comprehensive, searchable open-access database that currently includes over 70,000 geolocated profiles for which soil C and other soil properties are reported; 3) derivative products resulting from the database, including depth attenuation attributes for C concentration and storage, C storage maps, and model-based assessments of emission/sequestration for future climate scenarios. Several examples illustrate the power of such a database and its engagement with the science community. First, a simplified, data-constrained global ecosystem model estimated a global sensitivity of permafrost soil carbon to climate change (g sensitivity) of -14 to -19 Pg C per °C of warming on a 100-year time scale. Second, using mathematical characterizations of depth profiles for organic carbon storage, C at the soil surface reflects Net Primary Production (NPP) and its allotment as moss or litter, while e-folding depths are correlated to rooting depth. Third, storage of deep C is highly correlated with bulk density and porosity of the rock/sediment matrix. Thus C storage is most stable at depth, yet is susceptible to changes in tillage, rooting depths, and erosion/sedimentation. Fourth, current ESMs likely overestimate the turnover time of soil organic carbon and subsequently overestimate soil carbon sequestration, thus datasets combined with other soil properties will help constrain the ESM predictions. Last, analysis of soil horizon and carbon data showed that soils with a history of tillage had significantly lower carbon concentrations in both near-surface and deep layers, and that the effect persisted even in reforested areas. In addition to the opportunities for empirical science using a large database, the database has great promise for evaluation of biogeochemical and earth system models. The preservation of individual soil core measurements avoids issues with spatial averaging while facilitating evaluation of advanced model processes such as depth distributions of soil carbon, land use impacts, and spatial heterogeneity.
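
    The depth-profile characterisation mentioned above (surface C tracking NPP, e-folding depths tracking rooting depth) can be illustrated with a simple exponential fit. The sketch below fits C(z) = C0 * exp(-z / z_e) to synthetic data; it is only an illustration of the idea, not the ISCN's actual fitting code.

      # Sketch: estimate the e-folding depth z_e of an organic-carbon depth profile
      # by fitting C(z) = C0 * exp(-z / z_e). Data below are synthetic, not ISCN data.
      import numpy as np

      depth_cm = np.array([5, 15, 30, 50, 80, 120], dtype=float)
      carbon_pct = np.array([4.8, 3.1, 1.9, 1.0, 0.5, 0.2])

      # Linearise: ln C = ln C0 - z / z_e, then solve by least squares.
      slope, intercept = np.polyfit(depth_cm, np.log(carbon_pct), 1)
      c0 = np.exp(intercept)
      z_e = -1.0 / slope

      print(f"C0 ~ {c0:.2f} %, e-folding depth ~ {z_e:.0f} cm")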

  9. Preliminary Integrated Geologic Map Databases for the United States: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, Rhode Island and Vermont

    USGS Publications Warehouse

    Nicholson, Suzanne W.; Dicken, Connie L.; Horton, John D.; Foose, Michael P.; Mueller, Julia A.L.; Hon, Rudi

    2006-01-01

    The rapid growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national scale digital geologic maps that have standardized information about geologic age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. Although two digital geologic maps (Schruben and others, 1994; Reed and Bush, 2004) of the United States currently exist, their scales (1:2,500,000 and 1:5,000,000) are too general for many regional applications. Most states have digital geologic maps at scales of about 1:500,000, but the databases are not comparably structured and, thus, it is difficult to use the digital database for more than one state at a time. This report describes the result for a seven state region of an effort by the U.S. Geological Survey to produce a series of integrated and standardized state geologic map databases that cover the entire United States. In 1997, the United States Geological Survey's Mineral Resources Program initiated the National Surveys and Analysis (NSA) Project to develop national digital databases. One primary activity of this project was to compile a national digital geologic map database, utilizing state geologic maps, to support studies in the range of 1:250,000- to 1:1,000,000-scale. To accomplish this, state databases were prepared using a common standard for the database structure, fields, attribution, and data dictionaries. For Alaska and Hawaii new state maps are being prepared and the preliminary work for Alaska is being released as a series of 1:250,000 scale quadrangle reports. This document provides background information and documentation for the integrated geologic map databases of this report. This report is one of a series of such reports releasing preliminary standardized geologic map databases for the United States. The data products of the project consist of two main parts, the spatial databases and a set of supplemental tables relating to geologic map units. The datasets serve as a data resource to generate a variety of stratigraphic, age, and lithologic maps. This documentation is divided into four main sections: (1) description of the set of data files provided in this report, (2) specifications of the spatial databases, (3) specifications of the supplemental tables, and (4) an appendix containing the data dictionaries used to populate some fields of the spatial database and supplemental tables.

  10. Very large database of lipids: rationale and design.

    PubMed

    Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R

    2013-11-01

    Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.

  11. Design and engineering of photosynthetic light-harvesting and electron transfer using length, time, and energy scales.

    PubMed

    Noy, Dror; Moser, Christopher C; Dutton, P Leslie

    2006-02-01

    Decades of research on the physical processes and chemical reaction-pathways in photosynthetic enzymes have resulted in an extensive database of kinetic information. Recently, this database has been augmented by a variety of high and medium resolution crystal structures of key photosynthetic enzymes that now include the two photosystems (PSI and PSII) of oxygenic photosynthetic organisms. Here, we examine the currently available structural and functional information from an engineer's point of view with the long-term goal of reproducing the key features of natural photosystems in de novo designed and custom-built molecular solar energy conversion devices. We find that the basic physics of the transfer processes, namely, the time constraints imposed by the rates of incoming photon flux and the various decay processes, allows for a large degree of tolerance in the engineering parameters. Moreover, we find that the requirements to guarantee energy and electron transfer rates that yield high efficiency in natural photosystems are largely met by control of distance between chromophores and redox cofactors. Thus, for projected de novo designed constructions, the control of spatial organization of cofactor molecules within a dense array is initially given priority. Nevertheless, constructions accommodating dense arrays of different cofactors, some well within 1 nm of each other, still present a significant challenge for protein design.

  12. High resolution reconstruction of monthly autumn and winter precipitation of Iberian Peninsula for last 150 years.

    NASA Astrophysics Data System (ADS)

    Cortesi, N.; Trigo, R.; González-Hidalgo, J. C.; Ramos, A.

    2012-04-01

    Precipitation over the Iberian Peninsula (IP) presents large interannual variability and large spatial contrasts between wet mountainous regions in the north and dry regions in the southern plains. Unlike other European regions, the IP was poorly monitored for precipitation during the 19th century. Here we present a new approach to fill this gap. A set of 26 atmospheric circulation weather types (Trigo R.M. and DaCamara C.C., 2000) derived from a recent SLP dataset from the EMULATE (European and North Atlantic daily to multidecadal climate variability) project was used to reconstruct Iberian monthly precipitation from October to March during 1851-1947. Principal Component Regression Analysis was chosen to develop the monthly precipitation reconstruction back to 1851, calibrated over the 1948-2003 period for 3030 monthly precipitation series of the high-density homogenized MOPREDAS (Monthly Precipitation Database for Spain and Portugal) database. Validation was conducted over 1920-1947 at 15 key site locations. Results show high model performance for selected months, with a mean coefficient of variation (CV) around 0.6 during the validation period. Lower CV values were achieved in the western area of the IP. Trigo, R. M., and DaCamara, C.C., 2000: "Circulation weather types and their impact on the precipitation regime in Portugal". Int. J. Climatol., 20, 1559-1581.
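
    Principal Component Regression, as used for the reconstruction, can be sketched with plain NumPy: standardise the predictors (here, weather-type frequencies), keep the leading principal components, and regress the predictand on them. Everything below is synthetic and only meant to show the structure of the method, not the MOPREDAS workflow.

      # Minimal Principal Component Regression sketch (synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      n_months, n_types, n_pc = 56, 26, 4      # e.g. 26 weather-type frequencies

      X = rng.poisson(5, size=(n_months, n_types)).astype(float)   # predictor matrix
      beta_true = rng.normal(size=n_types)
      y = X @ beta_true + rng.normal(scale=2.0, size=n_months)     # monthly precipitation

      # 1) standardise predictors, 2) PCA via SVD, 3) regress y on the leading PCs
      Xs = (X - X.mean(axis=0)) / X.std(axis=0)
      U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
      scores = Xs @ Vt[:n_pc].T                                    # PC scores
      coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)

      y_hat = scores @ coef + y.mean()
      print("calibration correlation:", np.corrcoef(y, y_hat)[0, 1])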

  13. BAPA Database: Linking landslide occurrence with rainfall in Asturias (Spain)

    NASA Astrophysics Data System (ADS)

    Valenzuela, Pablo; José Domínguez-Cuesta, María; Jiménez-Sánchez, Montserrat

    2015-04-01

    Asturias is a region in northern Spain with a temperate and humid climate. In this region, slope instability processes are very common and often cause economic losses and, sometimes, human victims. To prevent the geological risk involved, it is of great interest to predict landslide spatial and temporal occurrence. Some previous investigations have shown the importance of rainfall as a trigger factor. Despite the high incidence of these phenomena in Asturias, there are no databases of recent and actual landslides. The BAPA Project (Base de Datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database) aims to create an inventory of slope instabilities which have occurred between 1980 and 2015. The final goal is to study in detail the relationship between rainfall and slope instabilities in Asturias, establishing the precipitation thresholds and soil moisture conditions necessary for instability triggering. This work presents the progress of the database, showing its structure, which is divided into various fields that essentially contain spatial, temporal, geomorphological and damage information.

  14. Integrating stations from the North America Gravity Database into a local GPS-based land gravity survey

    USGS Publications Warehouse

    Shoberg, Thomas G.; Stoddard, Paul R.

    2013-01-01

    The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
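
    As context for the complete Bouguer anomaly surfaces used for quality control, the sketch below computes a simple Bouguer anomaly from station latitude, elevation and observed gravity, using the common free-air (0.3086 mGal/m) and Bouguer-slab (0.04193 rho mGal/m) gradients and one common form of the 1967 international gravity formula. The station values are illustrative, and the complete anomaly would additionally require terrain corrections.

      # Sketch: simple Bouguer anomaly for one gravity station (illustrative values only).
      # BA_simple = g_obs - gamma(lat) + 0.3086*h - 0.04193*rho*h   [mGal, h in metres]
      import math

      def normal_gravity_1967(lat_deg):
          """1967 international gravity formula, in mGal (assumed series form)."""
          s2 = math.sin(math.radians(lat_deg)) ** 2
          return 978031.846 * (1 + 0.005278895 * s2 + 0.000023462 * s2 * s2)

      def simple_bouguer_anomaly(g_obs_mgal, lat_deg, elev_m, density=2.67):
          free_air = 0.3086 * elev_m            # mGal, standard free-air gradient
          slab = 0.04193 * density * elev_m     # mGal, infinite Bouguer slab
          return g_obs_mgal - normal_gravity_1967(lat_deg) + free_air - slab

      # Hypothetical station: latitude 42.0 N, elevation 250 m, observed gravity 980312.4 mGal
      print(round(simple_bouguer_anomaly(980312.4, 42.0, 250.0), 2), "mGal")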

  15. Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution

    NASA Astrophysics Data System (ADS)

    Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.

    2017-10-01

    Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
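
    The incidence graph referred to above records which lower-dimensional cells bound which higher-dimensional cells, and consistency checks traverse exactly these relations. A minimal sketch for a single triangle follows; it is a toy illustration of the concept, not DB4GeO's data model.

      # Minimal incidence-graph sketch for one triangle ABC (not DB4GeO code).
      # Nodes are cells (vertices, edges, faces); arcs record "cell X bounds cell Y".
      from collections import defaultdict

      incidence = defaultdict(set)   # cell -> higher-dimensional cells it bounds

      def add_incidence(lower, upper):
          incidence[lower].add(upper)

      # 0-cells (vertices) -> 1-cells (edges)
      for edge, (a, b) in {"AB": ("A", "B"), "BC": ("B", "C"), "CA": ("C", "A")}.items():
          add_incidence(a, edge)
          add_incidence(b, edge)

      # 1-cells (edges) -> 2-cell (face)
      for edge in ("AB", "BC", "CA"):
          add_incidence(edge, "ABC")

      def coboundary(cell):
          """All higher-dimensional cells directly bounded by the given cell."""
          return sorted(incidence[cell])

      print(coboundary("A"))    # ['AB', 'CA']  -> vertex A lies on edges AB and CA
      print(coboundary("AB"))   # ['ABC']       -> edge AB bounds face ABC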

  16. Rapid production of optimal-quality reduced-resolution representations of very large databases

    DOEpatents

    Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.

    2001-01-01

    View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric is computed for each element at each level of detail that is independent of parameters defining the view space. The multiresolution database and associated strict error metrics are then processed in real time for real time frame representations. View parameters for a view volume comprising a view location and field of view are selected. Using the view parameters, the strict error metric is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements are selected from the initial representation data set that are at least partially within the view volume. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. It is then determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements or whether the largest error metric is less than or equal to a selected upper error metric bound. If not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then outputted as reduced resolution view space data representing the terrain features.
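
    The split-queue procedure described above can be illustrated with a toy priority-queue refinement: elements carry a view-dependent error, the worst element is repeatedly force split, and splitting stops once enough elements exist or the worst error falls below a bound. The sketch below uses a synthetic error model (each child gets half the parent's error) and is not the patented implementation.

      # Toy version of the queue-driven force-split refinement (synthetic errors).
      import heapq
      import itertools

      def refine(initial_errors, max_elements=16, error_bound=0.05):
          # Each heap entry is (-error, tie_breaker, cell_id); heapq is a min-heap,
          # so negating the error keeps the largest error at the head of the queue.
          counter = itertools.count()
          heap = [(-err, next(counter), ("root", i)) for i, err in enumerate(initial_errors)]
          heapq.heapify(heap)

          # Force split until enough elements exist or the worst error is small enough.
          while len(heap) < max_elements and -heap[0][0] > error_bound:
              neg_err, _, cell = heapq.heappop(heap)
              for child in (0, 1):
                  # Synthetic model: each child carries half of the parent's error.
                  heapq.heappush(heap, (neg_err / 2.0, next(counter), (cell, child)))

          # Return the final multiresolution element set, worst error first.
          return sorted(((-neg, cell) for neg, _, cell in heap),
                        key=lambda item: item[0], reverse=True)

      for err, cell in refine([1.0, 0.4, 0.2]):
          print(f"error={err:.3f}  cell={cell}")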

  17. Cyberinfrastructure for the Unified Study of Earth Structure and Earthquake Sources in Complex Geologic Environments

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Chen, P.; Jordan, T. H.; Olsen, K. B.; Maechling, P.; Faerman, M.

    2004-12-01

    The Southern California Earthquake Center (SCEC) is developing a Community Modeling Environment (CME) to facilitate the computational pathways of physics-based seismic hazard analysis (Maechling et al., this meeting). Major goals are to facilitate the forward modeling of seismic wavefields in complex geologic environments, including the strong ground motions that cause earthquake damage, and the inversion of observed waveform data for improved models of Earth structure and fault rupture. Here we report on a unified approach to these coupled inverse problems that is based on the ability to generate and manipulate wavefields in densely gridded 3D Earth models. A main element of this approach is a database of receiver Green tensors (RGT) for the seismic stations, which comprises all of the spatial-temporal displacement fields produced by the three orthogonal unit impulsive point forces acting at each of the station locations. Once the RGT database is established, synthetic seismograms for any earthquake can be simply calculated by extracting a small, source-centered volume of the RGT from the database and applying the reciprocity principle. The partial derivatives needed for point- and finite-source inversions can be generated in the same way. Moreover, the RGT database can be employed in full-wave tomographic inversions launched from a 3D starting model, because the sensitivity (Fréchet) kernels for travel-time and amplitude anomalies observed at seismic stations in the database can be computed by convolving the earthquake-induced displacement field with the station RGTs. We illustrate all elements of this unified analysis with an RGT database for 33 stations of the California Integrated Seismic Network in and around the Los Angeles Basin, which we computed for the 3D SCEC Community Velocity Model (SCEC CVM3.0) using a fourth-order staggered-grid finite-difference code. For a spatial grid spacing of 200 m and a time resolution of 10 ms, the calculations took ~19,000 node-hours on the Linux cluster at USC's High-Performance Computing Center. The 33-station database with a volume of ~23.5 TB was archived in the SCEC digital library at the San Diego Supercomputer Center using the Storage Resource Broker (SRB). From a laptop, anyone with access to this SRB collection can compute synthetic seismograms for an arbitrary source in the CVM in a matter of minutes. Efficient approaches have been implemented to use this RGT database in the inversions of waveforms for centroid and finite moment tensors and tomographic inversions to improve the CVM. Our experience with these large problems suggests areas where the cyberinfrastructure currently available for geoscience computation needs to be improved.
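
    Once an RGT volume has been extracted for a source location, a synthetic seismogram is, to first order, a weighted sum of stored Green-function time series convolved with a source time function. The sketch below does this with purely synthetic arrays; it is not the SCEC workflow, and the spatial derivatives of the RGT needed for a true moment-tensor source are omitted for brevity.

      # Sketch: build a synthetic seismogram from stored Green-function time series by
      # weighting and convolving with a source time function (all arrays synthetic).
      import numpy as np

      dt, nt = 0.01, 2000                      # 10 ms sampling, 20 s of signal
      t = np.arange(nt) * dt

      # Pretend these were extracted from the RGT database at the source grid point:
      # one time series per unit force component (x, y, z) for one station channel.
      green = {c: np.exp(-t) * np.sin(2 * np.pi * (1 + i) * t) for i, c in enumerate("xyz")}

      # Equivalent body-force weights for a hypothetical source mechanism.
      weights = {"x": 0.4, "y": -0.7, "z": 1.2}

      # Source time function: a simple triangle of 1 s duration.
      stf = np.interp(np.arange(0, 1 + dt, dt), [0, 0.5, 1.0], [0, 1, 0])

      seismogram = sum(weights[c] * np.convolve(green[c], stf)[:nt] * dt for c in "xyz")
      print(seismogram.shape, seismogram[:5])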

  18. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness by 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support Flight-Guidance SVS as well as other aeronautical applications like SMGCS (Surface Movement and Guidance Systems) and flight simulation. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite platforms or aircraft platforms. To achieve the highest horizontal accuracy requirements stated in ICAO Annex 14 for runway centerlines (0.50 meters), at the present moment only images acquired from aircraft-based sensors can be used as source data. Still, ground reference by GCPs (ground control points) is obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process. It can be used as a highly accurate elevation model for the airport area. The final verification of airport data is accomplished by independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested with the Stuttgart (Germany) airport. The results proved that the final accuracy was within the accuracy specification defined by ICAO Annex 14.

  19. Inferring rupture characteristics using new databases for 3D slab geometry and earthquake rupture models

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Plescia, S. M.; Moore, G.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center has recently published a database of finite fault models for globally distributed M7.5+ earthquakes since 1990. Concurrently, we have also compiled a database of three-dimensional slab geometry models for all global subduction zones, to update and replace Slab1.0. Here, we use these two new and valuable resources to infer characteristics of earthquake rupture and propagation in subduction zones, where the vast majority of large-to-great-sized earthquakes occur. For example, we can test questions that are fairly prevalent in seismological literature. Do large ruptures preferentially occur where subduction zones are flat (e.g., Bletery et al., 2016)? Can `flatness' be mapped to understand and quantify earthquake potential? Do the ends of ruptures correlate with significant changes in slab geometry, and/or bathymetric features entering the subduction zone? Do local subduction zone geometry changes spatially correlate with areas of low slip in rupture models (e.g., Moreno et al., 2012)? Is there a correlation between average seismogenic zone dip, and/or seismogenic zone width, and earthquake size? (e.g., Hayes et al., 2012; Heuret et al., 2011). These issues are fundamental to the understanding of earthquake rupture dynamics and subduction zone seismogenesis, and yet many are poorly understood or are still debated in scientific literature. We attempt to address these questions and similar issues in this presentation, and show how these models can be used to improve our understanding of earthquake hazard in subduction zones.

  20. Introducing the Global Fire WEather Database (GFWED)

    NASA Astrophysics Data System (ADS)

    Field, R. D.

    2015-12-01

    The Canadian Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded datasets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations tended to differ most at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC=1500 during the dry season but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation and reinforce the need to consider alternative sources of precipitation data. GFWED is being used by researchers around the world for analyzing historical relationships between fire weather and fire activity at large scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models. These applications will be discussed. More information on GFWED can be found at http://data.giss.nasa.gov/impacts/gfwed/

  1. Macrostrat: A Platform for Geological Data Integration and Deep-Time Earth Crust Research

    NASA Astrophysics Data System (ADS)

    Peters, Shanan E.; Husson, Jon M.; Czaplewski, John

    2018-04-01

    Characterizing the lithology, age, and physical-chemical properties of rocks and sediments in the Earth's upper crust is necessary to fully assess energy, water, and mineral resources and to address many fundamental questions. Although a large number of geological maps, regional geological syntheses, and sample-based measurements have been produced, there is no openly available database that integrates rock record-derived data, while also facilitating large-scale, quantitative characterization of the volume, age, and material properties of the upper crust. Here we describe Macrostrat, a relational geospatial database and supporting cyberinfrastructure that is designed to enable quantitative spatial and geochronological analyses of the entire assemblage of surface and subsurface sedimentary, igneous, and metamorphic rocks. Macrostrat contains general, comprehensive summaries of the age and properties of 33,903 lithologically and chronologically defined geological units distributed across 1,474 regions in North and South America, the Caribbean, New Zealand, and the deep sea. Sample-derived data, including fossil occurrences in the Paleobiology Database, more than 180,000 geochemical and outcrop-derived measurements, and more than 2.3 million bedrock geologic map units from over 200 map sources, are linked to specific Macrostrat units and/or lithologies. Macrostrat has generated numerous quantitative results and its infrastructure is used as a data platform in several independently developed mobile applications. It is necessary to expand geographic coverage and to refine age models and material properties to arrive at a more precise characterization of the upper crust globally and test fundamental hypotheses about the long-term evolution of Earth systems.

  2. Providing accurate near real-time fire alerts for Protected Areas through NASA FIRMS: Opportunities and Challenges

    NASA Astrophysics Data System (ADS)

    Ilavajhala, S.; Davies, D.; Schmaltz, J. E.; Wong, M.; Murphy, K. J.

    2013-12-01

    The NASA Fire Information for Resource Management System (FIRMS) is at the forefront of providing global near real-time (NRT) MODIS thermal anomalies / hotspot location data to end-users. FIRMS serves the data via an interactive Web GIS named Web Fire Mapper, downloads of NRT active fire data, archive data downloads for MODIS hotspots dating back to 1999, and a hotspot email alert system. The FIRMS Email Alerts system has been successfully alerting users of fires in their area of interest in near real-time and/or via daily and weekly email summaries, with an option to receive MODIS hotspot data as a text file (CSV) attachment. Currently, there are more than 7000 email alert subscriptions from more than 100 countries. Specifically, the email alerts system is designed to generate and send an email alert for any region or area on the globe, with a special focus on providing alerts for protected areas worldwide. For many protected areas, email alerts are particularly useful for early fire detection, monitoring ongoing fires, as well as allocating resources to protect wildlife and natural resources of particular value. For protected areas, FIRMS uses the World Database on Protected Areas (WDPA) supplied by the United Nations Environment Program - World Conservation Monitoring Centre (UNEP-WCMC). Maintaining the most up-to-date, accurate boundary geometry for the protected areas for the email alerts is a challenge, as the WDPA is continuously updated due to changing boundaries and the merging or delisting of certain protected areas. Because of this dynamic nature of the protected areas database, the FIRMS protected areas database is frequently out of date with respect to the most current version of the WDPA database. To maintain the most up-to-date boundary information for protected areas and to be in compliance with the WDPA terms and conditions, FIRMS needs to constantly update its database of protected areas. Currently, FIRMS strives to keep its database up to date by downloading the most recent WDPA database at regular intervals, processing it, and ingesting it into the FIRMS spatial database. However, due to the large size of the database, the process to download, process and ingest the database is quite time consuming. The FIRMS team is currently working on developing a method to update the protected areas database via the web at regular intervals or on demand. Using such a solution, FIRMS will be able to access the most up-to-date extents of any protected area and the corresponding spatial geometries in real time. As such, FIRMS can utilize such a service to access the protected areas and their associated geometries to keep users' protected area boundaries in sync with those of the most recent WDPA database, and thus serve more accurate email alerts to users. Furthermore, any client accessing the WDPA protected areas database could potentially use such a solution for real-time access to the protected areas database. This talk primarily focuses on the challenges for FIRMS in sending accurate email alerts for protected areas, along with the solution the FIRMS team is developing. This talk also introduces the FIRMS fire information system and its components, with a special emphasis on the FIRMS email alerts system.
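
    Matching a hotspot to a subscriber's protected area is, at its core, a point-in-polygon test against the WDPA geometry. A self-contained ray-casting sketch follows; real FIRMS processing runs in a spatial database with indexes and proper geodesic handling, and the coordinates below are made up.

      # Sketch: decide whether a MODIS hotspot falls inside a protected-area polygon.
      def point_in_polygon(lon, lat, polygon):
          """Ray casting; polygon is a list of (lon, lat) vertices, closed implicitly."""
          inside = False
          n = len(polygon)
          for i in range(n):
              x1, y1 = polygon[i]
              x2, y2 = polygon[(i + 1) % n]
              crosses = (y1 > lat) != (y2 > lat)
              if crosses and lon < x1 + (lat - y1) * (x2 - x1) / (y2 - y1):
                  inside = not inside
          return inside

      park = [(30.0, -2.5), (30.6, -2.5), (30.6, -1.9), (30.0, -1.9)]   # hypothetical boundary
      hotspots = [(30.2, -2.1), (31.0, -2.1)]
      for lon, lat in hotspots:
          print((lon, lat), "ALERT" if point_in_polygon(lon, lat, park) else "outside")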

  3. It's time for a crisper image of the Face of the Earth: Landsat and climate time series for massive land cover & climate change mapping at detailed resolution.

    NASA Astrophysics Data System (ADS)

    Pons, Xavier; Miquel, Ninyerola; Oscar, González-Guerrero; Cristina, Cea; Pere, Serra; Alaitz, Zabala; Lluís, Pesquer; Ivette, Serral; Joan, Masó; Cristina, Domingo; Maria, Serra Josep; Jordi, Cristóbal; Chris, Hain; Martha, Anderson; Juanjo, Vidal

    2014-05-01

    Combining climate dynamics and land cover at a relatively coarse resolution allows a very interesting approach to global studies, because in many cases these studies are based on a quite high temporal resolution, but they may be limited in large areas like the Mediterranean. However, the current availability of long time series of Landsat imagery and spatially detailed surface climate models allows thinking of global databases that improve the results of mapping in areas with a complex history of landscape dynamics, characterized by fragmentation, or in areas where relief creates intricate climate patterns that can hardly be monitored or modeled at coarse spatial resolutions. DinaCliVe (supported by the Spanish Government and ERDF, and by the Catalan Government, under grants CGL2012-33927 and SGR2009-1511) is the name of the project that aims at analyzing land cover and land use dynamics as well as vegetation stress, with a particular emphasis on droughts, and the role that climate variation may have had in such phenomena. To meet this objective, it is proposed to design a massive database from long time series of Landsat land cover products (grouped in quinquennia) and monthly climate records (in situ climate data) for the Iberian Peninsula (582,000 km2). The whole area encompasses 47 Landsat WRS2 scenes (Landsat 4 to 8 missions, from paths 197 to 202 and rows 30 to 34), and 52 Landsat WRS1 scenes (for the previous Landsat missions, paths 212 to 221 and rows 30 to 34). Therefore, a mean of 49.5 Landsat scenes, 8 quinquennia per scene and about 6 dates per quinquennium, from 1975 to the present, produces around 2376 sets resulting in 30 m x 30 m spatial resolution maps. Each set is composed of highly coherent geometric and radiometric multispectral and multitemporal (to account for phenology) imagery as well as vegetation and wetness indexes and derived topographic information (about 10 Tbyte of data). Furthermore, on the basis of a previous work, the Digital Climatic Atlas of the Iberian Peninsula, spatio-temporal surface climate data have been generated with a monthly resolution (from January 1950 to December 2010) through a multiple regression model and residual spatial interpolation using geographic variables (altitude, latitude and continentality) and solar radiation (only in the case of temperatures). This database includes precipitation, mean minimum and mean maximum air temperature and mean air temperature, improving the previous one by using the ASTER GDEM at 30 m spatial resolution, by deepening to a monthly resolution and by increasing the number of meteorological stations used, representing a total amount of 0.7 Tbyte of data. An initial validation shows accuracies higher than 85% for land cover maps and an RMS of 1.2 ºC, 1.6 ºC and 22 mm for mean and extreme temperatures, and for precipitation, respectively. This amount of new detailed data for the Iberian Peninsula will be used to study the spatial direction, velocity and acceleration of the tendencies related to climate change, land cover and tree line dynamics. A global analysis using all these datasets will try to discriminate the climatic signal when interpreted together with anthropogenic driving forces. Ultimately, getting ready for massive database computation and analysis will improve predictions for global models that will require the growing high-resolution information available.
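
    The climate-surface method described above (multiple regression on altitude, latitude and continentality plus spatial interpolation of the residuals) has the generic structure sketched below. Station values, coordinates and the inverse-distance-weighting step are synthetic stand-ins, not the project's actual geostatistical workflow.

      # Sketch: regression on geographic predictors + inverse-distance interpolation
      # of residuals (synthetic data only).
      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      alt = rng.uniform(0, 2500, n)          # altitude (m)
      lat = rng.uniform(36, 44, n)           # latitude (deg)
      cont = rng.uniform(0, 300, n)          # distance to coast as continentality proxy (km)
      temp = 30 - 0.0065 * alt - 0.5 * (lat - 36) - 0.01 * cont + rng.normal(0, 0.8, n)

      # 1) multiple regression on the geographic predictors
      X = np.column_stack([np.ones(n), alt, lat, cont])
      coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
      residuals = temp - X @ coef

      # 2) interpolate residuals at a target grid cell by inverse distance weighting
      x_sta, y_sta = rng.uniform(0, 500, n), rng.uniform(0, 500, n)   # station coords (km)
      def idw(x0, y0, power=2.0):
          d = np.hypot(x_sta - x0, y_sta - y0) + 1e-6
          w = d ** -power
          return np.sum(w * residuals) / np.sum(w)

      # predicted temperature at a cell: regression part + interpolated residual
      cell = np.array([1.0, 800.0, 41.2, 120.0])
      print(cell @ coef + idw(250.0, 250.0))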

  4. Database of tsunami scenario simulations for Western Iberia: a tool for the TRIDEC Project Decision Support System for tsunami early warning

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo; Tinti, Stefano

    2013-04-01

    TRIDEC is an EU-FP7 project whose main goal, in general terms, is to develop suitable strategies for the management of crises arising in the Earth management field. The general paradigms adopted by TRIDEC to develop those strategies include intelligent information management, the capability of managing dynamically increasing volumes and dimensionality of information in complex events, and collaborative decision making in systems that are typically very loosely coupled. The two areas where TRIDEC applies and tests its strategies are tsunami early warning and industrial subsurface development. In the field of tsunami early warning, TRIDEC aims at developing a Decision Support System (DSS) that integrates 1) a set of seismic, geodetic and marine sensors devoted to detecting and characterising possible tsunamigenic sources and to monitoring the time and space evolution of the generated tsunami, 2) large-volume databases of pre-computed numerical tsunami scenarios, and 3) a proper overall system architecture. Two test areas are dealt with in TRIDEC: the western Iberian margin and the eastern Mediterranean. In this study, we focus on the western Iberian margin, with special emphasis on the Portuguese coasts. The strategy adopted in TRIDEC is to populate two different databases, called the "Virtual Scenario Database" (VSDB) and the "Matching Scenario Database" (MSDB), both of which deal only with earthquake-generated tsunamis. In the VSDB we numerically simulate a few large-magnitude events generated by the major known tectonic structures in the study area. Heterogeneous slip distributions on the earthquake faults are introduced to simulate events as "realistically" as possible. The members of the VSDB represent the unknowns that the TRIDEC platform must be able to recognise and match during the early crisis management phase. The MSDB, on the other hand, contains a very large number (on the order of thousands) of tsunami simulations performed from many different simple earthquake sources of different magnitudes located in the "vicinity" of the virtual scenario earthquake. In the DSS perspective, the members of the MSDB are suitably combined based on the information coming from the sensor networks, and the results are used during the crisis evolution phase to forecast the degree of exposure of different coastal areas. We provide examples from both databases, whose members are computed by means of the in-house code UBO-TSUFD, which implements the non-linear shallow-water equations and solves them over a set of nested grids that guarantee suitable spatial resolution (a few tens of metres) in specific, suitably chosen coastal areas.

  5. Spatially Resolved Mid-IR Spectra from Meteorites; Linking Composition, Crystallographic Orientation and Spectra on the Micro-Scale

    NASA Astrophysics Data System (ADS)

    Stephen, N. R.

    2016-08-01

    IR spectroscopy is used to infer composition of extraterrestrial bodies, comparing bulk spectra to databases of separate mineral phases. We extract spatially resolved meteorite-specific spectra from achondrites with respect to zonation and orientation.

  6. Spatial and temporal contrasts in the distribution of crops and pastures across Amazonia: A new agricultural land use data set from census data since 1950

    PubMed Central

    Imbach, P; Manrow, M; Barona, E; Barretto, A; Hyman, G; Ciais, P

    2015-01-01

    Amazonia holds the largest continuous area of tropical forests with intense land use change dynamics inducing water, carbon, and energy feedbacks with regional and global impacts. Much of our knowledge of land use change in Amazonia comes from studies of the Brazilian Amazon, which accounts for two thirds of the region. Amazonia outside of Brazil has received less attention because of the difficulty of acquiring consistent data across countries. We present here an agricultural statistics database of the entire Amazonia region, with a harmonized description of crops and pastures in geospatial format, based on administrative boundary data at the municipality level. The spatial coverage includes countries within Amazonia and spans censuses and surveys from 1950 to 2012. Harmonized crop and pasture types are explored by grouping annual and perennial cropping systems, C3 and C4 photosynthetic pathways, planted and natural pastures, and main crops. Our analysis examined the spatial pattern of ratios between classes of the groups and their correlation with the agricultural extent of crops and pastures within administrative units of the Amazon, by country, and census/survey dates. Significant correlations were found between all ratios and the fraction of agricultural lands of each administrative unit, with the exception of planted to natural pastures ratio and pasture lands extent. Brazil and Peru in most cases have significant correlations for all ratios analyzed even for specific census and survey dates. Results suggested improvements, and potential applications of the database for carbon, water, climate, and land use change studies are discussed. The database presented here provides an Amazon-wide improved data set on agricultural dynamics with expanded temporal and spatial coverage. Key points: the agricultural census database covers Amazon basin municipalities from 1950 to 2012; the harmonized database groups crops and pastures by cropping system, C3/C4 pathway, and main crops; and we explored correlations between these groups and the extent of agricultural lands. PMID:26709335

  7. Spatial and temporal contrasts in the distribution of crops and pastures across Amazonia: A new agricultural land use data set from census data since 1950.

    PubMed

    Imbach, P; Manrow, M; Barona, E; Barretto, A; Hyman, G; Ciais, P

    2015-06-01

    Amazonia holds the largest continuous area of tropical forests with intense land use change dynamics inducing water, carbon, and energy feedbacks with regional and global impacts. Much of our knowledge of land use change in Amazonia comes from studies of the Brazilian Amazon, which accounts for two thirds of the region. Amazonia outside of Brazil has received less attention because of the difficulty of acquiring consistent data across countries. We present here an agricultural statistics database of the entire Amazonia region, with a harmonized description of crops and pastures in geospatial format, based on administrative boundary data at the municipality level. The spatial coverage includes countries within Amazonia and spans censuses and surveys from 1950 to 2012. Harmonized crop and pasture types are explored by grouping annual and perennial cropping systems, C3 and C4 photosynthetic pathways, planted and natural pastures, and main crops. Our analysis examined the spatial pattern of ratios between classes of the groups and their correlation with the agricultural extent of crops and pastures within administrative units of the Amazon, by country, and census/survey dates. Significant correlations were found between all ratios and the fraction of agricultural lands of each administrative unit, with the exception of planted to natural pastures ratio and pasture lands extent. Brazil and Peru in most cases have significant correlations for all ratios analyzed even for specific census and survey dates. Results suggested improvements, and potential applications of the database for carbon, water, climate, and land use change studies are discussed. The database presented here provides an Amazon-wide improved data set on agricultural dynamics with expanded temporal and spatial coverage. The agricultural census database covers Amazon basin municipalities from 1950 to 2012; the harmonized database groups crops and pastures by cropping system, C3/C4 pathway, and main crops; and we explored correlations between these groups and the extent of agricultural lands.

  8. Mining Claim Activity on Federal Land in the United States

    USGS Publications Warehouse

    Causey, J. Douglas

    2007-01-01

    Several statistical compilations of mining claim activity on Federal land derived from the Bureau of Land Management's LR2000 database have previously been published by the U.S. Geological Survey (USGS). The work in the 1990s did not include Arkansas or Florida. None of the previous reports included Alaska because Alaska claim records are stored in a separate database (the Alaska Land Information System) in a different format. This report includes data for all states for which there are Federal mining claim records, beginning in 1976 and continuing to the present. The intent is to update the spatial and statistical data associated with this report on an annual basis, beginning with 2005 data. The statistics compiled from the databases are counts of the number of active mining claims in a section of land each year from 1976 to the present for all states within the United States. Claim statistics are subset by lode and placer claim types, and a dataset summarizing all claims, including mill site and tunnel site claims, is also provided. One table presents data by case type, case status, and number of claims in a section. This report includes a spatial database for each state in which mining claims were recorded, except North Dakota, which has had only two claims. A field is present that allows the statistical data to be joined to the spatial databases so that spatial displays and analysis can be done using appropriate geographic information system (GIS) software. The data show how mining claim activity has changed in intensity, space, and time. Variations can be examined at the state as well as the national level. The data are tied to sections of land, approximately 640 acres each, which allows them to be used at regional as well as local scales. The data pertain only to Federal land and mineral estate that was open to mining claim location at the time the claims were staked.
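
    Because each spatial database carries a join field, the yearly claim counts can be attached to the section polygons with any GIS. The sketch below shows one hedged way to do this with the geopandas library; the file names and columns (SECTION_ID, YEAR, N_CLAIMS) are invented for illustration and do not reflect the report's actual schema.

      # Join tabular claim counts to section polygons and draw a simple choropleth.
      # File and column names are hypothetical.
      import geopandas as gpd
      import pandas as pd

      sections = gpd.read_file("sections.shp")             # section (approx. 640-acre) polygons
      claims = pd.read_csv("claim_counts_by_year.csv")     # columns: SECTION_ID, YEAR, N_CLAIMS

      year_2005 = claims[claims["YEAR"] == 2005]
      joined = sections.merge(year_2005, on="SECTION_ID", how="left")
      joined["N_CLAIMS"] = joined["N_CLAIMS"].fillna(0)

      joined.plot(column="N_CLAIMS", legend=True)          # active claims per section in 2005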

  9. Example-Based Super-Resolution Fluorescence Microscopy.

    PubMed

    Jia, Shu; Han, Boran; Kutz, J Nathan

    2018-04-23

    Capturing biological dynamics with high spatiotemporal resolution demands advances in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off associated with the longer image acquisition time needed to achieve higher spatial information content. Here, we demonstrate an example-based computational method that aims to obtain super-resolution images using conventional imaging without increasing the imaging time. With a low-resolution image input, the method provides an estimate of its super-resolution image based on an example database that contains super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with the experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data aided biomedical imaging.
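
    The heart of the example-based approach is a lookup against a database of paired low- and high-resolution images. The toy sketch below illustrates that idea with a nearest-neighbour search over synthetic patches; it is a generic illustration of example-based estimation, not the authors' algorithm, and all sizes are arbitrary assumptions.

      # Toy example-based "super-resolution": replace a low-resolution patch with the
      # high-resolution partner of its nearest low-resolution example. Synthetic data only.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(0)
      lr_examples = rng.random((5000, 8 * 8))      # flattened 8x8 low-resolution patches
      hr_examples = rng.random((5000, 16 * 16))    # matching 16x16 high-resolution patches

      index = NearestNeighbors(n_neighbors=1).fit(lr_examples)

      def super_resolve_patch(lr_patch):
          """Return the HR example paired with the nearest LR example."""
          _, idx = index.kneighbors(lr_patch.reshape(1, -1))
          return hr_examples[idx[0, 0]].reshape(16, 16)

      estimate = super_resolve_patch(rng.random((8, 8)))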

  10. Land Surface Microwave Emissivities Derived from AMSR-E and MODIS Measurements with Advanced Quality Control

    NASA Technical Reports Server (NTRS)

    Moncet, Jean-Luc; Liang, Pan; Galantowicz, John F.; Lipton, Alan E.; Uymin, Gennady; Prigent, Catherine; Grassotti, Christopher

    2011-01-01

    A microwave emissivity database has been developed with data from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and with ancillary land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the same Aqua spacecraft. The primary intended application of the database is to provide surface emissivity constraints in atmospheric and surface property retrieval or assimilation. An additional application is to serve as a dynamic indicator of land surface properties relevant to climate change monitoring. The precision of the emissivity data is estimated to be significantly better than in prior databases from other sensors due to the precise collocation with high-quality MODIS LST data and due to the quality control features of our data analysis system. The accuracy of the emissivities in deserts and semi-arid regions is enhanced by applying, in those regions, a version of the emissivity retrieval algorithm that accounts for the penetration of microwave radiation through dry soil with diurnally varying vertical temperature gradients. These results suggest that this penetration effect is more widespread and more significant to interpretation of passive microwave measurements than had been previously established. Emissivity coverage in areas where persistent cloudiness interferes with the availability of MODIS LST data is achieved using a classification-based method to spread emissivity data from less-cloudy areas that have similar microwave surface properties. Evaluations and analyses of the emissivity products over homogeneous snow-free areas are presented, including application to retrieval of soil temperature profiles. Spatial inhomogeneities are the largest in the vicinity of large water bodies due to the large water/land emissivity contrast and give rise to large apparent temporal variability in the retrieved emissivities when satellite footprint locations vary over time. This issue will be dealt with in the future by including a water fraction correction. Also note that current reliance on the MODIS day-night algorithm as a source of LST limits the coverage of the database in the Polar Regions. We will consider relaxing the current restriction as part of future development.

  11. Expression and Organization of Geographic Spatial Relations Based on Topic Maps

    NASA Astrophysics Data System (ADS)

    Liang, H. J.; Wang, H.; Cui, T. J.; Guo, J. F.

    2017-09-01

    Spatial relations are an important component of Geographic Information Science and spatial databases. There has been much research on spatial relations, and many different spatial relations have been proposed. The relationships among these spatial relations, such as hierarchies, are complex, which makes their application and teaching difficult. This paper summarizes some common spatial relations, extracts their topic types, association types and resource types using Topic Maps technology, and builds the various relationships among these spatial relations. Finally, the paper uses Java and Ontopia to build a topic map of these common spatial relations, forming a complex knowledge network of spatial relations and enabling their effective management and retrieval.

  12. Navigating spatial and temporal complexity in developing a long-term land use database for an agricultural watershed

    USDA-ARS's Scientific Manuscript database

    No comprehensive protocols exist for the collection, standardization, and storage of agronomic management information into a database that preserves privacy, maintains data uncertainty, and translates everyday decisions into quantitative values. This manuscript describes the development of a databas...

  13. The Mars Climate Database (MCD version 5.2)

    NASA Astrophysics Data System (ADS)

    Millour, E.; Forget, F.; Spiga, A.; Navarro, T.; Madeleine, J.-B.; Montabone, L.; Pottier, A.; Lefevre, F.; Montmessin, F.; Chaufray, J.-Y.; Lopez-Valverde, M. A.; Gonzalez-Galindo, F.; Lewis, S. R.; Read, P. L.; Huot, J.-P.; Desjean, M.-C.; MCD/GCM development Team

    2015-10-01

    The Mars Climate Database (MCD) is a database of meteorological fields derived from General Circulation Model (GCM) numerical simulations of the Martian atmosphere and validated using available observational data. The MCD includes complementary post-processing schemes such as high spatial resolution interpolation of environmental data and means of reconstructing the variability thereof. We have just completed (March 2015) the generation of a new version of the MCD, MCD version 5.2.

  14. Spatial and temporal air quality pattern recognition using environmetric techniques: a case study in Malaysia.

    PubMed

    Syed Abdul Mutalib, Sharifah Norsukhairin; Juahir, Hafizan; Azid, Azman; Mohd Sharif, Sharifah; Latif, Mohd Talib; Aris, Ahmad Zaharin; Zain, Sharifuddin M; Dominick, Doreena

    2013-09-01

    The objective of this study is to identify spatial and temporal patterns in the air quality at three selected Malaysian air monitoring stations based on an eleven-year database (January 2000-December 2010). Four statistical methods, Discriminant Analysis (DA), Hierarchical Agglomerative Cluster Analysis (HACA), Principal Component Analysis (PCA) and Artificial Neural Networks (ANNs), were selected to analyze the datasets of five air quality parameters, namely SO2, NO2, O3, CO and particulate matter with a diameter below 10 μm (PM10). The three selected air monitoring stations share the characteristic of being located in highly urbanized areas and are surrounded by a number of industries. The DA results show that spatial characterizations allow successful discrimination between the three stations, while HACA reveals temporal patterns in the monthly and yearly factor analyses that correlate with the severe haze episodes that have occurred in the country at certain periods. The PCA results show that the major sources of air pollution are the combustion of fossil fuels in motor vehicles and industrial activities. The spatial pattern recognition (S-ANN) results show better prediction performance in discriminating between the regions, with an excellent percentage of correct classification compared to DA. This study demonstrates the necessity and usefulness of environmetric techniques for interpreting large datasets in order to obtain better information about air quality patterns based on spatial and temporal characterizations at the selected air monitoring stations.
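
    Of the four methods, PCA is the simplest to sketch compactly. The hedged example below runs a two-component PCA on the five monitored parameters with scikit-learn; the CSV layout is an assumption and none of the study's actual preprocessing (e.g. treatment of missing records) is reproduced.

      # Two-component PCA over the five air quality parameters (illustrative only).
      import pandas as pd
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      data = pd.read_csv("air_quality_2000_2010.csv")      # hypothetical export of the 11-year records
      params = ["SO2", "NO2", "O3", "CO", "PM10"]

      pca = PCA(n_components=2)
      pca.fit(StandardScaler().fit_transform(data[params].dropna()))

      print(pca.explained_variance_ratio_)   # share of variance captured by PC1 and PC2
      print(pca.components_)                 # loadings linking each component back to the pollutants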

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setio, Arnaud A. A., E-mail: arnaud.arindraadiyoso@radboudumc.nl; Jacobs, Colin; Gelderblom, Jaap

    Purpose: Current computer-aided detection (CAD) systems for pulmonary nodules in computed tomography (CT) scans have a good performance for relatively small nodules, but often fail to detect the much rarer larger nodules, which are more likely to be cancerous. We present a novel CAD system specifically designed to detect solid nodules larger than 10 mm. Methods: The proposed detection pipeline is initiated by a three-dimensional lung segmentation algorithm optimized to include large nodules attached to the pleural wall via morphological processing. An additional preprocessing step is used to mask out structures outside the pleural space to ensure that pleural and parenchymal nodules have a similar appearance. Next, nodule candidates are obtained via a multistage process of thresholding and morphological operations, to detect both larger and smaller candidates. After segmenting each candidate, a set of 24 features based on intensity, shape, blobness, and spatial context are computed. A radial basis support vector machine (SVM) classifier was used to classify nodule candidates, and performance was evaluated using ten-fold cross-validation on the full publicly available Lung Image Database Consortium database. Results: The proposed CAD system reaches a sensitivity of 98.3% (234/238) and 94.1% (224/238) for large nodules at an average of 4.0 and 1.0 false positives/scan, respectively. Conclusions: The authors conclude that the proposed dedicated CAD system for large pulmonary nodules can identify the vast majority of highly suspicious lesions in thoracic CT scans with a small number of false positives.
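
    The classification stage described above (an RBF-kernel SVM over 24 candidate features, scored by ten-fold cross-validation) maps onto a few lines of scikit-learn. The sketch below uses a synthetic feature matrix as a stand-in for the intensity, shape, blobness and spatial-context features and only illustrates the evaluation pattern, not the published system.

      # RBF-SVM candidate classification with 10-fold cross-validation (synthetic features).
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.random((1000, 24))            # 24 features per nodule candidate (synthetic)
      y = rng.integers(0, 2, size=1000)     # 1 = true nodule, 0 = false-positive candidate

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
      print(scores.mean(), scores.std())    # random labels here, so AUC will hover near 0.5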

  16. A database and tool for boundary conditions for regional air quality modeling: description and evaluation

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.

    2014-02-01

    Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying lateral boundary conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2001-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone and carbon monoxide vertical profiles. The results show performance is largely within uncertainty estimates for ozone from the Ozone Monitoring Instrument and carbon monoxide from the Measurements Of Pollution In The Troposphere (MOPITT), but there were some notable biases compared with Tropospheric Emission Spectrometer (TES) ozone. Compared with TES, our ozone predictions are high-biased in the upper troposphere, particularly in the south during January. This publication documents the global simulation database, the tool for conversion to LBC, and the evaluation of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.

  17. gPhoton: THE GALEX PHOTON DATA ARCHIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Million, Chase; Fleming, Scott W.; Shiao, Bernie

    gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive for Space Telescopes. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.

  18. The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review.

    PubMed

    Cogné, M; Taillade, M; N'Kaoua, B; Tarruella, A; Klinger, E; Larrue, F; Sauzéon, H; Joseph, P-A; Sorita, E

    2017-06-01

    Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities, and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool, which enables on the one hand the assessment of the cognitive functions involved in spatial navigation, and on the other the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into them to be manipulated empirically, but the impact of such manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. On the basis of 6521 abstracts identified in 2 databases (PubMed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue"; among these, 63 articles were included in the present qualitative analysis. Unlike pencil-and-paper tests, virtual reality is useful for assessing large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation. Copyright © 2016. Published by Elsevier Masson SAS.

  19. Physical Retrieval of Surface Emissivity Spectrum from Hyperspectral Infrared Radiances

    NASA Technical Reports Server (NTRS)

    Li, Jun; Weisz, Elisabeth; Zhou, Daniel K.

    2007-01-01

    Retrieval of temperature, moisture profiles and surface skin temperature from hyperspectral infrared (IR) radiances requires spectral information about the surface emissivity. Using constant or inaccurate surface emissivities typically results in large retrieval errors, particularly over semi-arid or arid areas where the variation in emissivity spectrum is large both spectrally and spatially. In this study, a physically based algorithm has been developed to retrieve a hyperspectral IR emissivity spectrum simultaneously with the temperature and moisture profiles, as well as the surface skin temperature. To make the solution stable and efficient, the hyperspectral emissivity spectrum is represented by eigenvectors, derived from the laboratory measured hyperspectral emissivity database, in the retrieval process. Experience with AIRS (Atmospheric InfraRed Sounder) radiances shows that a simultaneous retrieval of the emissivity spectrum and the sounding improves the surface skin temperature as well as temperature and moisture profiles, particularly in the near surface layer.

  20. Spatial Query for Planetary Data

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Crockett, Thomas M.; Powell, Mark W.; Joswig, Joseph C.; Fox, Jason M.

    2011-01-01

    Science investigators need to quickly and effectively assess past observations of specific locations on a planetary surface. This innovation involves a location-based search technology that was adapted and applied to planetary science data to support a spatial query capability for mission operations software. High-performance location-based searching requires the use of spatial data structures for database organization. Spatial data structures are designed to organize datasets based on their coordinates in a way that is optimized for location-based retrieval. The particular spatial data structure that was adapted for planetary data search is the R+ tree.
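
    A concrete way to see what a spatial index buys is to index observation footprints by bounding box and ask which footprints intersect a region of interest. The hedged sketch below uses the Python rtree package (an R-tree built on libspatialindex) as a stand-in for the R+ tree named above; the observation footprints are invented.

      # Index observation footprints and retrieve those covering a query region.
      from rtree import index

      observations = {
          1: (10.0, 20.0, 10.5, 20.4),   # id -> (minx, miny, maxx, maxy) footprint
          2: (10.3, 20.2, 10.8, 20.9),
          3: (40.0, -5.0, 41.0, -4.0),
      }

      idx = index.Index()
      for obs_id, bbox in observations.items():
          idx.insert(obs_id, bbox)

      # Which past observations overlap this region of interest?
      hits = list(idx.intersection((10.2, 20.1, 10.4, 20.3)))
      print(hits)   # ids 1 and 2 (order may vary)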

  1. Tags Extraction from Spatial Documents in Search Engines

    NASA Astrophysics Data System (ADS)

    Borhaninejad, S.; Hakimpour, F.; Hamzei, E.

    2015-12-01

    Nowadays, selective access to information on the Web is provided by search engines, but in cases where the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information contained in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: a crawler, a database and a user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an application server simulating the Web. Our system, acting as a spatial search engine, provides search capability over GML documents and thus takes an important step toward improving the efficiency of search engines.
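
    The crawler's extraction step amounts to parsing each discovered GML document, pulling out coordinate elements and free text, and handing the result to the database component for indexing. The sketch below is a simplified, assumed version of that step using Python's standard xml.etree module; the file name and the storage call are placeholders.

      # Parse a GML document: collect gml:pos/gml:coordinates values and free text for indexing.
      import xml.etree.ElementTree as ET

      GML_NS = "{http://www.opengis.net/gml}"

      def extract_gml(path):
          root = ET.parse(path).getroot()
          positions = [el.text.strip() for el in root.iter(GML_NS + "pos") if el.text]
          coords = [el.text.strip() for el in root.iter(GML_NS + "coordinates") if el.text]
          keywords = " ".join(t.strip() for t in root.itertext() if t.strip())
          return {"positions": positions + coords, "text": keywords}

      record = extract_gml("sample_feature.gml")   # hypothetical file discovered by the crawler
      # store_in_database(record)                  # placeholder for the database component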

  2. Paradise: A Parallel Information System for EOSDIS

    NASA Technical Reports Server (NTRS)

    DeWitt, David

    1996-01-01

    The Paradise project was begun in 1993 in order to explore the application of the parallel and object-oriented database system technology developed as part of the Gamma, Exodus, and Shore projects to the design and development of a scalable, geo-spatial database system for storing both massive spatial and satellite image data sets. Paradise is based on an object-relational data model. In addition to the standard attribute types such as integers, floats, strings and time, Paradise also provides a set of spatial and multimedia data types, designed to facilitate the storage and querying of complex spatial and multimedia data sets. An individual tuple can contain any combination of this rich set of data types. For example, in the EOSDIS context, a tuple might mix terrain and map data for an area along with the latest satellite weather photo of the area. The use of a geo-spatial metaphor simplifies the task of fusing disparate forms of data from multiple data sources including text, image, map, and video data sets.

  3. A Geospatial Database that Supports Derivation of Climatological Features of Severe Weather

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Ansari, S.; Del Greco, S.

    2007-12-01

    The Severe Weather Data Inventory (SWDI) at NOAA's National Climatic Data Center (NCDC) provides user access to archives of several datasets critical to the detection and evaluation of severe weather. These datasets include archives of: · NEXRAD Level-III point features describing general storm structure, hail, mesocyclone and tornado signatures; · the National Weather Service Storm Events Database; · National Weather Service Local Storm Reports collected from storm spotters; · National Weather Service Warnings; · lightning strikes from Vaisala's National Lightning Detection Network (NLDN). SWDI archives all of these datasets in a spatial database that allows for convenient searching and subsetting. These data are accessible via the NCDC web site, Web Feature Services (WFS) or automated web services. The results of interactive web page queries may be saved in a variety of formats, including plain text, XML, Google Earth's KMZ, standards-based NetCDF and Shapefile. NCDC's Storm Risk Assessment Project (SRAP) uses data from the SWDI database to derive gridded climatology products that show the spatial distributions of the frequency of various events. SRAP can also relate SWDI events to other spatial data such as roads, population, watersheds, and other geographic, sociological, or economic data to derive products that are useful in municipal planning, emergency management, the insurance industry, and other areas where there is a need to quantify and qualify how severe weather patterns affect people and property.

  4. Spatial Databases for CalVO Volcanoes: Current Status and Future Directions

    NASA Astrophysics Data System (ADS)

    Ramsey, D. W.

    2013-12-01

    The U.S. Geological Survey (USGS) California Volcano Observatory (CalVO) aims to advance scientific understanding of volcanic processes and to lessen harmful impacts of volcanic activity in California and Nevada. Within CalVO's area of responsibility, ten volcanoes or volcanic centers have been identified by a national volcanic threat assessment in support of developing the U.S. National Volcano Early Warning System (NVEWS) as posing moderate, high, or very high threats to surrounding communities based on their recent eruptive histories and their proximity to vulnerable people, property, and infrastructure. To better understand the extent of potential hazards at these and other volcanoes and volcanic centers, the USGS Volcano Science Center (VSC) is continually compiling spatial databases of volcano information, including: geologic mapping, hazards assessment maps, locations of geochemical and geochronological samples, and the distribution of volcanic vents. This digital mapping effort has been ongoing for over 15 years and early databases are being converted to match recent datasets compiled with new data models designed for use in: 1) generating hazard zones, 2) evaluating risk to population and infrastructure, 3) numerical hazard modeling, and 4) display and query on the CalVO as well as other VSC and USGS websites. In these capacities, spatial databases of CalVO volcanoes and their derivative map products provide an integrated and readily accessible framework of VSC hazards science to colleagues, emergency managers, and the general public.

  5. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    NASA Astrophysics Data System (ADS)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. Of equal importance, however, is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the size of errors from different sources and their implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transections, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among the different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE, 0.79 pH units, came from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on prediction uncertainties (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analyses; however, the error size and spatial distribution of global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.

  6. Improvement, Verification, and Refinement of Spatially-Explicit Exposure Models in Risk Assessment - FishRand Spatially-Explicit Bioaccumulation Model Demonstration

    DTIC Science & Technology

    2015-08-01

    [Record contains only report fragments.] Figure 4: data-based proportion of DDD, DDE and DDT in total DDx in fish and sediment by ... Acronyms: DDD, dichlorodiphenyldichloroethane; DDE, dichlorodiphenyldichloroethylene; DDT, dichlorodiphenyltrichloroethane; DoD, Department of Defense; ERM, ... The spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the [fragment ends].

  7. Environmental Controls on Multi-Scale Soil Nutrient Variability in the Tropics: the Importance of Land-Cover Change

    NASA Astrophysics Data System (ADS)

    Holmes, K. W.; Kyriakidis, P. C.; Chadwick, O. A.; Matricardi, E.; Soares, J. V.; Roberts, D. A.

    2003-12-01

    The natural controls on soil variability and the spatial scales at which correlation exists among soil and environmental variables are critical information for evaluating the effects of deforestation. We detect different spatial scales of variability in soil nutrient levels over a large region (hundreds of thousands of km2) in the Amazon, analyze correlations among soil properties at these different scales, and evaluate scale-specific relationships among soil properties and the factors potentially driving soil development. Statistical relationships among physical drivers of soil formation, namely geology, precipitation, terrain attributes, classified soil types, and land cover derived from remote sensing, were included to determine which factors are related to soil biogeochemistry at each spatial scale. Surface and subsurface soil profile data from a 3000 sample database collected in Rondônia, Brazil, were used to investigate patterns in pH, phosphorus, nitrogen, organic carbon, effective cation exchange capacity, calcium, magnesium, potassium, aluminum, sand, and clay in this environment grading from closed canopy tropical forest to savanna. We focus on pH in this presentation for simplicity, because pH is the single most important soil characteristic for determining the chemical environment of higher plants and soil microbial activity. We determined four spatial scales which characterize integrated patterns of soil chemistry: less than 3 km; 3 to 10 km; 10 to 68 km; and from 68 to 550 km (extent of study area). Although the finest observable scale was fixed by the field sampling density, the coarser scales were determined from relationships in the data through coregionalization modeling, rather than being imposed by the researcher. Processes which affect soils over short distances, such as land cover and terrain attributes, were good predictors of fine scale spatial components of nutrients; processes which affect soils over very large distances, such as precipitation and geology, were better predictors at coarse spatial scales. However, this result may be affected by the resolution of the available predictor maps. Land-cover change exerted a strong influence on soil chemistry at fine spatial scales, and had progressively less of an effect at coarser scales. It is important to note that land cover, and interactions among land cover and the other predictors, continued to be a significant predictor of soil chemistry at every spatial scale up to hundreds of thousands of kilometers.

  8. Remote sensing information sciences research group: Browse in the EOS era

    NASA Technical Reports Server (NTRS)

    Estes, John E.; Star, Jeffrey L.

    1989-01-01

    The problem of science data browse was examined. Given the tremendous data volumes that are planned for future space missions, particularly the Earth Observing System in the late 1990's, the need for access to large spatial databases must be understood. Work was continued to refine the concept of data browse. Further, software was developed to provide a testbed of the concepts, both to locate possibly interesting data, as well as view a small portion of the data. Build II was placed on a minicomputer and a PC in the laboratory, and provided accounts for use in the testbed. Consideration of the testbed software as an element of in-house data management plans was begun.

  9. Ultra-low field nuclear magnetic resonance and magnetic resonance imaging to discriminate and identify materials

    DOEpatents

    Kraus, Robert H.; Matlashov, Andrei N.; Espy, Michelle A.; Volegov, Petr L.

    2010-03-30

    An ultra-low magnetic field NMR system can non-invasively examine containers. Database matching techniques can then identify hazardous materials within the containers. Ultra-low field NMR systems are ideal for this purpose because they do not require large powerful magnets and because they can examine materials enclosed in conductive shells such as lead shells. The NMR examination technique can be combined with ultra-low field NMR imaging, where an NMR image is obtained and analyzed to identify target volumes. Spatial sensitivity encoding can also be used to identify target volumes. After the target volumes are identified the NMR measurement technique can be used to identify their contents.

  10. The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States

    USGS Publications Warehouse

    Horton, John D.; San Juan, Carma A.; Stoeser, Douglas B.

    2017-06-30

    The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States (https://doi.org/10.5066/F7WH2N65) represents a seamless, spatial database of 48 State geologic maps that range from 1:50,000 to 1:1,000,000 scale. A national digital geologic map database is essential in interpreting other datasets that support numerous types of national-scale studies and assessments, such as those that provide geochemistry, remote sensing, or geophysical data. The SGMC is a compilation of the individual U.S. Geological Survey releases of the Preliminary Integrated Geologic Map Databases for the United States. The SGMC geodatabase also contains updated data for seven States and seven entirely new State geologic maps that have been added since the preliminary databases were published. Numerous errors have been corrected and enhancements added to the preliminary datasets using thorough quality assurance/quality control procedures. The SGMC is not a truly integrated geologic map database because geologic units have not been reconciled across State boundaries. However, the geologic data contained in each State geologic map have been standardized to allow spatial analyses of lithology, age, and stratigraphy at a national scale.

  11. Architecture of a spatial data service system for statistical analysis and visualization of regional climate changes

    NASA Astrophysics Data System (ADS)

    Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of a virtual research environment (VRE) general architecture for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available on the node. It also contains geospatial data processing services (WPS) based on a modular computing backend that implements the statistical processing functionality and thus provides analysis of large datasets, with results available for visualization and for export to files in standard formats (XML, binary, etc.). Some cartographical web services have been developed in a prototype of the system to provide capabilities for working with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
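
    A client interacts with a node's cartographical services through standard OGC requests. The sketch below builds a hedged example of a WMS 1.3.0 GetMap call with the Python requests library; the endpoint URL and the layer name are placeholders rather than the project's actual services.

      # Request a rendered map image from a WMS endpoint (placeholder URL and layer).
      import requests

      params = {
          "SERVICE": "WMS",
          "VERSION": "1.3.0",
          "REQUEST": "GetMap",
          "LAYERS": "temperature_anomaly",   # hypothetical layer name
          "CRS": "EPSG:4326",
          "BBOX": "50,60,70,110",            # axis order for EPSG:4326 in WMS 1.3.0 is lat,lon
          "WIDTH": 800,
          "HEIGHT": 400,
          "FORMAT": "image/png",
      }
      resp = requests.get("https://example.org/wms", params=params, timeout=30)
      with open("anomaly_map.png", "wb") as f:
          f.write(resp.content)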

  12. Spatial trends in leaf size of Amazonian rainforest trees

    NASA Astrophysics Data System (ADS)

    Malhado, A. C. M.; Malhi, Y.; Whittaker, R. J.; Ladle, R. J.; Ter Steege, H.; Aragão, L. E. O. C.; Quesada, C. A.; Araujo-Murakami, A.; Phillips, O. L.; Peacock, J.; Lopez-Gonzalez, G.; Baker, T. R.; Butt, N.; Anderson, L. O.; Arroyo, L.; Almeida, S.; Higuchi, N.; Killeen, T. J.; Monteagudo, A.; Neill, D.; Pitman, N.; Prieto, A.; Salomão, R. P.; Silva, N.; Vásquez-Martínez, R.; Laurance, W. F.

    2009-02-01

    Leaf size influences many aspects of tree function such as rates of transpiration and photosynthesis and, consequently, often varies in a predictable way in response to environmental gradients. The recent development of pan-Amazonian databases based on permanent botanical plots (e.g. RAINFOR, ATDN) has now made it possible to assess trends in leaf size across environmental gradients in Amazonia. Previous plot-based studies have shown that the community structure of Amazonian trees breaks down into at least two major ecological gradients corresponding with variations in soil fertility (decreasing from south to northeast) and length of the dry season (increasing from northwest to south and east). Here we describe the geographic distribution of leaf size categories based on 121 plots distributed across eight South American countries. We find that, as predicted, the Amazon forest is predominantly populated by tree species and individuals in the mesophyll size class (20.25-182.25 cm2). The geographic distribution of species and individuals with large leaves (>20.25 cm2) is complex but is generally characterized by a higher proportion of such trees in the north-west of the region. Spatially corrected regressions reveal weak correlations between the proportion of large-leaved species and metrics of water availability. We also find a significant negative relationship between leaf size and wood density.

  13. Spatial trends in leaf size of Amazonian rainforest trees

    NASA Astrophysics Data System (ADS)

    Malhado, A. C. M.; Malhi, Y.; Whittaker, R. J.; Ladle, R. J.; Ter Steege, H.; Phillips, O. L.; Butt, N.; Aragão, L. E. O. C.; Quesada, C. A.; Araujo-Murakami, A.; Arroyo, L.; Peacock, J.; Lopez-Gonzalez, G.; Baker, T. R.; Anderson, L. O.; Almeida, S.; Higuchi, N.; Killeen, T. J.; Monteagudo, A.; Neill, D.; Pitman, N.; Prieto, A.; Salomão, R. P.; Vásquez-Martínez, R.; Laurance, W. F.

    2009-08-01

    Leaf size influences many aspects of tree function such as rates of transpiration and photosynthesis and, consequently, often varies in a predictable way in response to environmental gradients. The recent development of pan-Amazonian databases based on permanent botanical plots has now made it possible to assess trends in leaf size across environmental gradients in Amazonia. Previous plot-based studies have shown that the community structure of Amazonian trees breaks down into at least two major ecological gradients corresponding with variations in soil fertility (decreasing from southwest to northeast) and length of the dry season (increasing from northwest to south and east). Here we describe the geographic distribution of leaf size categories based on 121 plots distributed across eight South American countries. We find that the Amazon forest is predominantly populated by tree species and individuals in the mesophyll size class (20.25-182.25 cm2). The geographic distribution of species and individuals with large leaves (>20.25 cm2) is complex but is generally characterized by a higher proportion of such trees in the northwest of the region. Spatially corrected regressions reveal weak correlations between the proportion of large-leaved species and metrics of water availability. We also find a significant negative relationship between leaf size and wood density.

  14. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques applicable to Geographic Information Systems (GIS) is examined. Questions involving performance, modifications to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.
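
    As a concrete reference point for the quadtree structures mentioned above, the sketch below implements a minimal point quadtree with insertion and rectangular range query. It is only an illustration of the data structure, not the knowledge-based system the report investigates; the capacity and coordinates are arbitrary.

      # Minimal point quadtree: leaves hold up to `capacity` points, then subdivide.
      class QuadTree:
          def __init__(self, x0, y0, x1, y1, capacity=4):
              self.bounds = (x0, y0, x1, y1)
              self.capacity = capacity
              self.points = []
              self.children = None

          def _contains(self, x, y):
              x0, y0, x1, y1 = self.bounds
              return x0 <= x < x1 and y0 <= y < y1

          def insert(self, x, y):
              if not self._contains(x, y):
                  return False
              if self.children is None and len(self.points) < self.capacity:
                  self.points.append((x, y))
                  return True
              if self.children is None:                      # split and push points down
                  x0, y0, x1, y1 = self.bounds
                  mx, my = (x0 + x1) / 2, (y0 + y1) / 2
                  self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                                   QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
                  for px, py in self.points:
                      self._insert_child(px, py)
                  self.points = []
              return self._insert_child(x, y)

          def _insert_child(self, x, y):
              return any(child.insert(x, y) for child in self.children)

          def query(self, qx0, qy0, qx1, qy1):
              x0, y0, x1, y1 = self.bounds
              if qx1 < x0 or qx0 >= x1 or qy1 < y0 or qy0 >= y1:
                  return []                                   # query window misses this node
              hits = [(x, y) for (x, y) in self.points if qx0 <= x <= qx1 and qy0 <= y <= qy1]
              if self.children:
                  for child in self.children:
                      hits.extend(child.query(qx0, qy0, qx1, qy1))
              return hits

      tree = QuadTree(0, 0, 100, 100)
      for pt in [(10, 10), (12, 14), (80, 80), (81, 82), (50, 50)]:
          tree.insert(*pt)
      print(tree.query(0, 0, 20, 20))   # -> [(10, 10), (12, 14)]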

  15. Scale effects of STATSGO and SSURGO databases on flow and water quality predictions

    USDA-ARS's Scientific Manuscript database

    Soil information is one of the crucial inputs needed to assess the impacts of existing and alternative agricultural management practices on water quality. Therefore, it is important to understand the effects of spatial scale at which soil databases are developed on water quality evaluations. In the ...

  16. Expansion of the MANAGE database with forest and drainage studies

    USDA-ARS's Scientific Manuscript database

    The “Measured Annual Nutrient loads from AGricultural Environments” (MANAGE) database was published in 2006 to expand an early 1980’s compilation of nutrient export (load) data from agricultural land uses at the field or farm spatial scale. Then in 2008, MANAGE was updated with 15 additional studie...

  17. National database for calculating fuel available to wildfires

    Treesearch

    Donald McKenzie; Nancy H.F. French; Roger D. Ottmar

    2012-01-01

    Wildfires are increasingly emerging as an important component of Earth system models, particularly those that involve emissions from fires and their effects on climate. Currently, there are few resources available for estimating emissions from wildfires in real time, at subcontinental scales, in a spatially consistent manner. Developing subcontinental-scale databases...

  18. An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.

    PubMed

    K, Manasa; Channappayya, Sumohana S

    2016-06-01

    We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λmin) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, together with the correlation between λmin of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL Polimi SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow any robust FR spatial distortion metric to be used for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
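
    The per-patch statistics named above (the coefficient of variation of flow magnitudes and the smallest eigenvalue of the patch's flow covariance) are easy to compute. The NumPy sketch below shows one hedged way to do so on synthetic flow patches; patch extraction, the λmin correlation term and the pooling heuristic are omitted, so this is not the full flow similarity index.

      # Per-patch optical-flow statistics: CV of magnitudes and the minimum eigenvalue
      # of the 2x2 flow covariance. Synthetic patches stand in for real optical flow.
      import numpy as np

      def patch_flow_stats(flow_patch):
          """flow_patch: (H, W, 2) array of optical-flow vectors for one local patch."""
          vectors = flow_patch.reshape(-1, 2)
          magnitudes = np.linalg.norm(vectors, axis=1)
          mean, std = magnitudes.mean(), magnitudes.std()
          cv = std / (mean + 1e-12)                          # coefficient of variation
          eigvals = np.linalg.eigvalsh(np.cov(vectors, rowvar=False))
          return cv, eigvals.min()                           # (CV, lambda_min)

      rng = np.random.default_rng(0)
      ref_patch = rng.normal(size=(16, 16, 2))
      dst_patch = ref_patch + rng.normal(scale=0.2, size=(16, 16, 2))   # "distorted" flow

      cv_ref, _ = patch_flow_stats(ref_patch)
      cv_dst, _ = patch_flow_stats(dst_patch)
      temporal_term = abs(cv_dst - cv_ref)   # one ingredient of the temporal distortion score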

  19. An online spatial database of Australian Indigenous Biocultural Knowledge for contemporary natural and cultural resource management.

    PubMed

    Pert, Petina L; Ens, Emilie J; Locke, John; Clarke, Philip A; Packer, Joanne M; Turpin, Gerry

    2015-11-15

    With growing international calls for the enhanced involvement of Indigenous peoples and their biocultural knowledge in managing conservation and the sustainable use of the physical environment, it is timely to review the available literature and develop cross-cultural approaches to the management of biocultural resources. Online spatial databases are becoming common tools for educating land managers about Indigenous Biocultural Knowledge (IBK), specifically to raise a broad awareness of issues, identify knowledge gaps and opportunities, and to promote collaboration. Here we describe a novel approach to the application of internet and spatial analysis tools that provide an overview of publicly available documented Australian IBK (AIBK) and outline the processes used to develop the online resource. By funding an AIBK working group, the Australian Centre for Ecological Analysis and Synthesis (ACEAS) provided a unique opportunity to bring together cross-cultural, cross-disciplinary and trans-organizational contributors who developed these resources. Without such an intentionally collaborative process, this unique tool would not have been developed. The tool developed through this process is derived from a spatial and temporal literature review, case studies and a compilation of methods, as well as other relevant AIBK papers. The online resource illustrates the depth and breadth of documented IBK and identifies opportunities for further work, partnerships and investment for the benefit of not only Indigenous Australians, but all Australians. The database currently includes links to over 1500 publicly available IBK documents, of which 568 are geo-referenced and were mapped. It is anticipated that as awareness of the online resource grows, more documents will be provided through the website to build the database. It is envisaged that this will become a well-used tool, integral to future natural and cultural resource management and maintenance. Copyright © 2015. Published by Elsevier B.V.

  20. The Northern Circumpolar Soil Carbon Database: spatially distributed datasets of soil coverage and soil carbon storage in the northern permafrost regions

    NASA Astrophysics Data System (ADS)

    Hugelius, G.; Tarnocai, C.; Broll, G.; Canadell, J. G.; Kuhry, P.; Swanson, D. K.

    2012-08-01

    High latitude terrestrial ecosystems are key components in the global carbon (C) cycle. Estimates of global soil organic carbon (SOC), however, do not include updated estimates of SOC storage in permafrost-affected soils or representation of the unique pedogenic processes that affect these soils. The Northern Circumpolar Soil Carbon Database (NCSCD) was developed to quantify the SOC stocks in the circumpolar permafrost region (18.7 × 106 km2). The NCSCD is a polygon-based digital database compiled from harmonized regional soil classification maps in which data on soil order coverage has been linked to pedon data (n = 1647) from the northern permafrost regions to calculate SOC content and mass. In addition, new gridded datasets at different spatial resolutions have been generated to facilitate research applications using the NCSCD (standard raster formats for use in Geographic Information Systems and Network Common Data Form files common for applications in numerical models). This paper describes the compilation of the NCSCD spatial framework, the soil sampling and soil analyses procedures used to derive SOC content in pedons from North America and Eurasia and the formatting of the digital files that are available online. The potential applications and limitations of the NCSCD in spatial analyses are also discussed. The database has the doi:10.5879/ecds/00000001. An open access data-portal with all the described GIS-datasets is available online at: http://dev1.geo.su.se/bbcc/dev/ncscd/.

  1. The Northern Circumpolar Soil Carbon Database: spatially distributed datasets of soil coverage and soil carbon storage in the northern permafrost regions

    NASA Astrophysics Data System (ADS)

    Hugelius, G.; Tarnocai, C.; Broll, G.; Canadell, J. G.; Kuhry, P.; Swanson, D. K.

    2013-01-01

    High-latitude terrestrial ecosystems are key components in the global carbon (C) cycle. Estimates of global soil organic carbon (SOC), however, do not include updated estimates of SOC storage in permafrost-affected soils or representation of the unique pedogenic processes that affect these soils. The Northern Circumpolar Soil Carbon Database (NCSCD) was developed to quantify the SOC stocks in the circumpolar permafrost region (18.7 × 10⁶ km²). The NCSCD is a polygon-based digital database compiled from harmonized regional soil classification maps in which data on soil order coverage have been linked to pedon data (n = 1778) from the northern permafrost regions to calculate SOC content and mass. In addition, new gridded datasets at different spatial resolutions have been generated to facilitate research applications using the NCSCD (standard raster formats for use in geographic information systems and Network Common Data Form files common for applications in numerical models). This paper describes the compilation of the NCSCD spatial framework, the soil sampling and soil analytical procedures used to derive SOC content in pedons from North America and Eurasia and the formatting of the digital files that are available online. The potential applications and limitations of the NCSCD in spatial analyses are also discussed. The database has the doi:10.5879/ecds/00000001. An open access data portal with all the described GIS-datasets is available online at: http://www.bbcc.su.se/data/ncscd/.

  2. Spatial digital database for the tectonic map of Southeast Arizona

    USGS Publications Warehouse

    map by Drewes, Harald; digital database by Fields, Robert A.; Hirschberg, Douglas M.; Bolm, Karen S.

    2002-01-01

    A spatial database was created for Drewes' (1980) tectonic map of southeast Arizona; this database supersedes Drewes and others (2001, ver. 1.0). Staff and a contractor at the U.S. Geological Survey in Tucson, Arizona, completed an interim digital geologic map database for the east part of the map in 2001, made revisions to the previously released digital data for the west part of the map (Drewes and others, 2001, ver. 1.0), merged data files for the east and west parts, and added data not previously captured. Digital base map data files (such as topography, roads, towns, rivers and lakes) are not included; they may be obtained from a variety of commercial and government sources. This digital geospatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information in a geographic information system (GIS) for use in spatial analysis. The resulting digital geologic map database can be queried in many ways to produce a variety of geologic maps and derivative products. Because Drewes' (1980) map sheets include additional text and graphics that were not included in this report, scanned images of his maps (i1109_e.jpg, i1109_w.jpg) are included as a courtesy to the reader. This database should not be used or displayed at any scale larger than 1:125,000 (for example, 1:100,000 or 1:24,000). The digital geologic map plot files (i1109_e.pdf and i1109_w.pdf) that are provided herein are representations of the database (see Appendix A). The map area is located in southeastern Arizona (fig. 1). This report describes the map units (from Drewes, 1980) and the methods used to convert the geologic map data into a digital format, documents the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. The manuscript and digital data review by Helen Kayser (Information Systems Support, Inc.) is greatly appreciated.

  3. Preliminary surficial geologic map of a Calico Mountains piedmont and part of Coyote Lake, Mojave Desert, San Bernardino County, California

    USGS Publications Warehouse

    Dudash, Stephanie L.

    2006-01-01

    This 1:24,000-scale detailed surficial geologic map and digital database of a Calico Mountains piedmont and part of Coyote Lake in south-central California depicts surficial deposits and generalized bedrock units. The mapping is part of a USGS project to investigate the spatial distribution of deposits linked to changes in climate, to provide framework geology for land use management (http://deserts.wr.usgs.gov), to understand the Quaternary tectonic history of the Mojave Desert, and to provide additional information on the history of Lake Manix, of which Coyote Lake is a sub-basin. Mapping is displayed on parts of four USGS 7.5-minute series topographic maps. The map area lies in the central Mojave Desert of California, northeast of Barstow, Calif., and south of Fort Irwin, Calif., and covers 258 sq. km (99.5 sq. mi). Geologic deposits in the area consist of Paleozoic metamorphic rocks, Mesozoic plutonic rocks, Miocene volcanic rocks, Pliocene-Pleistocene basin fill, and Quaternary surficial deposits. McCulloh (1960, 1965) conducted bedrock mapping, and a generalized version of his maps is compiled into this map. McCulloh's maps contain many bedrock structures within the Calico Mountains that are not shown on the present map. This study resulted in several new findings, including the discovery of previously unrecognized faults, one of which is the Tin Can Alley fault. The north-striking Tin Can Alley fault is part of the Paradise fault zone (Miller and others, 2005), a potentially important feature for studying neo-tectonic strain in the Mojave Desert. Additionally, many Anodonta shells were collected from Coyote Lake lacustrine sediments for radiocarbon dating. Preliminary results support some of Meek's (1999) conclusions on the timing of Mojave River inflow into the Coyote Basin. The database includes information on geologic deposits, samples, and geochronology. The database is distributed in three parts: spatial map-based data, documentation, and printable map graphics of the database. Spatial data are distributed as an ArcInfo personal geodatabase, or as tabular data in Microsoft Access Database (MDB) or dBase (DBF) file formats. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, and Federal Geographic Data Committee (FGDC) metadata for the spatial map information. Map graphics files are distributed as PostScript and Adobe Acrobat Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  4. New Zealand's National Landslide Database

    NASA Astrophysics Data System (ADS)

    Rosser, B.; Dellow, S.; Haubrook, S.; Glassey, P.

    2016-12-01

    Since 1780, landslides have caused an average of about 3 deaths a year in New Zealand and have cost the economy an average of at least NZ$250M/a (0.1% of GDP). To understand the risk posed by landslide hazards to society, a thorough knowledge of where, when and why different types of landslides occur is vital. The main objective for establishing the database was to provide a centralised, national-scale, publicly available database to collate landslide information that could be used for landslide hazard and risk assessment. Design of a national landslide database for New Zealand required consideration of both existing landslide data, stored in a variety of digital formats, and future data yet to be collected. Pre-existing databases were developed and populated with data reflecting the needs of the landslide or hazard project of the time and the database structures then in use. Bringing these data into a single unified database required a new structure capable of storing and delivering data at a variety of scales and accuracies and with different attributes. A "unified data model" was developed to enable the database to hold old and new landslide data irrespective of scale and method of capture. The database contains information on landslide locations and, where available: 1) the timing of landslides and the events that may have triggered them; 2) the type of landslide movement; 3) the volume and area; 4) the source and debris tail; and 5) the impacts caused by the landslide. Information from a variety of sources, including aerial photographs (and other remotely sensed data), field reconnaissance and media accounts, has been collated and is presented for each landslide along with metadata describing the data sources and quality. There are currently nearly 19,000 landslide records in the database, including point locations, polygons of landslide source and deposit areas, and linear features. Several large datasets are awaiting upload, which will bring the total number of landslides to over 100,000. The geo-spatial database is publicly available via the Internet. Software components, including the underlying database (PostGIS), Web Map Server (GeoServer) and web application, use open-source software. The hope is that others will add relevant information to the database as well as download the data contained in it.
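
    The following is a hedged sketch of the kind of spatial query such a PostGIS-backed database supports; the connection details, table name and column names are hypothetical and will differ from the actual New Zealand schema.

        # Hedged sketch of querying a PostGIS landslide table. Connection details,
        # table and column names are illustrative assumptions only.
        import psycopg2

        conn = psycopg2.connect(dbname="landslides", user="reader", password="...", host="localhost")
        cur = conn.cursor()

        # Count landslide points recorded within 50 km of a location of interest,
        # using geography casts so the distance is expressed in metres.
        cur.execute(
            """
            SELECT COUNT(*)
            FROM landslide_points
            WHERE ST_DWithin(geom::geography,
                             ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                             %s)
            """,
            (174.78, -41.29, 50_000),   # lon, lat (Wellington area), radius in metres
        )
        print(cur.fetchone()[0])
        cur.close()
        conn.close()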

  5. Contribution of human, climate and biophysical drivers to the spatial distribution of wildfires in a French Mediterranean area: where do wildfires start and spread?

    NASA Astrophysics Data System (ADS)

    Ruffault, Julien; Mouillot, Florent; Moebius, Flavia

    2013-04-01

    Understanding the contribution of biophysical and human drivers to the spatial distribution of fires at the regional scale has many ecological and economic implications in a context of ongoing global changes. However, these fire drivers often interact in complex ways, such that disentangling and assessing the relative contribution of human vs. biophysical factors remains a major challenge. Indeed, the identification of biophysical conditions that promote fires is confounded by the inherent stochasticity in fire occurrence and fire spread on the one hand, and by the influence of human factors - through both fire ignition and suppression - on the other. Moreover, different factors may drive fire ignition and fire spread, such that the areas with the highest density of ignitions may not coincide with those where large fires occur. In the present study, we investigated the drivers of fire ignition and spread in a Mediterranean area of southern France. We used a 17-year fire database (the PROMETHEE database, 1989-2006) combined with a set of 8 explanatory variables describing the spatial patterns of ignitions, vegetation and fire weather. We first isolated the weather conditions affecting fire occurrence and spread using a statistical model of the weather/fuel water status for each fire event. The results of these statistical models were used to map the fire weather in terms of the average number of days with conditions suitable for burning. Then, we used boosted regression tree (BRT) models to assess the relative importance of the different variables on the distribution of wildfires of different sizes and to assess the relationship between each variable and fire occurrence and spread probabilities. We found that human activities explained up to 50% of the spatial distribution of fire ignitions (SDI). The distribution of large fires was chiefly explained by fuel characteristics (about 40%). Surprisingly, the weather indices explained only 20% of the SDI, and their contribution did not vary with the size of the fire events considered. These results suggest that changes in fuel characteristics and human settlements/activities, rather than weather conditions, are the most likely to modify the future distribution of fires in this Mediterranean area. These conclusions provide useful information on the scenarios that could arise from the interaction of changes in climate and land cover in the Mediterranean area in the near future.
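
    As a rough illustration of the boosted-regression-tree step described above, the sketch below fits a gradient-boosted model of fire occurrence to synthetic data and reports relative variable importances; the predictors and data are placeholders, not the PROMETHEE dataset.

        # Illustrative BRT-style sketch: gradient boosting of fire occurrence on a
        # few hypothetical explanatory variables, then relative importances.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 1, n),      # human influence index (assumed)
            rng.uniform(0, 1, n),      # fuel/vegetation cover (assumed)
            rng.uniform(0, 1, n),      # days with fire-prone weather (assumed)
        ])
        # Synthetic occurrence probability dominated by the first two predictors.
        p = 1 / (1 + np.exp(-(3 * X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2] - 3)))
        y = rng.binomial(1, p)

        brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
        brt.fit(X, y)
        for name, imp in zip(["human", "fuel", "weather"], brt.feature_importances_):
            print(f"{name}: {imp:.2f}")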

  6. Storm-centric view of Tropical Cyclone oceanic wakes

    NASA Astrophysics Data System (ADS)

    Gentemann, C. L.; Scott, J. P.; Smith, D.

    2012-12-01

    Tropical cyclones (TCs) have a dramatic impact on the upper ocean. Storm-generated oceanic mixing, high-amplitude near-inertial currents, upwelling, and heat fluxes often warm or cool the surface ocean temperatures over large regions near tropical cyclones. These SST anomalies occur to the right (Northern Hemisphere) or left (Southern Hemisphere) of the storm track, varying along and across the storm track. These wide swaths of temperature change have been previously documented by in situ field programs as well as IR and visible satellite data. The amplitude and the temporal and spatial variability of these surface temperature anomalies depend primarily upon the storm size, storm intensity, translational velocity, and the underlying ocean conditions. Tropical cyclone 'cold wakes' are usually 2-5 °C cooler than pre-storm SSTs and persist for days to weeks. Since storms that occur in rapid succession typically follow similar paths, the cold wake from one storm can affect the development of subsequent storms. Recent studies, on both warm and cold wakes, have mostly focused on small subsets of global storms because of the amount of work it takes to co-locate different data sources to a storm's location. While a number of hurricane/typhoon websites exist that co-locate various datasets to TC locations, none provides the 3-dimensional temporal and spatial structure of the ocean-atmosphere system necessary to study cold/warm wake development and impact. We are developing a global 3-dimensional storm-centric database for TC research. The database we propose will include in situ data, satellite data, and model analyses. Remote Sensing Systems (RSS) has a widely used storm watch archive which provides users with an interface for visually analyzing collocated NASA Quick Scatterometer (QuikSCAT) winds with GHRSST microwave SSTs and SSM/I, TMI or AMSR-E rain rates for all global tropical cyclones from 1999 to 2009. We will build on this concept of bringing together different data near storm locations when developing the storm-centric database. This database will be made available to researchers via the web display tools previously developed for RSS web pages. The database will provide scientists with a single-format collection of various atmospheric and oceanographic data, and will include all tropical storms since 1998, when passive MW SSTs from the TMI instrument first became available. Initial results showing an analysis of Typhoon Man-Yi will be presented.

  7. On the frequency-magnitude distribution of converging boundaries

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Laura, S.; Heuret, A.; Funiciello, F.

    2011-12-01

    The occurrence of the last mega-thrust earthquake in Japan has clearly demonstrated the high risk posed to society by such events in terms of social and economic losses, even at large spatial scales. The primary component of a balanced and objective mitigation of the impact of these earthquakes is the correct forecast of where such events may occur in the future. To date, there is a wide range of opinions about where mega-thrust earthquakes can occur. Here, we present a detailed statistical analysis of a database of worldwide interplate earthquakes occurring at current subduction zones. The database has been recently published in the framework of the EURYI Project 'Convergent margins and seismogenesis: defining the risk of great earthquakes by using statistical data and modelling', and it provides a unique opportunity to explore in detail the seismogenic process in subducting lithosphere. In particular, the statistical analysis of this database allows us to explore many interesting scientific issues, such as the existence of different frequency-magnitude distributions across trenches, the quantitative characterization of subduction zones that are more likely to produce mega-thrust earthquakes, and the prominent features that characterize converging boundaries with different seismic activity. Beyond their scientific importance, such issues may help improve our mega-thrust earthquake forecasting capability.
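
    A generic example of the frequency-magnitude analysis mentioned here is maximum-likelihood estimation of the Gutenberg-Richter b-value; the sketch below applies Aki's estimator to a synthetic catalogue and is not the authors' own procedure.

        # Generic frequency-magnitude sketch: Aki (1965) maximum-likelihood b-value
        # for a synthetic catalogue above a completeness magnitude.
        import numpy as np

        def b_value(magnitudes, m_c):
            """b = log10(e) / (mean(M) - Mc) for events with M >= Mc
            (magnitude-binning correction omitted for brevity)."""
            m = np.asarray(magnitudes)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - m_c)

        rng = np.random.default_rng(1)
        # Synthetic catalogue drawn from an exponential law equivalent to b = 1.0.
        mags = 5.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
        print(f"Estimated b-value: {b_value(mags, 5.0):.2f}")   # ~1.0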

  8. Estimating regional plant biodiversity with GIS modelling

    Treesearch

    Louis R. Iverson; Anantha M. Prasad

    1998-01-01

    We analyzed a statewide species database together with a county-level geographic information system to build a model based on well-surveyed areas to estimate species richness in less surveyed counties. The model involved GIS (Arc/Info) and statistics (S-PLUS), including spatial statistics (S+SpatialStats).

  9. Modelling the Spread of Farming in the Bantu-Speaking Regions of Africa: An Archaeology-Based Phylogeography

    PubMed Central

    Russell, Thembi; Silva, Fabio; Steele, James

    2014-01-01

    We use archaeological data and spatial methods to reconstruct the dispersal of farming into areas of sub-Saharan Africa now occupied by Bantu language speakers, and introduce a new large-scale radiocarbon database and a new suite of spatial modelling techniques. We also introduce a method of estimating phylogeographic relationships from archaeologically-modelled dispersal maps, with results produced in a format that enables comparison with linguistic and genetic phylogenies. Several hypotheses are explored. The ‘deep split’ hypothesis suggests that an early-branching eastern Bantu stream spread around the northern boundary of the equatorial rainforest, but recent linguistic and genetic work tends not to support this. An alternative riverine/littoral hypothesis suggests that rivers and coastlines facilitated the migration of the first farmers/horticulturalists, with some extending this to include rivers through the rainforest as conduits to East Africa. More recently, research has shown that a grassland corridor opened through the rainforest at around 3000–2500 BP, and the possible effect of this on migrating populations is also explored. Our results indicate that rivers and coasts were important dispersal corridors, but do not resolve the debate about a ‘Deep Split’. Future work should focus on improving the size, quality and geographical coverage of the archaeological 14C database; on augmenting the information base to establish descent relationships between archaeological sites and regions based on shared material cultural traits; and on refining the associated physical geographical reconstructions of changing land cover. PMID:24498213

  10. Modelling the distribution of domestic ducks in Monsoon Asia

    USGS Publications Warehouse

    Van Bockel, Thomas P.; Prosser, Diann; Franceschini, Gianluca; Biradar, Chandra; Wint, William; Robinson, Tim; Gilbert, Marius

    2011-01-01

    Domestic ducks are considered to be an important reservoir of highly pathogenic avian influenza (HPAI), as shown by a number of geospatial studies in which they have been identified as a significant risk factor associated with disease presence. Despite their importance in HPAI epidemiology, their large-scale distribution in Monsoon Asia is poorly understood. In this study, we created a spatial database of domestic duck census data in Asia and used it to train statistical models of domestic duck distributions at a spatial resolution of 1 km. The method was based on a modelling framework used by the Food and Agriculture Organisation to produce the Gridded Livestock of the World (GLW) database, and relies on stratified regression models between domestic duck densities and a set of agro-ecological explanatory variables. We evaluated different ways of stratifying the analysis and of combining the predictions to optimize the goodness of fit of the predictions. We found that domestic duck density could be predicted with reasonable accuracy (the mean RMSE and correlation coefficient between log-transformed observed and predicted densities being 0.58 and 0.80, respectively), using a stratification based on livestock production systems. We tested the use of artificially degraded data on duck distributions in Thailand and Vietnam as training data, and compared the modelled outputs with the original high-resolution data. This showed, for these two countries at least, that these approaches could be used to accurately disaggregate provincial-level (administrative level 1) statistical data to provide high-resolution modelled distributions.
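
    The goodness-of-fit metrics quoted above (RMSE and correlation of log-transformed densities) can be computed as in the sketch below; the observed and predicted values, and the offset used to avoid log(0), are placeholders.

        # Sketch of the reported fit metrics on log-transformed densities.
        import numpy as np

        def log_fit_metrics(observed, predicted, eps=1.0):
            """RMSE and Pearson correlation of log10 densities; eps avoids log(0)."""
            lo = np.log10(np.asarray(observed) + eps)
            lp = np.log10(np.asarray(predicted) + eps)
            rmse = np.sqrt(np.mean((lo - lp) ** 2))
            r = np.corrcoef(lo, lp)[0, 1]
            return rmse, r

        obs = [0, 12, 150, 3400, 80, 950]      # ducks per km^2 (illustrative)
        pred = [1, 20, 100, 2500, 60, 1200]
        rmse, r = log_fit_metrics(obs, pred)
        print(f"RMSE(log10) = {rmse:.2f}, r = {r:.2f}")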

  11. Effective resolution concepts for lidar observations

    NASA Astrophysics Data System (ADS)

    Iarlori, M.; Madonna, F.; Rizi, V.; Trickl, T.; Amodeo, A.

    2015-12-01

    Since its establishment in 2000, EARLINET (European Aerosol Research Lidar NETwork) has provided, through its database, quantitative aerosol properties such as aerosol backscatter and aerosol extinction coefficients, the latter only for stations able to retrieve them independently (from Raman or high-spectral-resolution lidars). These coefficients are stored as vertical profiles, and the EARLINET database also includes details of the range resolution of the vertical profiles. In fact, the algorithms used in lidar data analysis often alter the spectral content of the data, mainly acting as low-pass filters to reduce the high-frequency noise. Data filtering is described by digital signal processing (DSP) theory as a convolution sum: each filtered signal output at a given range is a linear combination of several signal input samples (relative to different ranges from the lidar receiver), and this can be seen as a loss of range resolution in the output signal. Low-pass filtering always introduces distortions in the lidar profile shape. Thus, both the removal of high frequencies, i.e., the removal of details smaller than a certain spatial extent, and the spatial distortion produce a reduction of the range resolution. This paper discusses the determination of the effective resolution (ERes) of the vertical profiles of aerosol properties retrieved from lidar data. Particular attention is dedicated to assessing the impact of low-pass filtering on the effective range resolution in the retrieval procedure.
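
    The point about filtering as a convolution sum can be illustrated with a simple moving-average filter: each output sample mixes information from neighbouring range bins, so the effective resolution is coarser than the raw bin size. The raw bin size and filter length below are assumed values, and the filter-support figure is only a crude proxy for the ERes defined in the paper.

        # Low-pass filtering of a (synthetic) lidar profile as a convolution sum.
        import numpy as np

        raw_bin = 7.5                      # raw range resolution in metres (assumed)
        profile = np.random.default_rng(2).normal(1.0, 0.2, 1000)   # synthetic profile

        window = 11                        # filter length in bins (assumed)
        kernel = np.ones(window) / window  # simple moving-average low-pass filter
        smoothed = np.convolve(profile, kernel, mode="same")

        # A crude effective-resolution proxy: the spatial extent of the filter support.
        print(f"Raw resolution: {raw_bin:.1f} m, filter support: {window * raw_bin:.1f} m")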

  12. Incorporating Spatial Data into Enterprise Applications

    NASA Astrophysics Data System (ADS)

    Akiki, Pierre; Maalouf, Hoda

    The main goal of this chapter is to discuss the usage of spatial data within enterprise applications as well as smaller line-of-business applications. In particular, this chapter proposes new methodologies for storing and manipulating vague spatial data and provides methods for visualizing both crisp and vague spatial data. It also provides a comparison between different types of spatial data, mainly 2D crisp and vague spatial data, and their respective fields of application. Additionally, it compares existing commercial relational database management systems, which are the most widely used with enterprise applications, and discusses their deficiencies in terms of spatial data support. A new spatial extension package called Spatial Extensions (SPEX) is introduced in this chapter and is tested on a software prototype.

  13. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troia, Matthew J.; McManamay, Ryan A.

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.
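
    A minimal sketch of the gridding and coverage-comparison steps, assuming synthetic occurrence records and a placeholder environmental covariate, is given below; it is an illustration of the general approach, not the authors' code.

        # Bin occurrence records into 0.1-degree cells, then compare a covariate at
        # surveyed cells against the background with a two-sample KS test.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        lon = rng.uniform(-125, -67, 5000)          # synthetic occurrence longitudes
        lat = rng.uniform(25, 49, 5000)             # synthetic occurrence latitudes

        # Aggregate records by 0.1 x 0.1 degree cells.
        cells = set(zip(np.floor(lon / 0.1).astype(int), np.floor(lat / 0.1).astype(int)))
        print(f"{len(cells)} occupied grid cells")

        # Compare, e.g., elevation at surveyed cells vs. the full landscape.
        elev_surveyed = rng.normal(400, 150, len(cells))     # placeholder values
        elev_background = rng.normal(500, 200, 20000)
        ks_stat, p_value = stats.ks_2samp(elev_surveyed, elev_background)
        print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3g}")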

  14. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE PAGES

    Troia, Matthew J.; McManamay, Ryan A.

    2016-06-12

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.

  15. A global database of ant species abundances.

    PubMed

    Gibb, Heloise; Dunn, Rob R; Sanders, Nathan J; Grossman, Blair F; Photakis, Manoli; Abril, Silvia; Agosti, Donat; Andersen, Alan N; Angulo, Elena; Armbrecht, Inge; Arnan, Xavier; Baccaro, Fabricio B; Bishop, Tom R; Boulay, Raphaël; Brühl, Carsten; Castracani, Cristina; Cerda, Xim; Del Toro, Israel; Delsinne, Thibaut; Diaz, Mireia; Donoso, David A; Ellison, Aaron M; Enriquez, Martha L; Fayle, Tom M; Feener, Donald H; Fisher, Brian L; Fisher, Robert N; Fitzpatrick, Matthew C; Gómez, Crisanto; Gotelli, Nicholas J; Gove, Aaron; Grasso, Donato A; Groc, Sarah; Guenard, Benoit; Gunawardene, Nihara; Heterick, Brian; Hoffmann, Benjamin; Janda, Milan; Jenkins, Clinton; Kaspari, Michael; Klimes, Petr; Lach, Lori; Laeger, Thomas; Lattke, John; Leponce, Maurice; Lessard, Jean-Philippe; Longino, John; Lucky, Andrea; Luke, Sarah H; Majer, Jonathan; McGlynn, Terrence P; Menke, Sean; Mezger, Dirk; Mori, Alessandra; Moses, Jimmy; Munyai, Thinandavha Caswell; Pacheco, Renata; Paknia, Omid; Pearce-Duvet, Jessica; Pfeiffer, Martin; Philpott, Stacy M; Resasco, Julian; Retana, Javier; Silva, Rogerio R; Sorger, Magdalena D; Souza, Jorge; Suarez, Andrew; Tista, Melanie; Vasconcelos, Heraldo L; Vonshak, Merav; Weiser, Michael D; Yates, Michelle; Parr, Catherine L

    2017-03-01

    What forces structure ecological assemblages? A key limitation to general insights about assemblage structure is the availability of data that are collected at a small spatial grain (local assemblages) and a large spatial extent (global coverage). Here, we present published and unpublished data from 51,388 ant abundance and occurrence records of more than 2,693 species and 7,953 morphospecies from local assemblages collected at 4,212 locations around the world. Ants were selected because they are diverse and abundant globally, comprise a large fraction of animal biomass in most terrestrial communities, and are key contributors to a range of ecosystem functions. Data were collected between 1949 and 2014, and include, for each geo-referenced sampling site, both the identity of the ants collected and details of sampling design, habitat type, and degree of disturbance. The aim of compiling this data set was to provide comprehensive species abundance data in order to test relationships between assemblage structure and environmental and biogeographic factors. Data were collected using a variety of standardized methods, such as pitfall and Winkler traps, and will be valuable for studies investigating large-scale forces structuring local assemblages. Understanding such relationships is particularly critical under current rates of global change. We encourage authors holding additional data on systematically collected ant assemblages, especially those from dry, cold, and remote areas, to contact us and contribute their data to this growing data set. © 2016 by the Ecological Society of America.

  16. Extending GIS Technology to Study Karst Features of Southeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.

    2001-12-01

    This paper summarizes ongoing research on karst feature distribution in southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical and hydrogeologic modules. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All nearest-neighbor analyses to date indicate that sinkholes in southeastern Minnesota are not evenly distributed (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis and nearest-neighbor analysis. A series of karst features for Winona County, including sinkholes, springs, seeps, stream sinks and outcrops, has been mapped and entered into the Karst Feature Database of Southeastern Minnesota. The Karst Feature Database of Winona County is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to the 1990s of the Spring Valley Cavern Area in Fillmore County were scanned and geo-referenced into our GIS system. This approach has proved very useful for identifying sinkholes and studying the rate of sinkhole development.
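
    A nearest-neighbor test of the kind referred to here can be sketched as a Clark-Evans ratio; the coordinates below are synthetic, and a real analysis would use projected sinkhole locations and an edge-effect correction.

        # Clark-Evans nearest-neighbour ratio on synthetic point locations.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)
        pts = rng.uniform(0, 10_000, size=(500, 2))          # sinkhole x/y in metres (synthetic)

        tree = cKDTree(pts)
        d, _ = tree.query(pts, k=2)                          # k=1 is the point itself
        mean_nn = d[:, 1].mean()

        area = 10_000 ** 2
        density = len(pts) / area
        expected = 1.0 / (2.0 * np.sqrt(density))            # expectation under complete spatial randomness
        print(f"Clark-Evans ratio R = {mean_nn / expected:.2f}  (R < 1 suggests clustering)")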

  17. On the Planning and Design of Hospital Circulation Zones.

    PubMed

    Jiang, Shan; Verderber, Stephen

    2017-01-01

    This literature review explores current issues and research inconsistencies regarding the design of hospital circulation zones and the associated health-related outcomes. Large general hospitals are immense, highly sophisticated institutions. Empirical studies have indicated that excessively institutional environments in large medical centers cause negative effects on occupants, including stress, anxiety, wayfinding difficulties and spatial disorientation, lack of cognitive control, and stress associated with inadequate access to nature. The rise of patient-centered and evidence-based movements in healthcare planning and design has resulted in a general rise in the quality of hospital physical environments. However, as a core component of any healthcare delivery system, hospital circulation zones have tended to remain neglected within the comparatively broad palette of research conducted and reported to date. A systematic literature review was conducted based upon combinations of key words developed vis-à-vis a literature search in 11 major databases in the realm of the health sciences and the planning and design of built environments for healthcare. Eleven peer-reviewed articles were included in the analysis. Six research themes were identified according to associated health-related outcomes, including wayfinding difficulties and spatial disorientation, communication and socialization patterns, measures and control of excessive noise, patient fall incidents, and occupants' stress and satisfaction levels. Several knowledge gaps as well as commonalities in the pertinent research literature were identified. Perhaps the overriding finding is that occupants' meaningful exposure to views of nature from within hospital circulation zones can potentially enhance wayfinding and spatial navigation. Future research priorities on this subject are discussed.

  18. Long-term citizen-collected data reveal geographical patterns and temporal trends in lake water clarity

    USGS Publications Warehouse

    Lottig, Noah R.; Wagner, Tyler; Henry, Emily N.; Cheruvelil, Kendra Spence; Webster, Katherine E.; Downing, John A.; Stow, Craig A.

    2014-01-01

    We compiled a lake-water clarity database using publicly available, citizen volunteer observations made between 1938 and 2012 across eight states in the Upper Midwest, USA. Our objectives were to determine (1) whether temporal trends in lake-water clarity existed across this large geographic area and (2) whether trends were related to the lake-specific characteristics of latitude, lake size, or the time period over which the lake was monitored. Our database consisted of >140,000 individual Secchi observations from 3,251 lakes that we summarized per lake-year, resulting in 21,020 summer averages. Using Bayesian hierarchical modeling, we found approximately a 1% per year increase in water clarity (quantified as Secchi depth) for the entire population of lakes. On an individual lake basis, 7% of lakes showed increased water clarity and 4% showed decreased clarity. Trend direction and strength were related to latitude and median sample date. Lakes in the southern part of our study region had lower average annual summer water clarity, more negative long-term trends, and greater inter-annual variability in water clarity compared to northern lakes. Increasing trends were strongest for lakes with median sample dates earlier in the period of record (1938–2012). Our ability to identify specific mechanisms for these trends is currently hampered by the lack of a large, multi-thematic database of variables that drive water clarity (e.g., climate, land use/cover). Our results demonstrate, however, that citizen science can provide the critical monitoring data needed to address environmental questions at large spatial and long temporal scales. Collaborations among citizens, research scientists, and government agencies may be important for developing the data sources and analytical tools necessary to move toward an understanding of the factors influencing macro-scale patterns such as those shown here for lake water clarity.

  19. Statistical Downscaling in Multi-dimensional Wave Climate Forecast

    NASA Astrophysics Data System (ADS)

    Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.

    2009-04-01

    Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize the multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multiple dimensions of wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict the monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictands) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is used as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach under perfect-model conditions, but we also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projections of wave climate.
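
    A hedged sketch of the analog step is given below: the nearest historical SLP fields to a target field are found and their associated monthly wave-climate PDFs averaged. All arrays are random placeholders standing in for the reanalysis predictors and SOM-derived predictands.

        # Nearest-neighbor analog downscaling sketch with placeholder data.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(5)
        n_months, n_grid, n_wave_bins = 600, 400, 25
        slp_fields = rng.normal(size=(n_months, n_grid))           # flattened SLP predictors
        wave_pdfs = rng.dirichlet(np.ones(n_wave_bins), n_months)  # monthly SOM PDFs (predictands)

        nn = NearestNeighbors(n_neighbors=10).fit(slp_fields)
        target = rng.normal(size=(1, n_grid))                      # SLP field to downscale
        _, idx = nn.kneighbors(target)

        # Predicted monthly wave-climate PDF: average of the analog months' PDFs.
        predicted_pdf = wave_pdfs[idx[0]].mean(axis=0)
        print(predicted_pdf.sum())                                 # ~1.0 by construction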

  20. Long-Term Citizen-Collected Data Reveal Geographical Patterns and Temporal Trends in Lake Water Clarity

    PubMed Central

    Lottig, Noah R.; Wagner, Tyler; Norton Henry, Emily; Spence Cheruvelil, Kendra; Webster, Katherine E.; Downing, John A.; Stow, Craig A.

    2014-01-01

    We compiled a lake-water clarity database using publicly available, citizen volunteer observations made between 1938 and 2012 across eight states in the Upper Midwest, USA. Our objectives were to determine (1) whether temporal trends in lake-water clarity existed across this large geographic area and (2) whether trends were related to the lake-specific characteristics of latitude, lake size, or the time period over which the lake was monitored. Our database consisted of >140,000 individual Secchi observations from 3,251 lakes that we summarized per lake-year, resulting in 21,020 summer averages. Using Bayesian hierarchical modeling, we found approximately a 1% per year increase in water clarity (quantified as Secchi depth) for the entire population of lakes. On an individual lake basis, 7% of lakes showed increased water clarity and 4% showed decreased clarity. Trend direction and strength were related to latitude and median sample date. Lakes in the southern part of our study region had lower average annual summer water clarity, more negative long-term trends, and greater inter-annual variability in water clarity compared to northern lakes. Increasing trends were strongest for lakes with median sample dates earlier in the period of record (1938–2012). Our ability to identify specific mechanisms for these trends is currently hampered by the lack of a large, multi-thematic database of variables that drive water clarity (e.g., climate, land use/cover). Our results demonstrate, however, that citizen science can provide the critical monitoring data needed to address environmental questions at large spatial and long temporal scales. Collaborations among citizens, research scientists, and government agencies may be important for developing the data sources and analytical tools necessary to move toward an understanding of the factors influencing macro-scale patterns such as those shown here for lake water clarity. PMID:24788722

  1. Planting the SEED: Towards a Spatial Economic Ecological Database for a shared understanding of the Dutch Wadden area

    NASA Astrophysics Data System (ADS)

    Daams, Michiel N.; Sijtsma, Frans J.

    2013-09-01

    In this paper we address the characteristics of a publicly accessible Spatial Economic Ecological Database (SEED) and its ability to support a shared understanding among planners and experts of the economy and ecology of the Dutch Wadden area. Theoretical building blocks for a Wadden SEED are discussed. Our SEED contains a comprehensive set of stakeholder-validated, spatially explicit data on key economic and ecological indicators. These data extend over various spatial scales. Spatial issues relevant to the specification of a Wadden SEED and its data needs are explored in this paper and illustrated using empirical data for the Dutch Wadden area. The purpose of the SEED is to integrate basic economic and ecological information in order to support the resolution of specific (policy) questions and to facilitate connections between the project level and the strategic level in the spatial planning process. We will argue that, although modest in its ambitions, a Wadden SEED can serve as a valuable element in the much-debated science-policy interface. A Wadden SEED is valuable because it is a consensus-based common knowledge base on the economy and ecology of an area rife with ecological-economic conflict, one in which scientific information is often challenged and disputed.

  2. Software reuse example and challenges at NSIDC

    NASA Astrophysics Data System (ADS)

    Billingsley, B. W.; Brodzik, M.; Collins, J. A.

    2009-12-01

    NSIDC has created a new data discovery and access system, Searchlight, to provide users with the data they want in the format they want. NSIDC Searchlight supports discovery of and access to disparate data types with on-the-fly reprojection, regridding and reformatting. Architected both to reuse open-source systems and to be reused itself, Searchlight reuses GDAL and Proj4 for manipulating data and converting formats, the netCDF Java library for creating netCDF output, MapServer and OpenLayers for defining spatial criteria, and the JTS Topology Suite (JTS) in conjunction with Hibernate Spatial for database interaction and rich OGC-compliant spatial objects. The application also reuses popular Java and JavaScript libraries, including Struts 2, Spring, JPA (Hibernate), Sitemesh, JFreeChart, jQuery and Dojo, together with a PostGIS/PostgreSQL database. Future reuse of Searchlight components is supported at varying architectural levels, ranging from the database and model components to web services. We present the tools, libraries and programs that Searchlight has reused. We describe the architecture of Searchlight, explain the strategies deployed for reusing existing software, and show how Searchlight is built for reuse. We will discuss NSIDC's reuse of the Searchlight components to support rapid development of new data delivery systems.
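
    As a small illustration of the reprojection work Searchlight delegates to Proj4/GDAL, the snippet below transforms a point from geographic coordinates to an NSIDC polar stereographic grid using pyproj; it is not Searchlight's own code.

        # Reproject a lon/lat point to the NSIDC Sea Ice Polar Stereographic North grid.
        from pyproj import Transformer

        # EPSG:4326 = WGS84 lon/lat, EPSG:3413 = NSIDC Sea Ice Polar Stereographic North
        transformer = Transformer.from_crs("EPSG:4326", "EPSG:3413", always_xy=True)
        x, y = transformer.transform(-45.0, 75.0)   # lon, lat
        print(f"x = {x:.0f} m, y = {y:.0f} m")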

  3. Designing a data portal for synthesis modeling

    NASA Astrophysics Data System (ADS)

    Holmes, M. A.

    2006-12-01

    Processing of field and model data in multi-disciplinary integrated science studies is a vital part of synthesis modeling. Collection and storage techniques for field data vary greatly between the participating scientific disciplines due to the nature of the data being collected, whether in situ, remotely sensed, or recorded by automated data-logging equipment. Spreadsheets, personal databases, text files and binary files are used in the initial storage and processing of the raw data. In order to be useful to scientists, engineers and modelers, the data need to be stored in a format that is easily identifiable, accessible and transparent to a variety of computing environments. The Model Operations and Synthesis (MOAS) database and associated web portal were created to provide such capabilities. The industry-standard relational database comprises spatial and temporal data tables, shape files and supporting metadata accessible over the network, through a menu-driven web-based portal or spatially through ArcSDE connections from the user's local GIS desktop software. A separate server provides public access to spatial data and model output in the form of attributed shape files through an ArcIMS web-based graphical user interface.

  4. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. As a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  5. BIOFRAG – a new database for analyzing BIOdiversity responses to forest FRAGmentation

    PubMed Central

    Pfeifer, Marion; Lefebvre, Veronique; Gardner, Toby A; Arroyo-Rodriguez, Victor; Baeten, Lander; Banks-Leite, Cristina; Barlow, Jos; Betts, Matthew G; Brunet, Joerg; Cerezo, Alexis; Cisneros, Laura M; Collard, Stuart; D'Cruze, Neil; da Silva Motta, Catarina; Duguay, Stephanie; Eggermont, Hilde; Eigenbrod, Felix; Hadley, Adam S; Hanson, Thor R; Hawes, Joseph E; Heartsill Scalley, Tamara; Klingbeil, Brian T; Kolb, Annette; Kormann, Urs; Kumar, Sunil; Lachat, Thibault; Lakeman Fraser, Poppy; Lantschner, Victoria; Laurance, William F; Leal, Inara R; Lens, Luc; Marsh, Charles J; Medina-Rangel, Guido F; Melles, Stephanie; Mezger, Dirk; Oldekop, Johan A; Overal, William L; Owen, Charlotte; Peres, Carlos A; Phalan, Ben; Pidgeon, Anna M; Pilia, Oriana; Possingham, Hugh P; Possingham, Max L; Raheem, Dinarzarde C; Ribeiro, Danilo B; Ribeiro Neto, Jose D; Douglas Robinson, W; Robinson, Richard; Rytwinski, Trina; Scherber, Christoph; Slade, Eleanor M; Somarriba, Eduardo; Stouffer, Philip C; Struebig, Matthew J; Tylianakis, Jason M; Tscharntke, Teja; Tyre, Andrew J; Urbina Cardona, Jose N; Vasconcelos, Heraldo L; Wearn, Oliver; Wells, Konstans; Willig, Michael R; Wood, Eric; Young, Richard P; Bradley, Andrew V; Ewers, Robert M

    2014-01-01

    Habitat fragmentation studies have produced complex results that are challenging to synthesize. Inconsistencies among studies may result from variation in the choice of landscape metrics and response variables, which is often compounded by a lack of key statistical or methodological information. Collating primary datasets on biodiversity responses to fragmentation in a consistent and flexible database permits simple data retrieval for subsequent analyses. We present a relational database that links such field data to taxonomic nomenclature, spatial and temporal plot attributes, and environmental characteristics. Field assessments include measurements of the response(s) (e.g., presence, abundance, ground cover) of one or more species linked to plots in fragments within a partially forested landscape. The database currently holds 9830 unique species recorded in plots of 58 unique landscapes in six of eight realms: mammals 315, birds 1286, herptiles 460, insects 4521, spiders 204, other arthropods 85, gastropods 70, annelids 8, platyhelminthes 4, Onychophora 2, vascular plants 2112, nonvascular plants and lichens 320, and fungi 449. Three landscapes were sampled as long-term time series (>10 years). Seven hundred and eleven species are found in two or more landscapes. Consolidating the substantial amount of primary data available on biodiversity responses to fragmentation in the context of land-use change and natural disturbances is an essential part of understanding the effects of increasing anthropogenic pressures on land. The consistent format of this database facilitates testing of generalizations concerning biologic responses to fragmentation across diverse systems and taxa. It also allows the re-examination of existing datasets with alternative landscape metrics and robust statistical methods, for example, helping to address pseudo-replication problems. The database can thus help researchers in producing broad syntheses of the effects of land use. The database is dynamic and inclusive, and contributions from individual and large-scale data-collection efforts are welcome. PMID:24967073

  6. SPATIAL FOREST SOIL PROPERTIES FOR ECOLOGICAL MODELING IN THE WESTERN OREGON CASCADES

    EPA Science Inventory

    The ultimate objective of this work is to provide a spatially distributed database of soil properties to serve as inputs to model ecological processes in western forests at the landscape scale. The Central Western Oregon Cascades are rich in biodiversity and they are a fascinati...

  7. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues in using databases for research are the completeness of capture of cases within the population and time period of interest, and the accuracy of the diagnosis of diabetes and of the outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in the distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  8. A new Volcanic managEment Risk Database desIgn (VERDI): Application to El Hierro Island (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Becerril, L.; Martí, J.

    2014-11-01

    One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability, afford easy data storage and sharing, and provide for a more complete risk assessment that combines different analyses while avoiding any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. The central aim is to ensure that VERDI serves as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers in regard to data organization are shown here through its application to El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool to assist in conducting volcanic risk assessment and management.

  9. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searches over a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction afforded by a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and thereby reduce the processing time of nearest-neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion parameters, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Organizing the database with a KD tree reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
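
    The speed-up mechanism can be sketched with scipy's KD tree: build the tree once over the stored feature vectors, then answer each nearest-neighbor query without an exhaustive scan. The feature vectors below are random placeholders for the filter-bank characteristics used by the Gutenberg Algorithm.

        # KD tree vs. exhaustive nearest-neighbor search over a large feature database.
        import time
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(6)
        database = rng.normal(size=(200_000, 12))   # stored waveform feature vectors (placeholder)
        query = rng.normal(size=(12,))              # incoming observation (placeholder)

        t0 = time.perf_counter()
        brute_idx = np.argmin(np.linalg.norm(database - query, axis=1))
        t_brute = time.perf_counter() - t0

        tree = cKDTree(database)                    # built once, reused for every query
        t0 = time.perf_counter()
        _, tree_idx = tree.query(query, k=1)
        t_tree = time.perf_counter() - t0

        assert brute_idx == tree_idx                # same nearest neighbor, found faster
        print(f"brute force: {t_brute * 1e3:.1f} ms, KD tree query: {t_tree * 1e3:.1f} ms")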

  10. Toward an open-access global database for mapping, control, and surveillance of neglected tropical diseases.

    PubMed

    Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina; Stensgaard, Anna-Sofie; Laserna de Himpsl, Maiti; Ziegelbauer, Kathrin; Laizer, Nassor; Camenzind, Lukas; Di Pasquale, Aurelio; Ekpo, Uwem F; Simoonga, Christopher; Mushinge, Gabriel; Saarnak, Christopher F L; Utzinger, Jürg; Kristensen, Thomas K; Vounatsou, Penelope

    2011-12-01

    After many years of general neglect, interest has grown and efforts are under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are a key feature for targeting control interventions and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open access to the available survey data, one that is constantly updated and can be utilized by researchers, disease control managers and other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and 'grey literature') and contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques). The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries, available at http://www.gntd.org. The database is currently being expanded into a global repository including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting of control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time. With an initial focus on schistosomiasis in Africa, we demonstrate proof-of-concept that establishing and running a global NTD database is feasible and should be expanded without delay.
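
    The record above describes a georeferenced MySQL survey database; the sketch below illustrates the same kind of structure using Python's built-in sqlite3 module as a stand-in. The table and column names are hypothetical, not the actual GNTD schema.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE survey (
                survey_id   INTEGER PRIMARY KEY,
                country     TEXT,
                latitude    REAL,
                longitude   REAL,
                survey_year INTEGER,
                diagnostic  TEXT,
                n_examined  INTEGER,
                n_positive  INTEGER
            )""")
        con.execute("INSERT INTO survey VALUES (1, 'Kenya', -0.42, 36.95, 2008, 'Kato-Katz', 250, 61)")

        # Prevalence by country for surveys carried out from 2000 onwards
        for row in con.execute("""
                SELECT country, 1.0 * SUM(n_positive) / SUM(n_examined) AS prevalence
                FROM survey WHERE survey_year >= 2000 GROUP BY country"""):
            print(row)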

  11. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measure, developed in our team, to graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measure and the index structure are scalable to large chemical databases, with smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since running-time efficiency must be balanced against similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash for chemical databases.
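
    The following sketch is not the authors' G-hash kernel; it only illustrates, under simplified assumptions, the general pattern of hash-based similarity search described above: an inverted index (hash table) over discretized structural features narrows the candidate set before an exact similarity computation is applied.

        from collections import defaultdict

        def features(compound):
            """Placeholder feature extractor: a set of hashable structural tokens."""
            return set(compound)                  # characters stand in for graph fragments

        db = {"mol1": "CCO", "mol2": "CCN", "mol3": "c1ccccc1"}
        index = defaultdict(set)                  # feature -> compound ids (the hash table)
        db_feats = {cid: features(s) for cid, s in db.items()}
        for cid, feats in db_feats.items():
            for f in feats:
                index[f].add(cid)

        def knn(query, k=2):
            qf = features(query)
            candidates = set().union(*(index[f] for f in qf if f in index))
            # exact (Tanimoto-style) similarity computed only on the candidate set
            sims = {c: len(qf & db_feats[c]) / len(qf | db_feats[c]) for c in candidates}
            return sorted(sims.items(), key=lambda kv: -kv[1])[:k]

        print(knn("CCOC"))                        # mol1 ranks first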

  12. Spatial aspects of building and population exposure data and their implications for global earthquake exposure modeling

    USGS Publications Warehouse

    Dell’Acqua, F.; Gamba, P.; Jaiswal, K.

    2012-01-01

    This paper discusses spatial aspects of the global exposure dataset and mapping needs for earthquake risk assessment. We discuss this in the context of development of a Global Exposure Database for the Global Earthquake Model (GED4GEM), which requires compilation of a multi-scale inventory of assets at risk, for example, buildings, populations, and economic exposure. After defining the relevant spatial and geographic scales of interest, different procedures are proposed to disaggregate coarse-resolution data, to map them, and if necessary to infer missing data by using proxies. We discuss the advantages and limitations of these methodologies and detail the potentials of utilizing remote-sensing data. The latter is used especially to homogenize an existing coarser dataset and, where possible, replace it with detailed information extracted from remote sensing using the built-up indicators for different environments. Present research shows that the spatial aspects of earthquake risk computation are tightly connected with the availability of datasets of the resolution necessary for producing sufficiently detailed exposure. The global exposure database designed by the GED4GEM project is able to manage datasets and queries of multiple spatial scales.
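
    A minimal sketch of the proxy-based disaggregation step discussed above: a coarse-resolution population count is redistributed over finer cells in proportion to a remotely sensed built-up fraction. All values are illustrative assumptions.

        import numpy as np

        coarse_population = 12_000.0              # total count in one coarse cell
        built_up = np.array([[0.8, 0.1],
                             [0.4, 0.0]])         # built-up fraction of its 2x2 sub-cells

        # Spread the coarse total in proportion to the built-up proxy (uniform fallback).
        weights = built_up / built_up.sum() if built_up.sum() > 0 else np.full_like(built_up, 0.25)
        fine_population = coarse_population * weights

        print(fine_population)                    # sub-cell totals
        print(fine_population.sum())              # still sums to 12000.0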

  13. EMAGE mouse embryo spatial gene expression database: 2010 update

    PubMed Central

    Richardson, Lorna; Venkataraman, Shanmugasundaram; Stevenson, Peter; Yang, Yiya; Burton, Nicholas; Rao, Jianguo; Fisher, Malcolm; Baldock, Richard A.; Davidson, Duncan R.; Christiansen, Jeffrey H.

    2010-01-01

    EMAGE (http://www.emouseatlas.org/emage) is a freely available online database of in situ gene expression patterns in the developing mouse embryo. Gene expression domains from raw images are extracted and integrated spatially into a set of standard 3D virtual mouse embryos at different stages of development, which allows data interrogation by spatial methods. An anatomy ontology is also used to describe sites of expression, which allows data to be queried using text-based methods. Here, we describe recent enhancements to EMAGE including: the release of a completely re-designed website, which offers integration of many different search functions in HTML web pages, improved user feedback and the ability to find similar expression patterns at the click of a button; back-end refactoring from an object oriented to relational architecture, allowing associated SQL access; and the provision of further access by standard formatted URLs and a Java API. We have also increased data coverage by sourcing from a greater selection of journals and developed automated methods for spatial data annotation that are being applied to spatially incorporate the genome-wide (∼19 000 gene) ‘EURExpress’ dataset into EMAGE. PMID:19767607

  14. High-resolution inventory of technologies, activities, and emissions of coal-fired power plants in China from 1990 to 2010

    NASA Astrophysics Data System (ADS)

    Liu, F.; Zhang, Q.; Tong, D.; Zheng, B.; Li, M.; Huo, H.; He, K. B.

    2015-12-01

    This paper, which focuses on emissions from China's coal-fired power plants during 1990-2010, is the second in a series of papers that aims to develop a high-resolution emission inventory for China. This is the first time that emissions from China's coal-fired power plants have been estimated at unit level for a 20-year period. This inventory is constructed from a unit-based database compiled in this study, named the China coal-fired Power plant Emissions Database (CPED), which includes detailed information on the technologies, activity data, operation situation, emission factors, and locations of individual units, and is supplemented with aggregated data where unit-based information is not available. Between 1990 and 2010, compared to a 479 % growth in coal consumption, emissions from China's coal-fired power plants increased by 56, 335, and 442 % for SO2, NOx, and CO2, respectively, and decreased by 23 and 27 % for PM2.5 and PM10, respectively. Driven by accelerated economic growth, large power plants were constructed throughout the country after 2000, resulting in a dramatic growth in emissions. The growth trend of emissions has been effectively curbed since 2005 due to strengthened emission control measures, including the installation of flue gas desulfurization (FGD) systems and the optimization of the generation fleet mix by promoting large units and decommissioning small ones. Compared to previous emission inventories, CPED significantly improved the spatial resolution and temporal profile of the power plant emission inventory in China by extensive use of underlying data at unit level. The new inventory developed in this study will enable a close examination of temporal and spatial variations of power plant emissions in China and will help to improve the performance of chemical transport models by providing more accurate emission data.

  15. What if we took a global look?

    NASA Astrophysics Data System (ADS)

    Ouellet Dallaire, C.; Lehner, B.

    2014-12-01

    Freshwater resources are facing unprecedented pressures. In the hope of coping with these pressures, Environmental Hydrology, Freshwater Biology, and Fluvial Geomorphology have defined conceptual approaches such as "environmental flow requirements", "instream flow requirements" or the "normative flow regime" to define flow regimes appropriate for maintaining a given ecological status. These advances in freshwater resources management are asking scientists to build bridges across disciplines. Holistic and multi-scale approaches are becoming more and more common in water sciences research. The intrinsic nature of river systems demands that these approaches account for the upstream-downstream linkage of watersheds. Before recent technological developments, large-scale analyses were cumbersome and the necessary data were often unavailable. However, new technologies, both for information collection and computing capacity, enable a high-resolution look at the global scale. For rivers around the world, this new outlook is facilitated by the hydrologically relevant geo-spatial database HydroSHEDS. This database now offers more than 24 million kilometers of rivers, some never mapped before, at one's fingertips. Large- and even global-scale assessments can now be used to compare rivers around the world. A river classification framework called GloRiC (Global River Classification) was developed using HydroSHEDS. This framework advocates a holistic approach to river systems by using sub-classifications drawn from six disciplines related to river sciences: hydrology, physiography and climate, geomorphology, chemistry, biology and human impact. Each of these disciplines brings complementary information on rivers that is relevant at different scales. A first version of a global river reach classification was produced at 500 m resolution. Variables used in the classification influence processes at different scales (e.g., topography index vs. pH). However, all variables are computed at the same high spatial resolution. This way, we can take a global look at local phenomena.

  16. POLARIS: A 30-meter probabilistic soil series map of the contiguous United States

    USGS Publications Warehouse

    Chaney, Nathaniel W; Wood, Eric F; McBratney, Alexander B; Hempel, Jonathan W; Nauman, Travis; Brungard, Colby W.; Odgers, Nathan P

    2016-01-01

    A new complete map of soil series probabilities has been produced for the contiguous United States at a 30 m spatial resolution. This innovative database, named POLARIS, is constructed using available high-resolution geospatial environmental data and a state-of-the-art machine learning algorithm (DSMART-HPC) to remap the Soil Survey Geographic (SSURGO) database. This 9 billion grid cell database is possible using available high performance computing resources. POLARIS provides a spatially continuous, internally consistent, quantitative prediction of soil series. It offers potential solutions to the primary weaknesses in SSURGO: 1) unmapped areas are gap-filled using survey data from the surrounding regions, 2) the artificial discontinuities at political boundaries are removed, and 3) the use of high resolution environmental covariate data leads to a spatial disaggregation of the coarse polygons. The geospatial environmental covariates that have the largest role in assembling POLARIS over the contiguous United States (CONUS) are fine-scale (30 m) elevation data and coarse-scale (~ 2 km) estimates of the geographic distribution of uranium, thorium, and potassium. A preliminary validation of POLARIS using the NRCS National Soil Information System (NASIS) database shows variable performance over CONUS. In general, the best performance is obtained at grid cells where DSMART-HPC is most able to reduce the chance of misclassification. The important role of environmental covariates in limiting prediction uncertainty suggests including additional covariates is pivotal to improving POLARIS' accuracy. This database has the potential to improve the modeling of biogeochemical, water, and energy cycles in environmental models; enhance availability of data for precision agriculture; and assist hydrologic monitoring and forecasting to ensure food and water security.

  17. Indexing and retrieving point and region objects

    NASA Astrophysics Data System (ADS)

    Ibrahim, Azzam T.; Fotouhi, Farshad A.

    1996-03-01

    The R-tree and its variants are examples of spatial data structures for paged secondary memory. To process a query, these structures require multiple path traversals. In this paper, we present a new image access method, the SB+-tree, which requires a single path traversal to process a query. The SB+-tree also gives commercial databases an access method for spatial objects without major changes, since most commercial databases already support the B+-tree as an access method for text data. The SB+-tree can be used for zero- and non-zero-size data objects. Non-zero-size objects are approximated by their minimum bounding rectangles (MBRs). The number of SB+-trees generated depends upon the number of dimensions of the approximation of the object. The structure supports efficient spatial operations such as region overlap, distance and direction. In this paper, we experimentally and analytically demonstrate the superiority of the SB+-tree over the R-tree.
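
    The SB+-tree construction itself is detailed in the paper; the sketch below only illustrates a related, commonly used idea for reusing an ordered B+-tree-style index for spatial data: 2-D coordinates are linearized into 1-D Morton (Z-order) keys, and candidate hits are refined with an exact MBR-overlap test. The key function and the corner-based indexing are simplifying assumptions, not the SB+-tree algorithm.

        def morton_key(x, y, bits=16):
            """Interleave the bits of non-negative integer grid coordinates x and y."""
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (2 * i)
                key |= ((y >> i) & 1) << (2 * i + 1)
            return key

        def mbr_overlap(a, b):
            """Exact overlap test for MBRs given as (xmin, ymin, xmax, ymax)."""
            return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

        # Order MBRs by the Morton key of their lower-left corner (a simplification).
        objects = {(2, 3, 5, 6): "A", (10, 10, 12, 14): "B"}
        ordered = sorted(objects, key=lambda m: morton_key(m[0], m[1]))

        query = (4, 4, 11, 11)
        print([objects[m] for m in ordered if mbr_overlap(m, query)])   # ['A', 'B']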

  18. UTILIZATION OF GEOGRAPHIC INFORMATION SYSTEMS TECHNOLOGY IN THE ASSESSMENT OF REGIONAL GROUND-WATER QUALITY.

    USGS Publications Warehouse

    Nebert, Douglas; Anderson, Dean

    1987-01-01

    The U. S. Geological Survey (USGS) in cooperation with the U. S. Environmental Protection Agency Office of Pesticide Programs and several State agencies in Oregon has prepared a digital spatial database at 1:500,000 scale to be used as a basis for evaluating the potential for ground-water contamination by pesticides and other agricultural chemicals. Geographic information system (GIS) software was used to assemble, analyze, and manage spatial and tabular environmental data in support of this project. Physical processes were interpreted relative to published spatial data and an integrated database to support the appraisal of regional ground-water contamination was constructed. Ground-water sampling results were reviewed relative to the environmental factors present in several agricultural areas to develop an empirical knowledge base which could be used to assist in the selection of future sampling or study areas.

  19. Spatial and Temporal Mapping of the Evolution of the Miami-Fort Lauderdale-West Palm Beach Metropolitan Statistical Area (MSA)

    NASA Astrophysics Data System (ADS)

    Rochelo, Mark

    Urbanization is a fundamental reality in developed and developing countries around the world, creating large concentrations of population centered on cities and urban areas. Cities can offer many opportunities for those residing there, including infrastructure, health services, rescue services and more. The density of urban living space allows for more effective and environmentally friendly housing, transportation and resource use. Cities play a vital role in generating economic production, both as entities by themselves and as parts of larger urban complexes. These benefits can serve an extraordinary number of people, but only if proper planning and consideration are undertaken. Global urbanization is a progressive evolution, unique in spatial location while consistent with an overall growth pattern and trend. Remotely sensing these patterns from the last forty years of spaceborne satellites is important for understanding past growth as well as planning for the future. Imagery from the Landsat program, whose first satellite was launched in 1972, provides the temporal component and the spatial resolution needed to cover a large metropolitan statistical area and to monitor urban growth and change on a large scale. This research maps the urban spatial and population growth over the Miami - Fort Lauderdale - West Palm Beach Metropolitan Statistical Area (MSA), covering Miami-Dade, Broward, and Palm Beach counties in Southeast Florida, from 1974 to 2010 using Landsat imagery. Supervised maximum likelihood classification was performed with a combination of spectral and textural training fields in ERDAS IMAGINE 2014 to classify the images into urban and non-urban areas. Dasymetric mapping of the classification results, combined with census tract data, then created a coherent depiction of the Miami - Fort Lauderdale - West Palm Beach MSA. Static maps and animated files were created from the final datasets for enhanced visualization and understanding of the MSA's evolution from 60-meter resolution remotely sensed Landsat images. The simplified methodology will create a database for urban planning and population growth studies as well as future work in this area.

  20. Building a database for long-term monitoring of benthic macrofauna in the Pertuis-Charentais (2004-2014).

    PubMed

    Philippe, Anne S; Plumejeaud-Perreau, Christine; Jourde, Jérôme; Pineau, Philippe; Lachaussée, Nicolas; Joyeux, Emmanuel; Corre, Frédéric; Delaporte, Philippe; Bocher, Pierrick

    2017-01-01

    Long-term benthic monitoring is rewarding in terms of science, but labour-intensive, whether in the field, the laboratory, or behind the computer. Building and managing databases require multiple skills, including consistency over time as well as organisation via a systematic approach. Here, we introduce and share our spatially explicit benthic database, comprising 11 years of benthic data. It is the result of intensive benthic sampling conducted on a regular grid (259 stations) covering the intertidal mudflats of the Pertuis-Charentais (Marennes-Oléron Bay and Aiguillon Bay). Samples were taken on foot or by boat during winter, depending on tidal height, from December 2003 to February 2014. The present dataset includes abundances and biomass densities of all mollusc species of the study regions and the principal polychaetes, as well as their length, accessibility to shorebirds, energy content and shell mass when appropriate and available. This database has supported many studies dealing with the spatial distribution of benthic invertebrates and temporal variations in food resources for shorebird species, as well as latitudinal comparisons with other databases. In this paper, we introduce our benthos monitoring, share our data, and present a "guide of good practices" for building, cleaning and using the database efficiently, providing examples of results with associated R code. The dataset has been formatted into a geo-referenced relational database, using the PostgreSQL open-source DBMS. We provide density information, measurements, energy content and accessibility for thirteen bivalve, nine gastropod and two polychaete taxa (a total of 66,620 individuals) for 11 consecutive winters. Figures and maps are provided to describe how the dataset was built and cleaned, and how it can be used. This dataset can again support studies concerning spatial and temporal variations in species abundance, interspecific interactions, evaluations of the availability of food resources for small- and medium-sized shorebirds and, potentially, conservation and impact assessment studies.

  1. Using LiCSAR as a fast-response system for the detection and the monitoring of volcanic unrest

    NASA Astrophysics Data System (ADS)

    Albino, F.; Biggs, J.; Hatton, E. L.; Spaans, K.; Gaddes, M.; McDougall, A.

    2017-12-01

    Based on the Smithsonian Institution volcano database, a total of 13256 volcanoes exist on Earth, 1273 of which have evidence of eruptive or unrest activity during the Holocene. InSAR techniques have proven their ability to detect and quantify volcanic ground deformation on a case-by-case basis. However, using InSAR for the daily monitoring of every active volcano requires the development of automatic processing that can provide information within a couple of hours of a new radar acquisition. The LiCSAR system (http://comet.nerc.ac.uk/COMET-LiCS-portal/) meets this requirement by processing the vast amounts of data generated daily by the EU's Sentinel-1 satellite constellation. It now provides high-resolution deformation data for the entire Alpine-Himalayan seismic belt. The aim of our study is to extend the LiCSAR system to volcano monitoring. For each active volcano, the latest Sentinel products calculated (phase, coherence and amplitude) will be available online in the COMET Volcano Deformation Database. To analyse this large amount of InSAR products, we are developing an algorithm to automatically detect ground deformation signals as well as changes in coherence and amplitude in the time series. This toolbox could be a powerful fast-response system for helping volcanological observatories manage new or ongoing volcanic crises. Important information regarding the spatial and temporal evolution of each ground deformation signal will also be added to the COMET database. This will help to better understand the conditions under which volcanic unrest leads to an eruption. Such a worldwide survey enables us to establish a large catalogue of InSAR products, which will also be suitable for further studies (mapping of new lava flows, modelling of magmatic sources, evaluation of stress interactions).

  2. Parameterizing a Large-scale Water Balance Model in Regions with Sparse Data: The Tigris-Euphrates River Basins as an Example

    NASA Astrophysics Data System (ADS)

    Flint, A. L.; Flint, L. E.

    2010-12-01

    The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely heavily on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis for estimates of changes due to future climate projections. Unfortunately, many regions of the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used for the development of a regional water balance model at 180-m spatial scale, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in the countries of Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration include snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data) and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available at very large spatial scales (50 km) through world-scale databases and finer-scale WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world-scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was interpreted from geomorphologic interpretation and maps of Quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. Estimates of bedrock permeability were made on the basis of literature and data from drillers' logs and adjusted during calibration of the model to streamflow measurements where available. Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown along with details of processing input data to provide spatial continuity and downscaling. A basic water availability analysis for recharge and runoff is readily obtained from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.
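
    The Basin Characterization Model itself includes snow accumulation, PET and bedrock-permeability terms; the sketch below is only a highly simplified monthly bucket water balance, included to illustrate the kind of per-cell calculation such a model performs. All parameter values are assumptions.

        def monthly_balance(precip, pet, storage, capacity, runoff_coeff=0.1):
            """Return (runoff, recharge, actual_et, new_storage) for one month, in mm."""
            runoff = runoff_coeff * precip            # fast surface response
            water = storage + precip - runoff
            aet = min(pet, water)                     # evapotranspiration limited by supply
            water -= aet
            recharge = max(0.0, water - capacity)     # overflow past soil capacity
            return runoff, recharge, aet, min(water, capacity)

        storage = 50.0                                # assumed initial soil storage (mm)
        for precip, pet in [(120, 40), (10, 90), (0, 130)]:   # illustrative months
            runoff, recharge, aet, storage = monthly_balance(precip, pet, storage, capacity=150.0)
            print(f"runoff={runoff:.1f} recharge={recharge:.1f} aet={aet:.1f} storage={storage:.1f}")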

  3. A spatio-temporal index for aerial full waveform laser scanning data

    NASA Astrophysics Data System (ADS)

    Laefer, Debra F.; Vo, Anh-Vu; Bertolotto, Michela

    2018-04-01

    Aerial laser scanning is increasingly available in the full waveform version of the raw signal, which can provide greater insight into and control over the data and, thus, richer information about the scanned scenes. However, when compared to conventional discrete point storage, preserving raw waveforms leads to vastly larger and more complex data volumes. To begin addressing these challenges, this paper introduces a novel bi-level approach for storing and indexing full waveform (FWF) laser scanning data in a relational database environment, while considering both the spatial and the temporal dimensions of that data. In the storage scheme's upper level, the full waveform datasets are partitioned into spatial and temporal coherent groups that are indexed by a two-dimensional R∗-tree. To further accelerate intra-block data retrieval, at the lower level a three-dimensional local octree is created for each pulse block. The local octrees are implemented in-memory and can be efficiently written to a database for reuse. The indexing solution enables scalable and efficient three-dimensional (3D) spatial and spatio-temporal queries on the actual pulse data - functionalities not available in other systems. The proposed FWF laser scanning data solution is capable of managing multiple FWF datasets derived from large flight missions. The flight structure is embedded into the data storage model and can be used for querying predicates. Such functionality is important to FWF data exploration since aircraft locations and orientations are frequently required for FWF data analyses. Empirical tests on real datasets of up to 1 billion pulses from Dublin, Ireland prove the almost perfect scalability of the system. The use of the local 3D octree in the indexing structure accelerated pulse clipping by 1.2-3.5 times for non-axis-aligned (NAA) polyhedron shaped clipping windows, while axis-aligned (AA) polyhedron clipping was better served using only the top indexing layer. The distinct behaviours of the hybrid indexing for AA and NAA clipping windows are attributable to the different proportion of the local-index-related overheads with respect to the total querying costs. When temporal constraints were added, generally the number of costly spatial checks were reduced, thereby shortening the querying times.
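
    A toy illustration of the bi-level filtering idea described above, not the paper's implementation: coarse block bounding boxes stand in for the 2-D R*-tree level, and a per-block refinement pass stands in for the local octrees. Data and block layout are synthetic.

        import random

        random.seed(1)
        # Each block: a bounding box (xmin, ymin, tmin, xmax, ymax, tmax) plus its pulse records.
        blocks = []
        for bx in range(4):
            pulses = [(bx * 100 + 100 * random.random(), 100 * random.random(), 10 * random.random())
                      for _ in range(1000)]
            xs, ys, ts = zip(*pulses)
            blocks.append(((min(xs), min(ys), min(ts), max(xs), max(ys), max(ts)), pulses))

        def query(xmin, ymin, tmin, xmax, ymax, tmax):
            hits = []
            for (bx0, by0, bt0, bx1, by1, bt1), pulses in blocks:
                # coarse filter: skip blocks whose bounding box misses the query window
                if bx1 < xmin or bx0 > xmax or by1 < ymin or by0 > ymax or bt1 < tmin or bt0 > tmax:
                    continue
                # fine filter inside the block (the local octree in the real system)
                hits.extend(p for p in pulses
                            if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax and tmin <= p[2] <= tmax)
            return hits

        print(len(query(150, 20, 2, 260, 80, 8)))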

  4. Bayesian geostatistical modelling of soil-transmitted helminth survey data in the People's Republic of China.

    PubMed

    Lai, Ying-Si; Zhou, Xiao-Nong; Utzinger, Jürg; Vounatsou, Penelope

    2013-12-18

    Soil-transmitted helminth infections affect tens of millions of individuals in the People's Republic of China (P.R. China). There is a need for high-resolution estimates of at-risk areas and number of people infected to enhance spatial targeting of control interventions. However, such information is not yet available for P.R. China. A geo-referenced database compiling surveys pertaining to soil-transmitted helminthiasis, carried out from 2000 onwards in P.R. China, was established. Bayesian geostatistical models relating the observed survey data with potential climatic, environmental and socioeconomic predictors were developed and used to predict at-risk areas at high spatial resolution. Predictors were extracted from remote sensing and other readily accessible open-source databases. Advanced Bayesian variable selection methods were employed to develop a parsimonious model. Our results indicate that the prevalence of soil-transmitted helminth infections in P.R. China considerably decreased from 2005 onwards. Yet, some 144 million people were estimated to be infected in 2010. High prevalence (>20%) of the roundworm Ascaris lumbricoides infection was predicted for large areas of Guizhou province, the southern part of Hubei and Sichuan provinces, while the northern part and the south-eastern coastal-line areas of P.R. China had low prevalence (<5%). High infection prevalence (>20%) with hookworm was found in Hainan, the eastern part of Sichuan and the southern part of Yunnan provinces. High infection prevalence (>20%) with the whipworm Trichuris trichiura was found in a few small areas of south P.R. China. Very low prevalence (<0.1%) of hookworm and whipworm infections were predicted for the northern parts of P.R. China. We present the first model-based estimates for soil-transmitted helminth infections throughout P.R. China at high spatial resolution. Our prediction maps provide useful information for the spatial targeting of soil-transmitted helminthiasis control interventions and for long-term monitoring and surveillance in the frame of enhanced efforts to control and eliminate the public health burden of these parasitic worm infections.

  5. Bayesian geostatistical modelling of soil-transmitted helminth survey data in the People’s Republic of China

    PubMed Central

    2013-01-01

    Background Soil-transmitted helminth infections affect tens of millions of individuals in the People’s Republic of China (P.R. China). There is a need for high-resolution estimates of at-risk areas and number of people infected to enhance spatial targeting of control interventions. However, such information is not yet available for P.R. China. Methods A geo-referenced database compiling surveys pertaining to soil-transmitted helminthiasis, carried out from 2000 onwards in P.R. China, was established. Bayesian geostatistical models relating the observed survey data with potential climatic, environmental and socioeconomic predictors were developed and used to predict at-risk areas at high spatial resolution. Predictors were extracted from remote sensing and other readily accessible open-source databases. Advanced Bayesian variable selection methods were employed to develop a parsimonious model. Results Our results indicate that the prevalence of soil-transmitted helminth infections in P.R. China considerably decreased from 2005 onwards. Yet, some 144 million people were estimated to be infected in 2010. High prevalence (>20%) of the roundworm Ascaris lumbricoides infection was predicted for large areas of Guizhou province, the southern part of Hubei and Sichuan provinces, while the northern part and the south-eastern coastal-line areas of P.R. China had low prevalence (<5%). High infection prevalence (>20%) with hookworm was found in Hainan, the eastern part of Sichuan and the southern part of Yunnan provinces. High infection prevalence (>20%) with the whipworm Trichuris trichiura was found in a few small areas of south P.R. China. Very low prevalence (<0.1%) of hookworm and whipworm infections were predicted for the northern parts of P.R. China. Conclusions We present the first model-based estimates for soil-transmitted helminth infections throughout P.R. China at high spatial resolution. Our prediction maps provide useful information for the spatial targeting of soil-transmitted helminthiasis control interventions and for long-term monitoring and surveillance in the frame of enhanced efforts to control and eliminate the public health burden of these parasitic worm infections. PMID:24350825

  6. The LSST Data Mining Research Agenda

    NASA Astrophysics Data System (ADS)

    Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.

    2008-12-01

    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.

  7. Evaluating the far-field sound of a turbulent jet with one-way Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Pickering, Ethan; Rigas, Georgios; Towne, Aaron; Colonius, Tim

    2017-11-01

    The one-way Navier-Stokes (OWNS) method has shown promising ability to predict both near field coherent structures (i.e. wave packets) and far field acoustics of turbulent jets while remaining computationally efficient through implementation of a spatial marching scheme. Considering the speed and relative accuracy of OWNS, a predictive model for various jet configurations may be conceived and applied for noise control. However, there still remain discrepancies between OWNS and large eddy simulation (LES) databases which may be linked to the previous neglect of nonlinear forcing. Therefore, to better predict wave packets and far field acoustics, this study investigates the effect of nonlinear forcing terms derived from high-fidelity LES databases. The results of the nonlinear forcings are evaluated for several azimuthal modes and frequencies, as well as compared to LES derived acoustics using spectral proper orthogonal decomposition (SPOD). This research was supported by the Department of Defense (DoD) through the Office of Naval Research (Grant No. N00014-16-1-2445) and the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

  8. Face detection on distorted images using perceptual quality-aware features

    NASA Astrophysics Data System (ADS)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.

  9. PPDB - A tool for investigation of plants physiology based on gene ontology.

    PubMed

    Sharma, Ajay Shiv; Gupta, Hari Om; Prasad, Rajendra

    2014-09-02

    Representing the way forward, from functional genomics and its ontology to functional understanding and physiological models, in a computationally tractable fashion is one of the ongoing challenges faced by computational biology. To tackle this challenge, we herein feature the application of contemporary database management to the development of PPDB, a searching and browsing tool for the Plants Physiology Database that is based upon the mining of the large amount of gene ontology data currently available. The working principles and search options associated with the PPDB are publicly available and freely accessible on-line ( http://www.iitr.ernet.in/ajayshiv/ ) through a user friendly environment generated by means of Drupal-6.24. Given that genes are expressed in temporally and spatially characteristic patterns and that their functionally distinct products often reside in specific cellular compartments and may be part of one or more multi-component complexes, this work is intended to be relevant for investigating the functional relationships of gene products at a system level and, thus, helps us approach the full physiology.

  10. PPDB: A Tool for Investigation of Plants Physiology Based on Gene Ontology.

    PubMed

    Sharma, Ajay Shiv; Gupta, Hari Om; Prasad, Rajendra

    2015-09-01

    Representing the way forward, from functional genomics and its ontology to functional understanding and physiological models, in a computationally tractable fashion is one of the ongoing challenges faced by computational biology. To tackle this challenge, we herein feature the application of contemporary database management to the development of PPDB, a searching and browsing tool for the Plants Physiology Database that is based upon the mining of the large amount of gene ontology data currently available. The working principles and search options associated with the PPDB are publicly available and freely accessible online ( http://www.iitr.ac.in/ajayshiv/ ) through a user-friendly environment generated by means of Drupal-6.24. Given that genes are expressed in temporally and spatially characteristic patterns and that their functionally distinct products often reside in specific cellular compartments and may be part of one or more multicomponent complexes, this work is intended to be relevant for investigating the functional relationships of gene products at a system level and, thus, helps us approach the full physiology.

  11. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which are still unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn a lot of attention from not only astronomers but also experimental and theoretical chemists. Theoretical calculations for the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of these theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.

  12. Large Differences in Global and Regional Total Soil Carbon Stock Estimates Based on SoilGrids, HWSD, and NCSCD: Intercomparison and Evaluation Based on Field Data From USA, England, Wales, and France

    NASA Astrophysics Data System (ADS)

    Tifafi, Marwa; Guenet, Bertrand; Hatté, Christine

    2018-01-01

    Soils are the major component of the terrestrial ecosystem and the largest organic carbon reservoir on Earth. However, they are a nonrenewable natural resource, especially reactive to human disturbance and climate change. Despite its importance, soil carbon dynamics is an important source of uncertainty for future climate predictions, and there is a growing need for more precise information to better understand the mechanisms controlling soil carbon dynamics and better constrain Earth system models. The aim of our work is to compare the soil organic carbon stocks given by different existing global and regional databases. We calculated global and regional soil carbon stocks at 1 m depth given by three existing databases (SoilGrids, the Harmonized World Soil Database, and the Northern Circumpolar Soil Carbon Database). We observed that the total stocks predicted by each product differ greatly: around 3,400 Pg according to SoilGrids and about 2,500 Pg according to the Harmonized World Soil Database. This difference is particularly marked for boreal regions, where differences can be related to high disparities in soil organic carbon concentration. Differences in other regions are more limited and may be related to differences in bulk density estimates. Finally, evaluation of the three data sets against ground truth data shows that (i) there is a significant difference in spatial patterns between the ground truth data and the compared data sets and (ii) the data sets underestimate the soil organic carbon stock by more than 40% compared to field data.
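
    For context, the stock values compared above ultimately rest on a per-profile calculation of the form stock = SOC concentration x bulk density x layer thickness x (1 - coarse fragment fraction), summed over layers to 1 m depth. The sketch below applies that standard formula to illustrative layer values (not data from any of the three databases).

        layers = [
            # (thickness m, SOC g/kg, bulk density kg/m3, coarse fragment fraction)
            (0.3, 25.0, 1300.0, 0.05),
            (0.7, 10.0, 1450.0, 0.10),
        ]

        stock_kg_m2 = sum(
            thickness * (soc_g_kg / 1000.0) * bulk_density * (1.0 - coarse)
            for thickness, soc_g_kg, bulk_density, coarse in layers
        )
        print(f"SOC stock to 1 m: {stock_kg_m2:.1f} kg C per m^2")   # about 18.4 for these values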

  13. A BRDF-BPDF database for the analysis of Earth target reflectances

    NASA Astrophysics Data System (ADS)

    Breon, Francois-Marie; Maignan, Fabienne

    2017-01-01

    Land surface reflectance is not isotropic. It varies with the observation geometry that is defined by the sun, view zenith angles, and the relative azimuth. In addition, the reflectance is linearly polarized. The reflectance anisotropy is quantified by the bidirectional reflectance distribution function (BRDF), while its polarization properties are defined by the bidirectional polarization distribution function (BPDF). The POLDER radiometer that flew onboard the PARASOL microsatellite remains the only space instrument that measured numerous samples of the BRDF and BPDF of Earth targets. Here, we describe a database of representative BRDFs and BPDFs derived from the POLDER measurements. From the huge number of data acquired by the spaceborne instrument over a period of 7 years, we selected a set of targets with high-quality observations. The selection aimed for a large number of observations, free of significant cloud or aerosol contamination, acquired in diverse observation geometries with a focus on the backscatter direction that shows the specific hot spot signature. The targets are sorted according to the 16-class International Geosphere-Biosphere Programme (IGBP) land cover classification system, and the target selection aims at a spatial representativeness within the class. The database thus provides a set of high-quality BRDF and BPDF samples that can be used to assess the typical variability of natural surface reflectances or to evaluate models. It is available freely from the PANGAEA website (doi:10.1594/PANGAEA.864090). In addition to the database, we provide a visualization and analysis tool based on the Interactive Data Language (IDL). It allows an interactive analysis of the measurements and a comparison against various BRDF and BPDF analytical models. The present paper describes the input data, the selection principles, the database format, and the analysis tool

  14. Forecasting Safe or Dangerous Space Weather from HMI Magnetograms

    NASA Technical Reports Server (NTRS)

    Falconer, David; Barghouty, Abdulnasser F.; Khazanov, Igor; Moore, Ron

    2011-01-01

    We have developed a space-weather forecasting tool using an active-region free-energy proxy that was measured from MDI line-of-sight magnetograms. To develop this forecasting tool (Falconer et al 2011, Space Weather Journal, in press), we used a database of 40,000 MDI magnetograms of 1300 active regions observed by MDI during the previous solar cycle (cycle 23). From each magnetogram we measured our free-energy proxy, and for each active region we determined its history of major flare, CME and Solar Particle Event (SPE) production. This database determines, from the value of an active region's free-energy proxy, the active region's expected rate of production of 1) major flares, 2) CMEs, 3) fast CMEs, and 4) SPEs during the next few days. This tool was delivered to NASA/SRAG in 2010. With MDI observations ending, we have to be able to use HMI magnetograms instead of MDI magnetograms. One of the difficulties is that the measured value of the free-energy proxy is sensitive to the spatial resolution of the magnetogram: the 0.5 arcsec/pixel resolution of HMI gives a different value for the free-energy proxy than the 2 arcsec/pixel resolution of MDI. To use our MDI-database forecasting curves until a comparably large HMI database is accumulated, we smooth HMI line-of-sight magnetograms to MDI resolution, so that we can use HMI to find the value of the free-energy proxy that MDI would have measured, and then use the forecasting curves given by the MDI database. The new version for use with HMI magnetograms was delivered to NASA/SRAG in March 2011. It can also use GONG magnetograms as a backup.
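
    The resolution matching described above can be approximated by block-averaging the finer HMI pixels down to the MDI pixel scale (0.5 arcsec/pixel to 2 arcsec/pixel is a factor of 4). The sketch below shows such a 4x4 block average on a synthetic magnetogram; the delivered tool may use a different smoothing kernel.

        import numpy as np

        rng = np.random.default_rng(0)
        hmi = rng.normal(0.0, 100.0, size=(512, 512))     # synthetic line-of-sight field (G)

        factor = 4                                        # ratio of the two pixel scales
        coarse = hmi.reshape(hmi.shape[0] // factor, factor,
                             hmi.shape[1] // factor, factor).mean(axis=(1, 3))

        print(hmi.shape, "->", coarse.shape)              # (512, 512) -> (128, 128)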

  15. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.

  16. GIS-based identification of areas with mineral resource potential for six selected deposit groups, Bureau of Land Management Central Yukon Planning Area, Alaska

    USGS Publications Warehouse

    Jones, James V.; Karl, Susan M.; Labay, Keith A.; Shew, Nora B.; Granitto, Matthew; Hayes, Timothy S.; Mauk, Jeffrey L.; Schmidt, Jeanine M.; Todd, Erin; Wang, Bronwen; Werdon, Melanie B.; Yager, Douglas B.

    2015-01-01

    This study has used a data-driven, geographic information system (GIS)-based method for evaluating the mineral resource potential across the large region of the CYPA. This method systematically and simultaneously analyzes geoscience data from multiple geospatially referenced datasets and uses individual subwatersheds (12-digit hydrologic unit codes or HUCs) as the spatial unit of classification. The final map output indicates an estimated potential (high, medium, low) for a given mineral deposit group and indicates the certainty (high, medium, low) of that estimate for any given subwatershed (HUC). Accompanying tables describe the data layers used in each analysis, the values assigned for specific analysis parameters, and the relative weighting of each data layer that contributes to the estimated potential and certainty determinations. Core datasets used include the U.S. Geological Survey (USGS) Alaska Geochemical Database (AGDB2), the Alaska Division of Geologic and Geophysical Surveys Web-based geochemical database, data from an anticipated USGS geologic map of Alaska, and the USGS Alaska Resource Data File. Map plates accompanying this report illustrate the mineral prospectivity for the six deposit groups across the CYPA and estimates of mineral resource potential. There are numerous areas, some of them large, rated with high potential for one or more of the selected deposit groups within the CYPA.

  17. Nanocubes for real-time exploration of spatiotemporal datasets.

    PubMed

    Lins, Lauro; Klosowski, James T; Scheidegger, Carlos

    2013-12-01

    Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop's main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.
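
    The sketch below is not the nanocube data structure itself; it only illustrates, with a quadtree address and an hour-of-day bin, the underlying idea of pre-aggregating counts along spatial and temporal hierarchies so that aggregate queries read precomputed sums rather than scanning raw rows. Data are synthetic.

        from collections import defaultdict
        import random

        random.seed(0)
        DEPTH = 4
        cube = defaultdict(int)                 # (quadtree_prefix, hour_bin) -> count

        def quad_address(x, y, depth=DEPTH):
            """Quadtree address of a point in the unit square."""
            addr, x0, y0, size = "", 0.0, 0.0, 1.0
            for _ in range(depth):
                size /= 2
                qx, qy = x >= x0 + size, y >= y0 + size
                addr += str(2 * qy + qx)
                x0, y0 = x0 + size * qx, y0 + size * qy
            return addr

        # Ingest synthetic events: every prefix of the spatial address gets the count.
        for _ in range(100_000):
            x, y, hour = random.random(), random.random(), random.randrange(24)
            addr = quad_address(x, y)
            for d in range(DEPTH + 1):
                cube[(addr[:d], hour)] += 1

        # Aggregate query: events in the lower-left quadrant ("0") between 9h and 11h.
        print(sum(cube[("0", h)] for h in range(9, 12)))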

  18. Sparse modeling of spatial environmental variables associated with asthma

    PubMed Central

    Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

    2014-01-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437

  19. Sparse modeling of spatial environmental variables associated with asthma.

    PubMed

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
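
    A rough sketch of the two-stage idea described above, under strong simplifying assumptions: sparse principal components are extracted from block-group variables with scikit-learn's SparsePCA, and a plain logistic regression with raw coordinates as extra covariates stands in for the logistic thin plate regression spline spatial term. This is not the authors' SASEA pipeline, and all data are synthetic.

        import numpy as np
        from sklearn.decomposition import SparsePCA
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, p = 2000, 50
        env = rng.standard_normal((n, p))                # block-group environmental variables
        coords = rng.uniform(0.0, 1.0, size=(n, 2))      # geocoded locations
        risk = 1.0 / (1.0 + np.exp(-(env[:, 0] + coords[:, 0])))
        y = (rng.random(n) < risk).astype(int)           # synthetic asthma labels

        spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
        components = spca.fit_transform(env)             # sparse, interpretable components

        X = np.hstack([components, coords])              # crude spatial term: raw coordinates
        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(model.coef_.round(2))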

  20. Design and realization of tourism spatial decision support system based on GIS

    NASA Astrophysics Data System (ADS)

    Ma, Zhangbao; Qi, Qingwen; Xu, Li

    2008-10-01

    This paper analyzes the problems of existing tourism management information systems. GIS, tourism, and spatial decision support systems are introduced, and the application of geographic information system technology and spatial decision support methods to tourism management is proposed, leading to a tourism spatial decision support system based on GIS. The overall system structure, the hardware and software environment, the database design, and the module structure of the system are described. Finally, the methods used to realize the system's core functions are elaborated.

  1. Construction and comparative evaluation of different activity detection methods in brain FDG-PET.

    PubMed

    Buchholz, Hans-Georg; Wenzel, Fabian; Gartenschläger, Martin; Thiele, Frank; Young, Stewart; Reuss, Stefan; Schreckenberger, Mathias

    2015-08-18

    We constructed and evaluated reference brain FDG-PET databases for use with three software programs (Computer-aided diagnosis for dementia (CAD4D), Statistical Parametric Mapping (SPM) and NEUROSTAT), which allow a user-independent detection of dementia-related hypometabolism in patients' brain FDG-PET. Thirty-seven healthy volunteers were scanned in order to construct brain FDG reference databases, which reflect the normal, age-dependent glucose consumption in the human brain, with each software package. The databases were compared to each other to assess the impact of the different stereotactic normalization algorithms used by each package. In addition, the performance of the new reference databases in the detection of altered glucose consumption in the brains of patients was evaluated by calculating statistical maps of regional hypometabolism in FDG-PET of 20 patients with confirmed Alzheimer's dementia (AD) and of 10 non-AD patients. Extent (hypometabolic volume, referred to as cluster size) and magnitude (peak z-score) of the detected hypometabolism were statistically analyzed. Differences between the reference databases built by CAD4D, SPM and NEUROSTAT were observed. Due to the different normalization methods, altered spatial FDG patterns were found. When analyzing patient data with the reference databases created using CAD4D, SPM or NEUROSTAT, similar characteristic clusters of hypometabolism in the same brain regions were found in the AD group with each software package. However, larger z-scores were observed with CAD4D and NEUROSTAT than those reported by SPM. Better concordance with CAD4D and NEUROSTAT was achieved using the spatially normalized images of SPM and an independent z-score calculation. The three software packages identified the peak z-scores in the same brain region in 11 of 20 AD cases, and there was concordance between CAD4D and SPM in 16 AD subjects. The clinical evaluation of brain FDG-PET of 20 AD patients with CAD4D-, SPM- or NEUROSTAT-generated databases from an identical reference dataset showed similar patterns of hypometabolism in the brain regions known to be involved in AD. The extent of hypometabolism and the peak z-score appeared to be influenced by the calculation method used in each software package rather than by different spatial normalization parameters.
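
    The z-score maps discussed above follow a standard construction: each patient voxel is compared against the reference database's mean and standard deviation. A minimal NumPy sketch, assuming the scans are already spatially normalized and stacked into arrays (all array names and the threshold are illustrative):

      import numpy as np

      # reference: (n_controls, n_voxels) intensity-normalized FDG uptake
      # patient:   (n_voxels,) spatially normalized patient scan
      rng = np.random.default_rng(1)
      reference = rng.normal(1.0, 0.1, size=(37, 10000))
      patient = rng.normal(0.95, 0.1, size=10000)

      mu = reference.mean(axis=0)
      sigma = reference.std(axis=0, ddof=1)
      z = (patient - mu) / sigma          # negative z = hypometabolism

      threshold = -2.0                    # illustrative cut-off
      hypo = z < threshold
      cluster_size = int(hypo.sum())      # extent (voxel count)
      peak_z = float(z.min())             # magnitude (most negative z)
      print(cluster_size, round(peak_z, 2))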

  2. User Generated Spatial Content Sources for Land Use/Land Cover Validation Purposes: Suitability Analysis and Integration Model

    NASA Astrophysics Data System (ADS)

    Estima, Jacinto Paulo Simoes

    Traditional geographic information has been produced by mapping agencies and corporations, using highly skilled people as well as expensive precision equipment and procedures, in a very costly approach. The production of land use and land cover databases is just one example of such a traditional approach. On the other side, the amount of geographic information created and shared by citizens through the Web has been increasing exponentially during the last decade, resulting from the emergence and popularization of technologies such as Web 2.0, cloud computing, GPS, and smart phones, among others. Such a comprehensive amount of free geographic data may hold valuable information to extract, opening great possibilities to significantly improve the production of land use and land cover databases. In this thesis we explored the feasibility of using geographic data from different user generated spatial content initiatives in the process of land use and land cover database production. Data from Panoramio, Flickr and OpenStreetMap were explored in terms of their spatial and temporal distribution, and their distribution over the different land use and land cover classes. We then proposed a conceptual model to integrate data from suitable user generated spatial content initiatives based on identified dissimilarities among a comprehensive list of initiatives. Finally, we developed a prototype implementing the proposed integration model, which was then validated by using the prototype to solve four identified use cases. We concluded that data from user generated spatial content initiatives have great value but should be integrated to increase their potential. The possibility of integrating data from such initiatives in an integration model was demonstrated. Using the developed prototype, the relevance of the integration model was also demonstrated for different use cases.

  3. COSMO-SkyMed and GIS applications

    NASA Astrophysics Data System (ADS)

    Milillo, Pietro; Sole, Aurelia; Serio, Carmine

    2013-04-01

    Geographic Information Systems (GIS) and Remote Sensing have become key technology tools for the collection, storage and analysis of spatially referenced data. Industries that utilise these spatial technologies include agriculture, forestry, mining, market research as well as environmental analysis. Synthetic Aperture Radar (SAR) is a coherent active sensor operating in the microwave band which exploits the relative motion between antenna and target, together with the Doppler effect, to obtain a finer spatial resolution in the flight direction. SAR has wide applications in Remote Sensing such as cartography, surface deformation detection, forest cover mapping, urban planning, disaster monitoring, surveillance, etc. The utilization of satellite remote sensing and GIS technology for these applications has proven to be a powerful and effective tool for environmental monitoring. Remote sensing techniques are often less costly and time-consuming for large geographic areas compared to conventional methods; moreover, GIS technology provides a flexible environment for analyzing and displaying digital data from various sources necessary for classification, change detection and database development. The aim of this work is to illustrate the potential of COSMO-SkyMed data and SAR applications in a GIS environment; in particular, a demonstration of the operational use of COSMO-SkyMed SAR data and GIS in real cases is provided for DEM validation, river basin estimation, flood mapping and landslide monitoring.

  4. Spatial modeling of potential woody biomass flow

    Treesearch

    Woodam Chung; Nathaniel Anderson

    2012-01-01

    The flow of woody biomass to end users is determined by economic factors, especially the amount available across a landscape and the delivery costs to bioenergy facilities. The objective of this study was to develop a methodology to quantify landscape-level stocks and potential biomass flows using currently available spatial databases and road network analysis tools. We applied this...

  5. A geologic and mineral exploration spatial database for the Stillwater Complex, Montana

    USGS Publications Warehouse

    Zientek, Michael L.; Parks, Heather L.

    2014-01-01

    This report provides essential spatially referenced datasets based on geologic mapping and mineral exploration activities conducted from the 1920s to the 1990s. This information will facilitate research on the complex and provide background material needed to explore for mineral resources and to develop sound land-management policy.

  6. Comparison of alternative spatial resolutions in the application of a spatially distributed biogeochemical model over complex terrain

    USGS Publications Warehouse

    Turner, D.P.; Dodson, R.; Marks, D.

    1996-01-01

    Spatially distributed biogeochemical models may be applied over grids at a range of spatial resolutions; however, evaluation of potential errors and loss of information at relatively coarse resolutions is rare. In this study, a georeferenced database at 1-km spatial resolution was developed to initialize and drive a process-based model (Forest-BGC) of water and carbon balance over a gridded 54,976 km2 area covering two river basins in mountainous western Oregon. Corresponding data sets were also prepared at 10-km and 50-km spatial resolutions using commonly employed aggregation schemes. Estimates were made at each grid cell for climate variables including daily solar radiation, air temperature, humidity, and precipitation. The topographic structure, water holding capacity, vegetation type and leaf area index were likewise estimated for initial conditions. The daily time series for the climatic drivers was developed from interpolations of meteorological station data for the water year 1990 (1 October 1989-30 September 1990). Model outputs at the 1-km resolution showed good agreement with observed patterns in runoff and productivity. The ranges for model inputs at the 10-km and 50-km resolutions tended to contract because of the smoothed topography. Estimates for mean evapotranspiration and runoff were relatively insensitive to changing the spatial resolution of the grid, whereas estimates of mean annual net primary production varied by 11%. The designation of a vegetation type and leaf area at the 50-km resolution often subsumed significant heterogeneity in vegetation, and this factor accounted for much of the difference in the mean values for the carbon flux variables. Although area-wide means for model outputs were generally similar across resolutions, difference maps often revealed large areas of disagreement. Relatively high spatial resolution analyses of biogeochemical cycling are desirable from several perspectives and may be particularly important in the study of the potential impacts of climate change.
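
    The 1-km to 10-km and 50-km aggregation can be illustrated with a simple block-mean scheme, one plausible instance of the "commonly employed aggregation schemes" the abstract mentions (the paper's exact scheme is not specified here). A NumPy sketch with a synthetic 1-km grid:

      import numpy as np

      def block_aggregate(grid, factor, reducer=np.nanmean):
          # Aggregate a 2-D grid by an integer factor (e.g. 1 km -> 10 km cells).
          ny, nx = grid.shape
          ny_c, nx_c = ny // factor, nx // factor
          trimmed = grid[:ny_c * factor, :nx_c * factor]
          blocks = trimmed.reshape(ny_c, factor, nx_c, factor)
          return reducer(blocks, axis=(1, 3))

      # Synthetic 1-km field (e.g. leaf area index or temperature).
      km1 = np.random.default_rng(2).normal(10.0, 3.0, size=(300, 200))
      km10 = block_aggregate(km1, 10)
      km50 = block_aggregate(km1, 50)
      # The coarser grids have reduced variability (smoothed topography/vegetation),
      # which is the contraction of input ranges noted in the study.
      print(km1.std(), km10.std(), km50.std())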

  7. Precision measurements from very-large scale aerial digital imagery.

    PubMed

    Booth, D Terrance; Cox, Samuel E; Berryman, Robert D

    2006-01-01

    Resource managers need measurements of the length and width of a variety of items, including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches and other features important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very large scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel by using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface for accounting for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured from a high frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss application of these tools, including error estimates. We found measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with an error of less than 10%. We recommend these software packages as a means of expanding the utility of aerial image data.
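
    The pixel-coverage correction that this workflow relies on is essentially a ground-sample-distance calculation driven by the laser-measured altitude AGL. A sketch of the underlying arithmetic, assuming a nadir-pointing camera; the focal length and pixel pitch below are illustrative, not the parameters of the cited system:

      def ground_sample_distance(altitude_agl_m, focal_length_mm, pixel_pitch_um):
          # Ground coverage of one pixel (metres) for a nadir-pointing camera.
          return altitude_agl_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

      def object_length_m(pixel_count, altitude_agl_m,
                          focal_length_mm=100.0, pixel_pitch_um=7.2):
          # Convert an on-image measurement (pixels) to a ground length (metres).
          return pixel_count * ground_sample_distance(
              altitude_agl_m, focal_length_mm, pixel_pitch_um)

      # Example: a log measured as 850 pixels long in an image taken from 100 m AGL.
      print(round(object_length_m(850, 100.0), 3), "m")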

  8. National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.

    PubMed

    Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R

    2017-08-05

    Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons

  9. Advancement of a soil parameters geodatabase for the modeling assessment of conservation practice outcomes in the United States

    USDA-ARS?s Scientific Manuscript database

    US-ModSoilParms-TEMPLE is a database composed of a set of geographic databases functionally storing soil-spatial units and soil hydraulic, physical, and chemical parameters for three agriculture management simulation models, SWAT, APEX, and ALMANAC. This paper introduces the updated US-ModSoilParms-...

  10. [Assessment on ecological security spatial differences of west areas of Liaohe River based on GIS].

    PubMed

    Wang, Geng; Wu, Wei

    2005-09-01

    Ecological security assessment and early-warning research involve spatial, non-linear, and random processes and require handling large amounts of spatial information. Spatial analysis and data management are strengths of GIS: it can characterize the distribution trends and spatial relations of environmental factors and display ecological security patterns graphically. This paper discusses a GIS-based method for assessing spatial differences in ecological security in the west areas of the Liaohe River, grounded in the concept of ecosystem non-health. First, a pressure-state-response (P-S-R) assessment indicator system was studied and information was gathered through field investigation; second, the river was digitized and fuzzy AHP was applied to assign weights, with quantification and calculation by fuzzy comparison; finally, a grid database was established and the spatial differences in ecological security were expounded through GIS interpolation and assembly.

  11. a Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

    Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and single-thread processing methods, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divided the spatiotemporal domain into several spatiotemporal cubes, computed the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrated the query results. Testing and analysis on real datasets show that this method improves the query speed significantly.
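
    A minimal sketch of the cube-partition-then-parallel-aggregate idea, using Python's standard thread pool; the point dataset, cube sizes, and the count aggregate are hypothetical stand-ins for the paper's server-side implementation.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      rng = np.random.default_rng(3)
      # Moving-object observations: columns x, y, t (synthetic).
      pts = np.column_stack([rng.uniform(0, 100, 200_000),
                             rng.uniform(0, 100, 200_000),
                             rng.uniform(0, 24, 200_000)])

      def cube_aggregate(bounds):
          # Aggregate (here: count) the observations inside one spatiotemporal cube.
          (x0, x1), (y0, y1), (t0, t1) = bounds
          m = ((pts[:, 0] >= x0) & (pts[:, 0] < x1) &
               (pts[:, 1] >= y0) & (pts[:, 1] < y1) &
               (pts[:, 2] >= t0) & (pts[:, 2] < t1))
          return int(m.sum())

      # Regional division: 10 x 10 spatial cells x 4 time slices.
      cubes = [((x, x + 10), (y, y + 10), (t, t + 6))
               for x in range(0, 100, 10)
               for y in range(0, 100, 10)
               for t in range(0, 24, 6)]

      with ThreadPoolExecutor(max_workers=8) as pool:
          partials = list(pool.map(cube_aggregate, cubes))
      total = sum(partials)   # integrate per-cube results into the final answer
      print(total)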

  12. Large-Scale Spatial Distribution Patterns of Gastropod Assemblages in Rocky Shores

    PubMed Central

    Miloslavich, Patricia; Cruz-Motta, Juan José; Klein, Eduardo; Iken, Katrin; Weinberger, Vanessa; Konar, Brenda; Trott, Tom; Pohle, Gerhard; Bigatti, Gregorio; Benedetti-Cecchi, Lisandro; Shirayama, Yoshihisa; Mead, Angela; Palomo, Gabriela; Ortiz, Manuel; Gobin, Judith; Sardi, Adriana; Díaz, Juan Manuel; Knowlton, Ann; Wong, Melisa; Peralta, Ana C.

    2013-01-01

    Gastropod assemblages from nearshore rocky habitats were studied over large spatial scales to (1) describe broad-scale patterns in assemblage composition, including patterns by feeding modes, (2) identify latitudinal pattern of biodiversity, i.e., richness and abundance of gastropods and/or regional hotspots, and (3) identify potential environmental and anthropogenic drivers of these assemblages. Gastropods were sampled from 45 sites distributed within 12 Large Marine Ecosystem regions (LME) following the NaGISA (Natural Geography in Shore Areas) standard protocol (www.nagisa.coml.org). A total of 393 gastropod taxa from 87 families were collected. Eight of these families (9.2%) appeared in four or more different LMEs. Among these, the Littorinidae was the most widely distributed (8 LMEs) followed by the Trochidae and the Columbellidae (6 LMEs). In all regions, assemblages were dominated by few species, the most diverse and abundant of which were herbivores. No latitudinal gradients were evident in relation to species richness or densities among sampling sites. Highest diversity was found in the Mediterranean and in the Gulf of Alaska, while highest densities were found at different latitudes and represented by few species within one genus (e.g. Afrolittorina in the Agulhas Current, Littorina in the Scotian Shelf, and Lacuna in the Gulf of Alaska). No significant correlation was found between species composition and environmental variables (r≤0.355, p>0.05). Contributing variables to this low correlation included invasive species, inorganic pollution, SST anomalies, and chlorophyll-a anomalies. Despite data limitations in this study which restrict conclusions in a global context, this work represents the first effort to sample gastropod biodiversity on rocky shores using a standardized protocol across a wide scale. Our results will generate more work to build global databases allowing for large-scale diversity comparisons of rocky intertidal assemblages. PMID:23967204

  13. A spatial national health facility database for public health sector planning in Kenya in 2008.

    PubMed

    Noor, Abdisalan M; Alegana, Victor A; Gething, Peter W; Snow, Robert W

    2009-03-06

    Efforts to tackle the enormous burden of ill-health in low-income countries are hampered by weak health information infrastructures that do not support appropriate planning and resource allocation. For health information systems to function well, a reliable inventory of health service providers is critical. The spatial referencing of service providers to allow their representation in a geographic information system is vital if the full planning potential of such data is to be realized. A disparate series of contemporary lists of health service providers were used to update a public health facility database of Kenya last compiled in 2003. These new lists were derived primarily through the national distribution of antimalarial and antiretroviral commodities since 2006. A combination of methods, including global positioning systems, was used to map service providers. These spatially-referenced data were combined with high-resolution population maps to analyze disparity in geographic access to public health care. The updated 2008 database contained 5,334 public health facilities (67% ministry of health; 28% mission and nongovernmental organizations; 2% local authorities; and 3% employers and other ministries). This represented an overall increase of 1,862 facilities compared to 2003. Most of the additional facilities belonged to the ministry of health (79%) and the majority were dispensaries (91%). 93% of the health facilities were spatially referenced, 38% using global positioning systems compared to 21% in 2003. 89% of the population was within 5 km Euclidean distance to a public health facility in 2008 compared to 71% in 2003. Over 80% of the population outside 5 km of public health service providers was in the sparsely settled pastoralist areas of the country. We have shown that, with concerted effort, a relatively complete inventory of mapped health services is possible with enormous potential for improving planning. Expansion in public health care in Kenya has resulted in significant increases in geographic access although several areas of the country need further improvements. This information is key to future planning and with this paper we have released the digital spatial database in the public domain to assist the Kenyan Government and its partners in the health sector.
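
    The geographic-access statistic quoted above (share of the population within 5 km Euclidean distance of a facility) can be computed from a facility layer and a gridded population map with a k-d tree. A sketch with SciPy, using synthetic coordinates in a projected (metre) coordinate system; the array names are hypothetical.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(4)
      facilities = rng.uniform(0, 500_000, size=(5334, 2))   # projected x, y in metres
      pop_xy = rng.uniform(0, 500_000, size=(100_000, 2))    # population grid-cell centroids
      pop_count = rng.integers(0, 400, size=100_000)         # people per cell

      tree = cKDTree(facilities)
      dist, _ = tree.query(pop_xy, k=1)                      # Euclidean distance to nearest facility
      within_5km = dist <= 5_000
      share = pop_count[within_5km].sum() / pop_count.sum()
      print(f"{share:.1%} of the population within 5 km of a public facility")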

  14. An efficient representation of spatial information for expert reasoning in robotic vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Steven; Interrante, Mark

    1987-01-01

    The previous generation of robotic vehicles and drones was designed for specific tasks, with limited flexibility in executing their missions. This limited flexibility arises because the robotic vehicles do not possess the intelligence and knowledge upon which to make significant tactical decisions. Current development of robotic vehicles is toward increased intelligence and capabilities, adapting to a changing environment and altering mission objectives. The latest techniques in artificial intelligence (AI) are being employed to increase the robotic vehicle's intelligent decision-making capabilities. This document describes the design of the SARA spatial database tool, which is composed of request parser, reasoning, computations, and database modules that collectively manage and derive information useful for robotic vehicles.

  15. Cross-checking of Large Evaluated and Experimental Nuclear Reaction Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeydina, O.; Koning, A.J.; Soppera, N.

    2014-06-15

    Automated methods are presented for the verification of large experimental and evaluated nuclear reaction databases (e.g. EXFOR, JEFF, TENDL). These methods allow an assessment of the overall consistency of the data and detect aberrant values in both evaluated and experimental databases.

  16. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System

    PubMed Central

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-01-01

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but it requires a lot of time and effort, and the labor increases as the range of the indoor environment becomes larger. To provide better indoor positioning services and at the same time reduce the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experiment results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to that without Kriging. PMID:26343673

  17. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    PubMed

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but it requires a lot of time and effort, and the labor increases as the range of the indoor environment becomes larger. To provide better indoor positioning services and at the same time reduce the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experiment results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to that without Kriging.
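
    A minimal sketch of extending a surveyed RSS fingerprint database by ordinary Kriging, using the pykrige package as one possible implementation (the paper does not name a library); the reference-point survey below is synthetic, generated from a log-distance path-loss model.

      import numpy as np
      from pykrige.ok import OrdinaryKriging

      rng = np.random.default_rng(5)
      # Surveyed reference points: position (m) and RSS (dBm) for one Wi-Fi AP.
      rp_x = rng.uniform(0, 30, 60)
      rp_y = rng.uniform(0, 20, 60)
      d = np.hypot(rp_x - 15, rp_y - 10) + 1.0
      rss = -40 - 20 * np.log10(d) + rng.normal(0, 1.5, 60)   # path loss + noise

      ok = OrdinaryKriging(rp_x, rp_y, rss, variogram_model="exponential")
      grid_x = np.arange(0.0, 30.0, 0.5)
      grid_y = np.arange(0.0, 20.0, 0.5)
      rss_grid, variance = ok.execute("grid", grid_x, grid_y)  # densified fingerprint map
      print(rss_grid.shape)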

  18. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
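
    The "sequence library subset" idea above can be illustrated with Python's built-in sqlite3 in place of the unit's seqdb_demo schema; the table layout, accessions, and thresholds below are hypothetical simplifications.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
          CREATE TABLE hit (query TEXT, acc TEXT, evalue REAL);
      """)
      con.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                      [("P1", "E. coli", "MKT..."),
                       ("P2", "H. sapiens", "MEE..."),
                       ("P3", "E. coli", "MSA...")])
      con.executemany("INSERT INTO hit VALUES (?, ?, ?)",
                      [("Q1", "P1", 1e-30), ("Q1", "P3", 0.2)])

      # Subset the library to one taxon before re-searching; restricting the
      # effective database size is what sharpens the search statistics.
      subset = con.execute(
          "SELECT acc, seq FROM protein WHERE taxon = ?", ("E. coli",)).fetchall()

      # Store and reuse prior similarity-search results for genome-scale analyses.
      strong = con.execute("SELECT acc FROM hit WHERE evalue < 1e-5").fetchall()
      print(subset, strong)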

  19. Seasonal and spatial variations of source and drinking water quality in small municipal systems of two Canadian regions.

    PubMed

    Scheili, A; Rodriguez, M J; Sadiq, R

    2015-03-01

    A one-year sampling program covering twenty-five small municipal systems was carried out in two Canadian regions to improve our understanding of the variability of water quality in small systems from the water source to the end of the distribution system (DS). The database obtained was used to develop a global portrait of physical, chemical and microbiological water quality parameters. More precisely, the temporal and spatial variability of these parameters was investigated. We observed that the levels of natural organic matter (NOM) varied across seasons, with maxima in the fall for both provinces. In the regions under study, the highest trihalomethane (THM) and haloacetic acid (HAA) levels occurred in the warmer seasons (summer, fall), as observed in previous studies involving large systems. Observed THM and HAA levels were three times higher in systems in the province of Newfoundland & Labrador than in the province of Quebec. Taste and odor indicators were detected during the summer and fall, and higher heterotrophic plate count (HPC) levels were associated with lower free chlorine levels. To determine spatial variations, stepwise statistical analysis was used to identify parameters and locations in the DS that act as indicators of drinking water quality. As observed for medium and large systems, free chlorine consumption and THM and HAA levels depended on their location in the DS. We also observed that the degradation of HAAs is more important in small systems than in the medium or large DS reported in the literature, and this degradation can occur from the beginning of the DS. The results of this research may provide valuable information on drinking water quality to small system operators and pave the way for several opportunities to improve water quality management. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Design and implementation of PAVEMON: A GIS web-based pavement monitoring system based on large amounts of heterogeneous sensors data

    NASA Astrophysics Data System (ADS)

    Shahini Shamsabadi, Salar

    A web-based PAVEment MONitoring system, PAVEMON, is a GIS-oriented platform for accommodating, representing, and leveraging data from a multi-modal mobile sensor system. This sensor system consists of acoustic, optical, electromagnetic, and GPS sensors and is capable of producing as much as 1 terabyte of data per day. Multi-channel raw sensor data (microphone, accelerometer, tire pressure sensor, video) and processed results (road profile, crack density, international roughness index, micro texture depth, etc.) are outputs of this sensor system. By correlating the sensor measurements and positioning data collected in tight time synchronization, PAVEMON attaches a spatial component to all the datasets. These spatially indexed outputs are placed into an Oracle database which integrates seamlessly with PAVEMON's web-based system. The web-based system of PAVEMON consists of two major modules: 1) a GIS module for visualizing and spatially analyzing pavement condition information layers, and 2) a decision-support module for managing maintenance and repair (M&R) activities and predicting future budget needs. PAVEMON weaves together sensor data with third-party climate and traffic information from the National Oceanic and Atmospheric Administration (NOAA) and Long Term Pavement Performance (LTPP) databases for an organized, data-driven approach to pavement management activities. PAVEMON deals with heterogeneous and redundant observations by fusing them into jointly derived, higher-confidence results. A prominent example of the fusion algorithms developed within PAVEMON is a data fusion algorithm used for estimating overall pavement conditions in terms of ASTM's Pavement Condition Index (PCI). PAVEMON predicts PCI by undertaking a statistical fusion approach and selecting a subset of all the sensor measurements. Other fusion algorithms include noise-removal algorithms to remove false negatives in the sensor data, in addition to fusion algorithms developed for identifying features on the road. PAVEMON offers an ideal research and monitoring platform for rapid, intelligent and comprehensive evaluation of tomorrow's transportation infrastructure based on up-to-date data from heterogeneous sensor systems.
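
    The core ingestion step, attaching a spatial component by matching measurements to GPS fixes collected in tight time synchronization, can be sketched with a pandas as-of join; the PAVEMON pipeline and Oracle schema are not described in this detail, so the frames and column names below are illustrative.

      import pandas as pd

      gps = pd.DataFrame({
          "t": pd.to_datetime(["2015-06-01 10:00:00.0", "2015-06-01 10:00:00.5",
                               "2015-06-01 10:00:01.0"]),
          "lat": [42.3601, 42.3602, 42.3603],
          "lon": [-71.0589, -71.0588, -71.0587]})
      sensor = pd.DataFrame({
          "t": pd.to_datetime(["2015-06-01 10:00:00.2", "2015-06-01 10:00:00.9"]),
          "iri": [2.1, 2.4]})                      # e.g. international roughness index

      # Attach the nearest-in-time GPS fix to each sensor record, giving every
      # measurement the spatial index the GIS module needs.
      georef = pd.merge_asof(sensor.sort_values("t"), gps.sort_values("t"),
                             on="t", direction="nearest")
      print(georef)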

  1. Using Random Forest to Improve the Downscaling of Global Livestock Census Data

    PubMed Central

    Nicolas, Gaëlle; Robinson, Timothy P.; Wint, G. R. William; Conchedda, Giulia; Cinardi, Giuseppina; Gilbert, Marius

    2016-01-01

    Large scale, high-resolution global data on farm animal distributions are essential for spatially explicit assessments of the epidemiological, environmental and socio-economic impacts of the livestock sector. This has been the major motivation behind the development of the Gridded Livestock of the World (GLW) database, which has been extensively used since its first publication in 2007. The database relies on a downscaling methodology whereby census counts of animals in sub-national administrative units are redistributed at the level of grid cells as a function of a series of spatial covariates. The recent upgrade of GLW1 to GLW2 involved automating the processing, improvement of input data, and downscaling at a spatial resolution of 1 km per cell (5 km per cell in the earlier version). The underlying statistical methodology, however, remained unchanged. In this paper, we evaluate new methods to downscale census data with a higher accuracy and increased processing efficiency. Two main factors were evaluated, based on sample census datasets of cattle in Africa and chickens in Asia. First, we implemented and evaluated Random Forest models (RF) instead of stratified regressions. Second, we investigated whether models that predicted the number of animals per rural person (per capita) could provide better downscaled estimates than the previous approach that predicted absolute densities (animals per km2). RF models consistently provided better predictions than the stratified regressions for both continents and species. The benefit of per capita over absolute density models varied according to the species and continent. In addition, different technical options were evaluated to reduce the processing time while maintaining their predictive power. Future GLW runs (GLW 3.0) will apply the new RF methodology with optimized modelling options. The potential benefit of per capita models will need to be further investigated with a better distinction between rural and agricultural populations. PMID:26977807
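
    A minimal sketch of the Random Forest downscaling step described above: train on census-unit livestock figures against spatial covariates, then predict at grid-cell level and convert back to counts. scikit-learn is assumed and all data are synthetic; the per-capita option is the variant the paper evaluates against absolute densities.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(6)
      # Training data: one row per sub-national census unit.
      covariates = rng.normal(size=(800, 12))      # e.g. climate, land cover, travel time
      rural_pop = rng.uniform(1e3, 1e5, 800)
      animals = np.exp(1.5 + covariates[:, 0] + 0.5 * covariates[:, 1]
                       + rng.normal(0, 0.3, 800)) * rural_pop / 1e3

      # Per-capita variant: model animals per rural person, then convert back
      # with a gridded rural-population layer at prediction time.
      per_capita = animals / rural_pop
      rf = RandomForestRegressor(n_estimators=500, random_state=0, n_jobs=-1)
      rf.fit(covariates, per_capita)

      # Prediction: one row per grid cell.
      cell_covariates = rng.normal(size=(10_000, 12))
      cell_rural_pop = rng.uniform(0, 300, 10_000)
      cell_animals = rf.predict(cell_covariates) * cell_rural_pop
      print(cell_animals.sum())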

  2. Estimating Long Term Surface Soil Moisture in the GCIP Area From Satellite Microwave Observations

    NASA Technical Reports Server (NTRS)

    Owe, Manfred; deJeu, Vrije; VandeGriend, Adriaan A.

    2000-01-01

    Soil moisture is an important component of the water and energy balances of the Earth's surface. Furthermore, it has been identified as a parameter of significant potential for improving the accuracy of large-scale land surface-atmosphere interaction models. However, accurate estimates of surface soil moisture are often difficult to make, especially at large spatial scales. Soil moisture is a highly variable land surface parameter, and while point measurements are usually accurate, they are representative only of the immediate site which was sampled. Simple averaging of point values to obtain spatial means often leads to substantial errors. Since remotely sensed observations are already a spatially averaged or areally integrated value, they are ideally suited for measuring land surface parameters, and as such, are a logical input to regional or larger scale land process models. A nine-year database of surface soil moisture is being developed for the Central United States from satellite microwave observations. This region forms much of the GCIP study area, and contains most of the Mississippi, Rio Grande, and Red River drainages. Daytime and nighttime microwave brightness temperatures were observed at a frequency of 6.6 GHz, by the Scanning Multichannel Microwave Radiometer (SMMR), onboard the Nimbus 7 satellite. The life of the SMMR instrument spanned from Nov. 1978 to Aug. 1987. At 6.6 GHz, the instrument provided a spatial resolution of approximately 150 km, and an orbital frequency over any pixel-sized area of about 2 daytime and 2 nighttime passes per week. Ground measurements of surface soil moisture from various locations throughout the study area are used to calibrate the microwave observations. Because ground measurements are usually only single point values, and since the time of satellite coverage does not always coincide with the ground measurements, the soil moisture data were used to calibrate a regional water balance for the top 1, 5, and 10 cm surface layers in order to interpolate daily surface moisture values. Such a climate-based approach is often more appropriate for estimating large-area spatially averaged soil moisture because meteorological data are generally more spatially representative than isolated point measurements of soil moisture. Vegetation radiative transfer characteristics, such as the canopy transmissivity, were estimated from vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and the 37 GHz Microwave Polarization Difference Index (MPDI). Passive microwave remote sensing presents the greatest potential for providing regular spatially representative estimates of surface soil moisture at global scales. Real time estimates should improve weather and climate modelling efforts, while the development of historical data sets will provide necessary information for simulation and validation of long-term climate and global change studies.

  3. A case study in adaptable and reusable infrastructure at the Keck Observatory Archive: VO interfaces, moving targets, and more

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Cohen, Richard W.; Colson, Andrew; Gelino, Christopher R.; Good, John C.; Kong, Mihseh; Laity, Anastasia C.; Mader, Jeffrey A.; Swain, Melanie A.; Tran, Hien D.; Wang, Shin-Ywan

    2016-08-01

    The Keck Observatory Archive (KOA) (https://koa.ipac.caltech.edu) curates all observations acquired at the W. M. Keck Observatory (WMKO) since it began operations in 1994, including data from eight active instruments and two decommissioned instruments. The archive is a collaboration between WMKO and the NASA Exoplanet Science Institute (NExScI). Since its inception in 2004, the science information system used at KOA has adopted an architectural approach that emphasizes software re-use and adaptability. This paper describes how KOA is currently leveraging and extending open source software components to develop new services and to support delivery of a complete set of instrument metadata, which will enable more sophisticated and extensive queries than currently possible. In August 2015, KOA deployed a program interface to discover public data from all instruments equipped with an imaging mode. The interface complies with version 2 of the Simple Imaging Access Protocol (SIAP), under development by the International Virtual Observatory Alliance (IVOA), which defines a standard mechanism for discovering images through spatial queries. The heart of the KOA service is an R-tree-based database-indexing mechanism prototyped by the Virtual Astronomical Observatory (VAO) and further developed by the Montage Image Mosaic project, designed to provide fast access to large imaging data sets as a first step in creating wide-area image mosaics (such as mosaics of subsets of the 4.7 million images of the SDSS DR9 release). The KOA service uses the results of the spatial R-tree search to create an SQLite database for further relational filtering. The service uses a JSON configuration file to describe the association between instrument parameters and the service query parameters, and to make it applicable beyond the Keck instruments. The images generated at the Keck telescope usually do not encode the image footprints as WCS fields in the FITS file headers. Because SIAP searches are spatial, much of the effort in developing the program interface involved processing the instrument and telescope parameters to understand how accurately we can derive the WCS information for each instrument. This knowledge is now being fed back into the KOA databases as part of a program to include complete metadata information for all imaging observations. The R-tree program was itself extended to support temporal (in addition to spatial) indexing, in response to requests from the planetary science community for a search engine to discover observations of Solar System objects. With this 3D-indexing scheme, the service performs very fast time and spatial matches between the target ephemerides, obtained from the JPL SPICE service, and the image footprints. Our experiments indicate these matches can be more than 100 times faster than when separating temporal and spatial searches. Images of the tracks of the moving targets, overlaid with the image footprints, are computed with a new command-line visualization tool, mViewer, released with the Montage distribution. The service is currently in test and will be released in late summer 2016.
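
    The 3D (sky position plus time) indexing idea can be sketched with the Python rtree package, which wraps libspatialindex; this is only an illustration of the indexing scheme, not the KOA implementation, and the footprint coordinates below are made up.

      from rtree import index

      p = index.Property()
      p.dimension = 3                              # (ra, dec, time)
      idx = index.Index(properties=p, interleaved=True)

      # Insert image footprints as (ra_min, dec_min, t_min, ra_max, dec_max, t_max).
      footprints = [
          (0, (150.10, 2.20, 57000.1, 150.15, 2.25, 57000.2)),
          (1, (150.12, 2.21, 57003.4, 150.17, 2.26, 57003.5)),
          (2, (210.00, -5.00, 57000.1, 210.05, -4.95, 57000.2)),
      ]
      for i, bbox in footprints:
          idx.insert(i, bbox)

      # Query: a moving target's ephemeris position over a short time window.
      hits = list(idx.intersection((150.11, 2.22, 57000.0, 150.14, 2.24, 57000.3)))
      print(hits)                                  # -> [0]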

  4. Surgical research using national databases

    PubMed Central

    Leland, Hyuma; Heckmann, Nathanael

    2016-01-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945

  5. Surgical research using national databases.

    PubMed

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  6. Using Historical Atlas Data to Develop High-Resolution Distribution Models of Freshwater Fishes

    PubMed Central

    Huang, Jian; Frimpong, Emmanuel A.

    2015-01-01

    Understanding the spatial pattern of species distributions is fundamental in biogeography, and conservation and resource management applications. Most species distribution models (SDMs) require or prefer species presence and absence data for adequate estimation of model parameters. However, observations with unreliable or unreported species absences dominate and limit the implementation of SDMs. Presence-only models generally yield less accurate predictions of species distribution, and make it difficult to incorporate spatial autocorrelation. The availability of large amounts of historical presence records for freshwater fishes of the United States provides an opportunity for deriving reliable absences from data reported as presence-only, when sampling was predominantly community-based. In this study, we used boosted regression trees (BRT), logistic regression, and MaxEnt models to assess the performance of a historical metacommunity database with inferred absences, for modeling fish distributions, investigating the effect of model choice and data properties thereby. With models of the distribution of 76 native, non-game fish species of varied traits and rarity attributes in four river basins across the United States, we show that model accuracy depends on data quality (e.g., sample size, location precision), species’ rarity, statistical modeling technique, and consideration of spatial autocorrelation. The cross-validation area under the receiver-operating-characteristic curve (AUC) tended to be high in the spatial presence-absence models at the highest level of resolution for species with large geographic ranges and small local populations. Prevalence affected training but not validation AUC. The key habitat predictors identified and the fish-habitat relationships evaluated through partial dependence plots corroborated most previous studies. The community-based SDM framework broadens our capability to model species distributions by innovatively removing the constraint of lack of species absence data, thus providing a robust prediction of distribution for stream fishes in other regions where historical data exist, and for other taxa (e.g., benthic macroinvertebrates, birds) usually observed by community-based sampling designs. PMID:26075902
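
    A minimal sketch of one of the modeling routes above, boosted regression trees on presence/inferred-absence data with a cross-validated AUC, assuming scikit-learn; the habitat predictors, the basin grouping, and the response are synthetic stand-ins.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import GroupKFold, cross_val_score

      rng = np.random.default_rng(7)
      n = 2000
      habitat = rng.normal(size=(n, 6))                 # e.g. elevation, slope, flow, temperature
      basin = rng.integers(0, 4, n)                     # the four river basins
      presence = (habitat[:, 0] + 0.8 * habitat[:, 1]
                  + rng.normal(0, 1, n) > 0).astype(int)  # presence / inferred absence

      brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                       max_depth=3, random_state=0)
      # Grouping folds by basin is one crude guard against spatial autocorrelation
      # inflating the validation AUC.
      auc = cross_val_score(brt, habitat, presence, groups=basin,
                            cv=GroupKFold(n_splits=4), scoring="roc_auc")
      print(auc.round(3))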

  7. Some Reliability Issues in Very Large Databases.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1988-01-01

    Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…

  8. Sifting Through SDO's AIA Cosmic Ray Hits to Find Treasure

    NASA Astrophysics Data System (ADS)

    Kirk, M. S.; Thompson, B. J.; Viall, N. M.; Young, P. R.

    2017-12-01

    The Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO AIA) has revolutionized solar imaging with its high temporal and spatial resolution, unprecedented spatial and temporal coverage, and seven EUV channels. Automated algorithms routinely clean these images to remove cosmic ray intensity spikes as a part of its preprocessing algorithm. We take a novel approach to survey the entire set of AIA "spike" data to identify and group compact brightenings across the entire SDO mission. The AIA team applies a de-spiking algorithm to remove magnetospheric particle impacts on the CCD cameras, but it has been found that compact, intense solar brightenings are often removed as well. We use the spike database to mine the data and form statistics on compact solar brightenings without having to process large volumes of full-disk AIA data. There are approximately 3 trillion "spiked pixels" removed from images over the mission to date. We estimate that 0.001% of those are of solar origin and removed by mistake, giving us a pre-segmented dataset of 30 million events. We explore the implications of these statistics and the physical qualities of the "spikes" of solar origin.

  9. Use of large healthcare databases for rheumatology clinical research.

    PubMed

    Desai, Rishi J; Solomon, Daniel H

    2017-03-01

    Large healthcare databases, which contain data collected during routinely delivered healthcare to patients, can serve as a valuable resource for generating actionable evidence to assist medical and healthcare policy decision-making. In this review, we summarize use of large healthcare databases in rheumatology clinical research. Large healthcare data are critical to evaluate medication safety and effectiveness in patients with rheumatologic conditions. Three major sources of large healthcare data are: first, electronic medical records, second, health insurance claims, and third, patient registries. Each of these sources offers unique advantages, but also has some inherent limitations. To address some of these limitations and maximize the utility of these data sources for evidence generation, recent efforts have focused on linking different data sources. Innovations such as randomized registry trials, which aim to facilitate design of low-cost randomized controlled trials built on existing infrastructure provided by large healthcare databases, are likely to make clinical research more efficient in coming years. Harnessing the power of information contained in large healthcare databases, while paying close attention to their inherent limitations, is critical to generate a rigorous evidence-base for medical decision-making and ultimately enhancing patient care.

  10. Separate mechanisms for magnitude and order processing in the spatial-numerical association of response codes (SNARC) effect: The strange case of musical note values.

    PubMed

    Prpic, Valter; Fumarola, Antonia; De Tommaso, Matteo; Luccio, Riccardo; Murgia, Mauro; Agostini, Tiziano

    2016-08-01

    The spatial-numerical association of response codes (SNARC) effect is considered an evidence of the association between numbers and space, with faster left key-press responses to small numbers and faster right key-press responses to large numbers. We examined whether visually presented note values produce a SNARC-like effect. Differently from numbers, note values are represented as a decreasing left-to-right progression, allowing us to disambiguate the contribution of order and magnitude in determining the direction of the effect. Musicians with formal education performed a note value comparison in Experiment 1 (direct task), a line orientation judgment in Experiment 2 (indirect task), and a detection task in Experiment 3 (indirect task). When note values were task relevant (direct task), participants responded faster to large note values with the left key-press, and vice versa. Conversely, when note values were task irrelevant (indirect tasks), the direction of this association was reversed. This evidence suggests the existence of separate mechanisms underlying the SNARC effect. Namely, an Order-Related Mechanism (ORM) and a Magnitude-Related Mechanism (MRM) that are revealed by different task demands. Indeed, according to a new model we proposed, ordinal and magnitude related information appears to be preferentially involved in direct and indirect tasks, respectively. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database, and they can be used as the basis to identify different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require entire databases to be loaded into memory, which restricts the amount of data they can process and makes them unable to handle databases with large amounts of data. Those algorithms also use sequential models and have slower discovery speeds, meaning that their efficiency can be improved. In this research, we introduce the use of a divide-and-conquer strategy in signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the limitation of existing algorithms that cannot process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases such as the human whole-genome EST database, which could not be processed by the existing algorithms. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
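
    A toy sketch of the divide-and-conquer idea: each chunk of the database is scanned independently for its k-mers, and a k-mer counts as a signature only if it occurs nowhere else. This simplified version uses exact matches and a local process pool in place of the cluster, and the k-mer length and sequences are arbitrary; the published DDCSD algorithm also handles similarity, not just exact uniqueness.

      from collections import Counter
      from multiprocessing import Pool

      K = 8

      def kmer_counts(seqs):
          # Count all k-mers occurring in one database chunk.
          c = Counter()
          for s in seqs:
              for i in range(len(s) - K + 1):
                  c[s[i:i + K]] += 1
          return c

      def find_signatures(chunks):
          with Pool() as pool:
              counts = pool.map(kmer_counts, chunks)   # divide: per-chunk scans in parallel
          total = Counter()
          for c in counts:
              total.update(c)                          # conquer: merge partial counts
          # A signature (in this toy version) is a k-mer seen exactly once overall.
          return {k for k, n in total.items() if n == 1}

      if __name__ == "__main__":
          chunks = [["ACGTACGTGGT", "TTGACCATGCA"], ["ACGTACGTAAA", "CCCGGGTTTAA"]]
          print(sorted(find_signatures(chunks))[:5])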

  12. Effective resolution concepts for lidar observations

    NASA Astrophysics Data System (ADS)

    Iarlori, M.; Madonna, F.; Rizi, V.; Trickl, T.; Amodeo, A.

    2015-05-01

    Since its establishment in 2000, EARLINET (European Aerosol Research Lidar NETwork) has been devoted to providing, through its database, exclusively quantitative aerosol properties, such as aerosol backscatter and aerosol extinction coefficients, the latter only for stations able to retrieve it independently (from Raman or High Spectral Resolution Lidars). As these coefficients are provided as vertical profiles, the EARLINET database must also include details on the range resolution of the submitted data. In fact, the algorithms used in lidar data analysis often alter the spectral content of the data, mainly working as low-pass filters for the purpose of noise damping. Low-pass filters are mathematically described by digital signal processing (DSP) theory as a convolution sum. As a consequence, each filter output at a given range (or time) is a linear combination of several lidar input data points at different ranges (times) before and after the given range (time): a first hint of the loss of resolution of the output signal. The application of filtering processes will also always distort the underlying true profile, whose relevant features, like aerosol layers, will then be affected both in magnitude and in spatial extension. Thus, both the removal of noise and the spatial distortion of the true profile produce a reduction of the range resolution. This paper provides a determination of the effective resolution (ERes) of the vertical profiles of aerosol properties retrieved from lidar data. Particular attention has been paid to assessing the impact of low-pass filtering on the effective range resolution in the retrieval procedure.
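
    One simple way to make the resolution loss concrete is to look at the filter's impulse response: each output bin mixes together as many raw bins as the smoothing window spans. A small NumPy illustration with a moving-average filter (the paper treats more general filters and a formal ERes definition; the bin size and window here are illustrative):

      import numpy as np

      range_step = 7.5                      # metres per raw lidar bin (illustrative)
      window = 21                           # smoothing length in bins
      h = np.ones(window) / window          # moving-average impulse response

      # Impulse response: one output value mixes `window` input bins, so the raw
      # resolution is degraded by roughly that factor.
      delta = np.zeros(301); delta[150] = 1.0
      response = np.convolve(delta, h, mode="same")
      support = np.count_nonzero(response > 0)
      print("raw resolution:", range_step, "m")
      print("effective resolution ~", support * range_step, "m")

      # The same convolution applied to a profile flattens thin aerosol layers,
      # reducing both their magnitude and their apparent spatial extent.
      profile = np.exp(-np.arange(301) / 120.0); profile[140:145] += 0.5
      smoothed = np.convolve(profile, h, mode="same")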

  13. Spatial configuration and distribution of forest patches in Champaign County, Illinois: 1940 to 1993

    Treesearch

    J. Danilo Chinea

    1997-01-01

    Spatial configuration and distribution of landscape elements have implications for the dynamics of forest ecosystems, and, therefore, for the management of these resources. The forest cover of Champaign County, in east-central Illinois, was mapped from 1940 and 1993 aerial photography and entered in a geographical information system database. In 1940, 208 forest...

  14. Data Rods: High Speed, Time-Series Analysis of Massive Cryospheric Data Sets Using Object-Oriented Database Methods

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Gallaher, D. W.; Grant, G.; Lv, Q.

    2011-12-01

    Change over time is the central driver of climate change detection. The goal is to diagnose the underlying causes and make projections into the future. In an effort to optimize this process we have developed the Data Rod model, an object-oriented approach that provides the ability to query grid-cell changes and their relationships to neighboring grid cells through time. The time series data are organized in time-centric structures called "data rods." A single data rod can be pictured as the multi-spectral data history at one grid cell: a vertical column of data through time. This resolves the long-standing problem of managing time-series data and opens new possibilities for temporal data analysis. This structure enables rapid time-centric analysis at any grid cell across multiple sensors and satellite platforms. Collections of data rods can be spatially and temporally filtered, statistically analyzed, and aggregated for use with pattern matching algorithms. Likewise, individual image pixels can be extracted to generate multi-spectral imagery at any spatial and temporal location. The Data Rods project has created a series of prototype databases to store and analyze massive datasets containing multi-modality remote sensing data. Using object-oriented technology, this method overcomes the operational limitations of traditional relational databases. To demonstrate the speed and efficiency of time-centric analysis using the Data Rods model, we have developed a sea ice detection algorithm. This application determines the concentration of sea ice in a small spatial region across a long temporal window. If performed using traditional analytical techniques, this task would typically require extensive data downloads and spatial filtering. Using Data Rods databases, the exact spatio-temporal data set is immediately available: no extraneous data are downloaded, and all selected data querying occurs transparently on the server side. Moreover, fundamental statistical calculations such as running averages are easily implemented against the time-centric columns of data.
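
    The data rod concept (the full time series of one grid cell stored as a single column) and a running average over it can be sketched with pandas; the brightness-temperature values, the column name, and the 185 K threshold are synthetic illustrations, not the project's actual detection rule.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(8)
      dates = pd.date_range("2005-01-01", periods=365, freq="D")
      # One data rod: the multi-date history at a single grid cell.
      rod = pd.Series(200 + 30 * np.sin(2 * np.pi * np.arange(365) / 365)
                      + rng.normal(0, 3, 365), index=dates, name="Tb_cell_1024")

      running_mean = rod.rolling(window=30, center=True).mean()
      # A crude sea-ice-style flag: sustained excursion below a threshold.
      flagged_days = (running_mean < 185).sum()
      print(int(flagged_days))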

  15. Verifying the geographic origin of mahogany (Swietenia macrophylla King) with DNA-fingerprints.

    PubMed

    Degen, B; Ward, S E; Lemes, M R; Navarro, C; Cavers, S; Sebbenn, A M

    2013-01-01

    Illegal logging is one of the main causes of ongoing worldwide deforestation and needs to be eradicated. The trade in illegal timber and wood products creates market disadvantages for products from sustainable forestry. Although various measures have been established to counter illegal logging and the subsequent trade, there is a lack of practical mechanisms for identifying the origin of timber and wood products. In this study, six nuclear microsatellites were used to generate DNA fingerprints for a genetic reference database characterising the populations of origin of a large set of mahogany (Swietenia macrophylla King, Meliaceae) samples. For the database, leaves and/or cambium from 1971 mahogany trees sampled in 31 stands from Mexico to Bolivia were genotyped. A total of 145 different alleles were found, showing strong genetic differentiation (δ(Gregorious)=0.52, F(ST)=0.18, G(ST(Hedrick))=0.65) and clear correlation between genetic and spatial distances among stands (r=0.82, P<0.05). We used the genetic reference database and Bayesian assignment testing to determine the geographic origins of two sets of mahogany wood samples, based on their multilocus genotypes. In both cases the wood samples were assigned to the correct country of origin. We discuss the overall applicability of this methodology to tropical timber trading. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic, real-time determination of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.
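
    The speed-up described above rests on the fact that, once the response of each fault patch has been precomputed and stored, a synthetic seismogram for any candidate slip model is just a linear combination of those stored responses. The sketch below (Python; sizes and values are invented, and the full strain-tensor bookkeeping of a real SGT database is omitted) shows that reduction together with a plain least-squares slip inversion.

      import numpy as np

      rng = np.random.default_rng(0)
      n_patches, n_samples = 12, 600          # fault subfaults and time samples (assumed sizes)
      # Stand-in for the precomputed database: response at one station to unit slip on each patch.
      green = rng.standard_normal((n_patches, n_samples)) * 1e-3

      def synthetic(slip, green):
          """Synthetic seismogram as a linear combination of precomputed patch responses."""
          return slip @ green                 # shape (n_samples,)

      observed = synthetic(np.linspace(0.2, 1.0, n_patches), green)  # pretend "data"

      # Linear least-squares slip inversion (no positivity or smoothing constraints here).
      slip_est, *_ = np.linalg.lstsq(green.T, observed, rcond=None)
      print(np.round(slip_est, 2))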

  17. Dione's resurfacing history as determined from a global impact crater database

    NASA Astrophysics Data System (ADS)

    Kirchoff, Michelle R.; Schenk, Paul

    2015-08-01

    Saturn's moon Dione has an interesting and unique resurfacing history recorded by the impact craters on its surface. In order to further resolve this history, we compile a crater database that is nearly global for diameters (D) equal to and larger than 4 km using standard techniques and Cassini Imaging Science Subsystem images. From this database, spatial crater density maps for different diameter ranges are generated. These maps, along with the observed surface morphology, have been used to define seven terrain units for Dione, including refinement of the smooth and "wispy" (or faulted) units from Voyager observations. Analysis of the terrains' crater size-frequency distributions (SFDs) indicates that: (1) removal of D ≈ 4-50 km craters in the "wispy" terrain was most likely by the formation of D ≳ 50 km craters, not faulting, and likely occurred over a couple of billion years; (2) resurfacing of the smooth plains was most likely by cryovolcanism at ∼2 Ga; (3) most of Dione's largest craters (D ⩾ 100 km), including Evander (D = 350 km), may have formed quite recently (<2 Ga), but are still relaxed, indicating Dione has been thermally active for at least half its history; and (4) the variation in crater SFDs at D ≈ 4-15 km is plausibly due to different levels of minor resurfacing (mostly subsequent large impacts) within each terrain.
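
    For readers unfamiliar with crater statistics, the sketch below (Python; the diameters and counting area are invented) shows how a cumulative size-frequency distribution is assembled from a crater database: for each diameter threshold, count the craters at least that large and normalize by the terrain area.

      import numpy as np

      # Crater diameters (km) for one terrain unit and that unit's area (km^2); illustrative values.
      diameters = np.array([4.2, 5.1, 6.0, 7.8, 9.5, 12.0, 16.0, 22.0, 35.0, 60.0, 120.0])
      area_km2 = 4.0e5

      # Cumulative SFD: number density of craters with D >= each threshold.
      thresholds = np.array([4, 8, 16, 32, 64, 128])
      cumulative_density = np.array([(diameters >= d).sum() for d in thresholds]) / area_km2

      for d, n in zip(thresholds, cumulative_density):
          print(f"N(>= {d:3d} km) = {n:.2e} craters / km^2")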

  18. Introducing GFWED: The Global Fire Weather Database

    NASA Technical Reports Server (NTRS)

    Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.; hide

    2015-01-01

    The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between the gridded and station-based calculations tended to be weakest at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, identifying large-scale atmosphere-ocean controls on fire weather, and calibrating FWI-based fire prediction models.

  19. Development of a land-cover characteristics database for the conterminous U.S.

    USGS Publications Warehouse

    Loveland, Thomas R.; Merchant, J.W.; Ohlen, D.O.; Brown, Jesslyn F.

    1991-01-01

    Information regarding the characteristics and spatial distribution of the Earth's land cover is critical to global environmental research. A prototype land-cover database for the conterminous United States designed for use in a variety of global modelling, monitoring, mapping, and analytical endeavors has been created. The resultant database contains multiple layers, including the source AVHRR data, the ancillary data layers, the land-cover regions defined by the research, and translation tables linking the regions to other land classification schema (for example, UNESCO, USGS Anderson System). The land-cover characteristics database can be analyzed, transformed, or aggregated by users to meet a broad spectrum of requirements. -from Authors

  20. Geologic map and map database of the Palo Alto 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Brabb, E.E.; Jones, D.L.; Graymer, R.W.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (pamf.ps, pamf.pdf, pamf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  1. Geologic map and map database of western Sonoma, northernmost Marin, and southernmost Mendocino counties, California

    USGS Publications Warehouse

    Blake, M.C.; Graymer, R.W.; Stamski, R.E.

    2002-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (wsomf.ps, wsomf.pdf, wsomf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  2. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database (NPD-068)

    DOE Data Explorer

    Brown, Sandra [University of Illinois, Urbana, Illinois (USA); Iverson, Louis R. [University of Illinois, Urbana, Illinois (USA); Prasad, Anantha [University of Illinois, Urbana, Illinois (USA); Beaty, Tammy W. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Olsen, Lisa M. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Cushman, Robert M. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Brenkert, Antoinette L. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA)

    2001-03-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam.

  3. A 30-meter spatial database for the nation's forests

    Treesearch

    Raymond L. Czaplewski

    2002-01-01

    The FIA vision for remote sensing originated in 1992 with the Blue Ribbon Panel on FIA, and it has since evolved into an ambitious performance target for 2003. FIA is joining a consortium of Federal agencies to map the Nation's land cover. FIA field data will help produce a seamless, standardized, national geospatial database for forests at the scale of 30-m...

  4. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species

    Treesearch

    Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey

    2017-01-01

    The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...

  5. MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data

    DTIC Science & Technology

    1983-06-01

    segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid...

  6. VEMAP phase 2 bioclimatic database. I. Gridded historical (20th century) climate for modeling ecosystem dynamics across the conterminous USA

    Treesearch

    Timothy G.F. Kittel; Nan. A. Rosenbloom; J.A. Royle; C. Daly; W.P. Gibson; H.H. Fisher; P. Thornton; D.N. Yates; S. Aulenbach; C. Kaufman; R. McKeown; Dominque Bachelet; David S. Schimel

    2004-01-01

    Analysis and simulation of biospheric responses to historical forcing require surface climate data that capture those aspects of climate that control ecological processes, including key spatial gradients and modes of temporal variability. We developed a multivariate, gridded historical climate dataset for the conterminous USA as a common input database for the...

  7. MetPetDB: A database for metamorphic geochemistry

    NASA Astrophysics Data System (ADS)

    Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather

    2009-12-01

    We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data is owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data is being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.
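
    A minimal sketch of the kind of sample/subsample/analysis hierarchy described above (Python with sqlite3 so it is self-contained; this is not MetPetDB's actual schema, and all table, column and value names are invented), including per-analysis image coordinates to preserve textural context and a published/public/private visibility flag used as a search delimiter.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE sample (
          id INTEGER PRIMARY KEY,
          name TEXT NOT NULL,
          latitude REAL, longitude REAL,            -- where the rock was collected
          visibility TEXT CHECK (visibility IN ('published', 'public', 'private'))
      );
      CREATE TABLE subsample (
          id INTEGER PRIMARY KEY,
          sample_id INTEGER REFERENCES sample(id),
          label TEXT                                -- e.g. a thin section
      );
      CREATE TABLE chemical_analysis (
          id INTEGER PRIMARY KEY,
          subsample_id INTEGER REFERENCES subsample(id),
          mineral TEXT, oxide TEXT, wt_percent REAL,
          x_um REAL, y_um REAL                      -- position on the image: the textural setting
      );
      """)

      conn.execute("INSERT INTO sample VALUES (1, 'GAR-01', 44.1, -71.3, 'public')")
      conn.execute("INSERT INTO subsample VALUES (1, 1, 'thin section A')")
      conn.execute("INSERT INTO chemical_analysis VALUES (1, 1, 'garnet', 'MgO', 3.2, 120.5, 88.0)")

      # Search-delimiter example: all public/published MgO analyses with their sample names.
      rows = conn.execute("""
          SELECT s.name, a.mineral, a.wt_percent
          FROM chemical_analysis a
          JOIN subsample ss ON a.subsample_id = ss.id
          JOIN sample s ON ss.sample_id = s.id
          WHERE s.visibility IN ('published', 'public') AND a.oxide = 'MgO'
      """).fetchall()
      print(rows)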

  8. Spatial digital database for the geologic map of the east part of the Pullman 1° x 2° quadrangle, Idaho

    USGS Publications Warehouse

    Rember, William C.; Bennett, Earl H.

    2001-01-01

    The paper geologic map of the east part of the Pullman 1° x 2° quadrangle, Idaho (Rember and Bennett, 1979) was scanned and initially attributed by Optronics Specialty Co., Inc. (Northridge, CA) and remitted to the U.S. Geological Survey for further attribution and publication of the geospatial digital files. The resulting digital geologic map GIS can be queried in many ways to produce a variety of geologic maps. This digital geospatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information in a geographic information system (GIS) for use in spatial analysis. Digital base map data files (topography, roads, towns, rivers and lakes, and others) are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:250,000 (for example, 1:100,000 or 1:24,000). The digital geologic map graphics and plot files (pull250k.gra/.hp/.eps) that are provided in the digital package are representations of the digital database.

  9. Characterizing spatiotemporal dynamics of methane emissions from rice paddies in Northeast China from 1990 to 2010.

    PubMed

    Zhang, Yuan; Su, Shiliang; Zhang, Feng; Shi, Runhe; Gao, Wei

    2012-01-01

    Rice paddies have been identified as a major methane (CH4) source induced by human activities. As a major rice production region in Northern China, the rice paddies in the Three-Rivers Plain (TRP) have experienced large changes in spatial distribution over the last 20 years (from 1990 to 2010). Consequently, accurate estimation and characterization of the spatiotemporal patterns of CH4 emissions from rice paddies has become a pressing issue for assessing the environmental impacts of agroecosystems and for developing GHG mitigation strategies at regional or global levels. An approach integrating remote sensing mapping with a process-based biogeochemistry model, Denitrification and Decomposition (DNDC), was utilized to quantify the regional CH4 emissions from the entire rice paddy area in the study region. Based on site validation and sensitivity tests, geographic information system (GIS) databases with spatially differentiated input information were constructed to drive DNDC upscaling for its regional simulations. Results showed that (1) the large change in total methane emission that occurred in 2000 and 2010 compared to 1990 is attributed to the explosive growth in the amount of rice planted; (2) the spatial variations in CH4 fluxes in this study are mainly attributed to the most sensitive factor, soil properties, i.e., soil clay fraction and soil organic carbon (SOC) content; and (3) the warming climate could enhance CH4 emission in the cool paddies. The study concluded that the introduction of remote sensing analysis into DNDC upscaling has a great capability for timely quantification of methane emissions from cool paddies undergoing fast land use and cover changes. It also confirmed that northern wetland agroecosystems make substantial contributions to the global greenhouse gas inventory.

  10. Characterizing Spatiotemporal Dynamics of Methane Emissions from Rice Paddies in Northeast China from 1990 to 2010

    PubMed Central

    Zhang, Yuan; Su, Shiliang; Zhang, Feng; Shi, Runhe; Gao, Wei

    2012-01-01

    Background Rice paddies have been identified as a major methane (CH4) source induced by human activities. As a major rice production region in Northern China, the rice paddies in the Three-Rivers Plain (TRP) have experienced large changes in spatial distribution over the last 20 years (from 1990 to 2010). Consequently, accurate estimation and characterization of the spatiotemporal patterns of CH4 emissions from rice paddies has become a pressing issue for assessing the environmental impacts of agroecosystems and for developing GHG mitigation strategies at regional or global levels. Methodology/Principal Findings An approach integrating remote sensing mapping with a process-based biogeochemistry model, Denitrification and Decomposition (DNDC), was utilized to quantify the regional CH4 emissions from the entire rice paddy area in the study region. Based on site validation and sensitivity tests, geographic information system (GIS) databases with spatially differentiated input information were constructed to drive DNDC upscaling for its regional simulations. Results showed that (1) the large change in total methane emission that occurred in 2000 and 2010 compared to 1990 is attributed to the explosive growth in the amount of rice planted; (2) the spatial variations in CH4 fluxes in this study are mainly attributed to the most sensitive factor, soil properties, i.e., soil clay fraction and soil organic carbon (SOC) content; and (3) the warming climate could enhance CH4 emission in the cool paddies. Conclusions/Significance The study concluded that the introduction of remote sensing analysis into DNDC upscaling has a great capability for timely quantification of methane emissions from cool paddies undergoing fast land use and cover changes. It also confirmed that northern wetland agroecosystems make substantial contributions to the global greenhouse gas inventory. PMID:22235268

  11. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor-filler database size-as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. NLCD 2011 database

    EPA Pesticide Factsheets

    National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides - for the first time - the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme that has been applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa 2011 Landsat satellite data. This dataset is associated with the following publication: Homer, C., J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, J. Coulston, N. Herold, J. Wickham, and K. Megown. Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information. PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING. American Society for Photogrammetry and Remote Sensing, Bethesda, MD, USA, 81(0): 345-354, (2015).

  13. Thermally distinct ejecta blankets from Martian craters

    NASA Astrophysics Data System (ADS)

    Betts, B. H.; Murray, B. C.

    1992-09-01

    The study of ejecta blankets on Mars gives information about the Martian surface, subsurface, geologic history, atmospheric history, and impact process. In Feb. and Mar. 1989, the Termoskan instrument on board the Phobos 1988 spacecraft of the USSR acquired the highest spatial resolution thermal data ever obtained for Mars, ranging in the resolution from 300 meters to 3 km per pixel. Termoskan simultaneously obtained broad band visible channel data. The data covers a large portion of the equatorial region from 30 degrees S latitude to 6 degrees N latitude. Utilizing the data set we have discovered tens of craters with thermal infrared distinct ejecta (TIDE) in the equatorial regions of Mars. In order to look for correlations within the data, we have compiled a database which currently consists of 110 craters in an area rich in TIDE's and geologic unit variations. For each crater, we include morphologic information from Barlow's Catalog of Large Martian Impact Craters in addition to geographic, geologic, and physical information and Termoskan thermal infrared and visible data.

  14. Thermally distinct ejecta blankets from Martian craters

    NASA Technical Reports Server (NTRS)

    Betts, B. H.; Murray, B. C.

    1992-01-01

    The study of ejecta blankets on Mars gives information about the Martian surface, subsurface, geologic history, atmospheric history, and impact process. In Feb. and Mar. 1989, the Termoskan instrument on board the Phobos 1988 spacecraft of the USSR acquired the highest spatial resolution thermal data ever obtained for Mars, ranging in the resolution from 300 meters to 3 km per pixel. Termoskan simultaneously obtained broad band visible channel data. The data covers a large portion of the equatorial region from 30 degrees S latitude to 6 degrees N latitude. Utilizing the data set we have discovered tens of craters with thermal infrared distinct ejecta (TIDE) in the equatorial regions of Mars. In order to look for correlations within the data, we have compiled a database which currently consists of 110 craters in an area rich in TIDE's and geologic unit variations. For each crater, we include morphologic information from Barlow's Catalog of Large Martian Impact Craters in addition to geographic, geologic, and physical information and Termoskan thermal infrared and visible data.

  15. Meta-Storms: efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data.

    PubMed

    Su, Xiaoquan; Xu, Jian; Ning, Kang

    2012-10-01

    It has long intrigued scientists to effectively compare different microbial communities (also referred to as 'metagenomic samples' here) at a large scale: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest. Any metagenomic sample could then be searched against this database to find the most similar metagenomic sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on these datasets. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it achieves accuracies similar to the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. ningkang@qibebt.ac.cn Supplementary data are available at Bioinformatics online.
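
    A toy illustration of a taxonomy-aware similarity between two community profiles (Python; this is not the actual Meta-Storms scoring function, and the taxa, abundances and level weights are invented): samples are collapsed level by level down a simple taxonomy, and the shared abundance at each level is summed with increasing weight toward finer levels.

      # Each sample: relative abundances keyed by a taxonomic path (phylum, genus, species).
      sample_a = {("Firmicutes", "Lactobacillus", "L. reuteri"): 0.4,
                  ("Bacteroidetes", "Bacteroides", "B. fragilis"): 0.6}
      sample_b = {("Firmicutes", "Lactobacillus", "L. gasseri"): 0.5,
                  ("Bacteroidetes", "Bacteroides", "B. fragilis"): 0.5}

      def collapse(sample, level):
          """Sum abundances up to the given taxonomy depth (1 = phylum, 3 = species)."""
          out = {}
          for path, ab in sample.items():
              out[path[:level]] = out.get(path[:level], 0.0) + ab
          return out

      def similarity(a, b, weights=(0.2, 0.3, 0.5)):   # invented per-level weights
          """Weighted shared abundance across taxonomy levels; 1.0 means identical profiles."""
          score = 0.0
          for level, w in enumerate(weights, start=1):
              ca, cb = collapse(a, level), collapse(b, level)
              shared = sum(min(ca[k], cb.get(k, 0.0)) for k in ca)
              score += w * shared
          return score

      print(round(similarity(sample_a, sample_b), 3))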

  16. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature related to spatial interpolation between 1982 and 2017, as indexed in the Web of Science core database, is used as the data source, and a visualization analysis is carried out based on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development and rapid development. Eleven clustering groups interact with one another; the main themes are the convergence of spatial interpolation theory research, the practical application and case study of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research framework; it is strongly interdisciplinary and widely used in many fields.

  17. Auditory spectral versus spatial temporal order judgment: Threshold distribution analysis.

    PubMed

    Fostick, Leah; Babkoff, Harvey

    2017-05-01

    Some researchers have suggested that one central mechanism is responsible for temporal order judgments (TOJ), within and across sensory channels. This suggestion is supported by findings of similar TOJ thresholds in same-modality and cross-modality TOJ tasks. In the present study, we challenge this idea by analyzing and comparing the threshold distributions of the spectral and spatial TOJ tasks. In spectral TOJ, the tones differ in their frequency ("high" and "low") and are delivered either binaurally or monaurally. In spatial (or dichotic) TOJ, the two tones are identical but are presented asynchronously to the two ears and thus differ with respect to which ear received the first tone and which ear received the second tone ("left first"/"right first"). Although both tasks are regarded as measures of auditory temporal processing, a review of data published in the literature suggests that they trigger different patterns of response. The aim of the current study was to systematically examine spectral and spatial TOJ threshold distributions across a large number of studies. Data are based on 388 participants in 13 spectral TOJ experiments, and 222 participants in 9 spatial TOJ experiments. None of the spatial TOJ distributions deviated significantly from the Gaussian, while all of the spectral TOJ threshold distributions were skewed to the right, with more than half of the participants accurately judging temporal order at very short interstimulus intervals (ISI). The data do not support the hypothesis that one central mechanism is responsible for all temporal order judgments. We suggest that different perceptual strategies are employed when performing spectral TOJ than when performing spatial TOJ. We posit that the spectral TOJ paradigm may provide the opportunity for two-tone masking or temporal integration, which is sensitive to the order of the tones and thus provides perceptual cues that may be used to judge temporal order. This possibility should be considered when interpreting spectral TOJ data, especially in the context of comparing different populations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. RADSS: an integration of GIS, spatial statistics, and network service for regional data mining

    NASA Astrophysics Data System (ADS)

    Hu, Haitang; Bao, Shuming; Lin, Hui; Zhu, Qing

    2005-10-01

    Regional data mining, which aims at the discovery of knowledge about spatial patterns, clusters or associations between regions, has wide applications today in the social sciences, such as sociology, economics, epidemiology and criminology. Many applications in the regional or other social sciences are more concerned with spatial relationships than with precise geographical location. Spatial statistics build on the spatial continuity rule derived from Tobler's first law of geography (observations at two sites tend to be more similar to each other if the sites are close together than if far apart) and, as an important means for spatial data mining, allow users to extract interesting and useful information such as spatial pattern, spatial structure, spatial association, spatial outliers and spatial interaction from vast amounts of spatial and non-spatial data. Therefore, by integrating spatial statistical methods, geographical information systems become more powerful for gaining insight into the nature of the spatial structure of regional systems and help researchers to be more careful when selecting appropriate models. However, the lack of such tools holds back the application of spatial data analysis techniques and the development of new methods and models (e.g., spatio-temporal models). Herein, we make an attempt to develop such integrated software and apply it to the analysis of the Poyang Lake Basin as a complex regional system. This paper presents a framework for integrating GIS, spatial statistics and network services in regional data mining, as well as its implementation. After discussing the spatial statistics methods involved in regional complex system analysis, we introduce RADSS (Regional Analysis and Decision Support System), our new regional data mining tool, which integrates GIS, spatial statistics and network services. RADSS includes functions for spatial data visualization, exploratory spatial data analysis, and spatial statistics. The tool also includes some fundamental spatial and non-spatial databases on regional population and environment, which can be updated from external databases via CD or network. Using this data mining and exploratory analytical tool, users can easily and quickly analyse the huge amount of interrelated regional data and better understand the spatial patterns and trends of regional development, so as to make credible and scientific decisions. Moreover, it can be used as an educational tool for spatial data analysis and environmental studies. In this paper, we also present a case study on the Poyang Lake Basin as an application of the tool and of spatial data mining in complex environmental studies. Finally, several concluding remarks are discussed.
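
    As an example of the kind of spatial statistic such a toolbox exposes, the sketch below computes global Moran's I directly from its definition for a handful of regions (Python; the attribute values and the contiguity weights are invented).

      import numpy as np

      # Attribute values for 5 regions and a row-standardised contiguity weight matrix (invented).
      x = np.array([12.0, 14.0, 10.0, 30.0, 28.0])
      W = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0],
                    [1, 1, 0, 0, 0],
                    [0, 1, 0, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)
      W = W / W.sum(axis=1, keepdims=True)        # row-standardise

      def morans_i(x, W):
          n = x.size
          z = x - x.mean()
          s0 = W.sum()
          return (n / s0) * (z @ W @ z) / (z @ z)

      print(round(morans_i(x, W), 3))             # > 0 suggests positive spatial autocorrelation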

  19. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
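
    A minimal sketch of the workflow this unit describes, using sqlite3 rather than the unit's own seqdb_demo/search_demo databases (all table names, accessions and values are invented): similarity-search hits are loaded into a relational table, and a join against a taxonomy table answers a taxon-restricted question.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT);
      CREATE TABLE hit (query TEXT, subject TEXT REFERENCES protein(acc),
                        evalue REAL, identity REAL);
      """)
      conn.executemany("INSERT INTO protein VALUES (?, ?)", [
          ("eco_0001", "Escherichia coli"), ("hsa_0001", "Homo sapiens"),
          ("sce_0001", "Saccharomyces cerevisiae")])
      conn.executemany("INSERT INTO hit VALUES (?, ?, ?, ?)", [
          ("queryA", "eco_0001", 1e-50, 98.0),
          ("queryA", "hsa_0001", 1e-12, 41.0),
          ("queryA", "sce_0001", 1e-8, 35.0)])

      # Keep only significant hits outside E. coli, e.g. to summarise cross-kingdom homologs.
      rows = conn.execute("""
          SELECT h.query, h.subject, p.taxon, h.evalue
          FROM hit h JOIN protein p ON h.subject = p.acc
          WHERE h.evalue < 1e-10 AND p.taxon <> 'Escherichia coli'
          ORDER BY h.evalue
      """).fetchall()
      print(rows)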

  20. MRNIDX - Marine Data Index: Database Description, Operation, Retrieval, and Display

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1982-01-01

    A database referencing the location and content of data stored on magnetic medium was designed to assist in the indexing of time-series and spatially dependent marine geophysical data collected or processed by the U. S. Geological Survey. The database was designed and created for input to the Geologic Retrieval and Synopsis Program (GRASP) to allow selective retrievals of information pertaining to location of data, data format, cruise, geographical bounds and collection dates of data. This information is then used to locate the stored data for administrative purposes or further processing. Database utilization is divided into three distinct operations. The first is the inventorying of the data and the updating of the database, the second is the retrieval of information from the database, and the third is the graphic display of the geographical boundaries to which the retrieved information pertains.

  1. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    NASA Astrophysics Data System (ADS)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and from journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.

  2. Toward an Open-Access Global Database for Mapping, Control, and Surveillance of Neglected Tropical Diseases

    PubMed Central

    Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina; Stensgaard, Anna-Sofie; Laserna de Himpsl, Maiti; Ziegelbauer, Kathrin; Laizer, Nassor; Camenzind, Lukas; Di Pasquale, Aurelio; Ekpo, Uwem F.; Simoonga, Christopher; Mushinge, Gabriel; Saarnak, Christopher F. L.; Utzinger, Jürg; Kristensen, Thomas K.; Vounatsou, Penelope

    2011-01-01

    Background After many years of general neglect, interest has grown and efforts came under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are a key feature to target control interventions, and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open-access to the available survey data that is constantly updated and can be utilized by researchers and disease control managers to support other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. Methodology With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and ‘grey literature’), contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques). The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. Principal Findings At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries available under http://www.gntd.org. Currently, the database is expanded to a global repository, including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. Conclusions An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time. With an initial focus on schistosomiasis in Africa, we demonstrate the proof-of-concept that the establishment and running of a global NTD database is feasible and should be expanded without delay. PMID:22180793

  3. [Design and implementation of Geographical Information System on prevention and control of cholera].

    PubMed

    Li, Xiu-jun; Fang, Li-qun; Wang, Duo-chun; Wang, Lu-xi; Li, Ya-pin; Li, Yan-li; Yang, Hong; Kan, Biao; Cao, Wu-chun

    2012-04-01

    To build a Geographical Information System (GIS) database for cholera prevention and control programs, and to use its management, analysis and display functions to show the spatial attributes of cholera. Data from the case reporting system regarding diarrhoea, Vibrio cholerae, serotypes of Vibrio cholerae at the surveillance spots and in seafood, as well as surveillance data on the ambient environment and climate, were collected. All the data were imported into the system database to show the incidence of Vibrio cholerae in different provinces, regions and counties, and to support spatial analysis through the spatial analysis functions of GIS. The epidemic trends of cholera, its seasonal characteristics and the variation of Vibrio cholerae over time were better understood. Information on the hotspots, regions and timing of epidemics was collected, which is helpful for predicting the risk of Vibrio cholerae incidence. The software can predict and simulate spatio-temporal risks, so as to provide guidance for the prevention and control of the disease.

  4. Using a landslide inventory from online news to evaluate the performance of warning models for rainfall-induced landslides in Italy

    NASA Astrophysics Data System (ADS)

    Pecoraro, Gaetano; Calvello, Michele

    2017-04-01

    In Italy rainfall-induced landslides pose a significant and widespread hazard, resulting in a large number of casualties and enormous economic damage. Mitigation of such a diffuse risk cannot be attained with structural measures only. With respect to the risk to life, early warning systems represent a viable and useful tool for landslide risk mitigation over wide areas. Inventories of rainfall-induced landslides are critical to support investigations of where and when landslides have happened and may occur in the future, i.e. to establish reliable correlations between rainfall characteristics and landslide occurrences. In this work a parametric study has been conducted to evaluate the performance of correlation models between rainfall and landslides over the Italian territory using the "FraneItalia" database, an inventory of landslides retrieved from online Italian journalistic news. The information reported for each record of this database always includes the site of occurrence of the landslides, the date of occurrence, and the source of the news. Multiple landslides occurring on the same date, within the same province or region, are inventoried together in one single record of the database, which in this case also reports the number of landslides of the event. Each record of the database may also include, if the related information is available: hour of occurrence; typology, volume and material of the landslide; activity phase; and effects on people, structures, infrastructure, cars or other elements. The database currently contains six complete years of data (2010-2015), including more than 4000 landslide reports, most of them triggered by rainfall. For the aim of this study, different rainfall-landslide correlation models have been tested by analysing the reported landslides, within all the 144 zones identified by the national civil protection authority for weather-related warnings in Italy, in relation to satellite-based precipitation estimates from the Global Precipitation Measurement (GPM) NASA mission. This remote sensing database contains gridded precipitation and precipitation-error estimates, with a half-hour temporal resolution and a 0.10-degree spatial resolution, covering most of the Earth starting from 2014. It is well known that satellite estimates of rainfall have some limitations in resolving specific rainfall features (e.g., shallow orographic events and short-duration, high-intensity events), yet the temporal and spatial accuracy of the GPM data may be considered adequate in relation to the scale of the analysis and the size of the warning zones used for this study. The results of the parametric analysis conducted herein, although providing some indications on the most relevant rainfall conditions leading to widespread landsliding over a warning zone, must be considered preliminary, as they show a very heterogeneous behaviour of the employed rainfall-based warning models over the Italian territory. Nevertheless, they clearly show the strong potential of the continuous multi-year landslide records available from the "FraneItalia" database as an important source of information for evaluating the performance of warning models at regional scale throughout Italy.
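
    One simple way to score such rainfall-based warning models against a landslide inventory, sketched below with invented numbers (Python; this is not the parametric procedure used in the study), is a 2x2 contingency table per warning zone: alerts issued when daily rainfall exceeds a threshold are compared with reported landslides to give a hit rate and a false alarm ratio.

      # One week of (rainfall_mm, landslide_reported) pairs for a single warning zone (invented).
      records = [(5, False), (42, False), (80, True), (12, False),
                 (95, True), (60, False), (8, False)]
      threshold_mm = 50                       # assumed daily-rainfall alert threshold

      hits = misses = false_alarms = correct_negatives = 0
      for rain, landslide in records:
          alert = rain >= threshold_mm
          if alert and landslide:
              hits += 1
          elif not alert and landslide:
              misses += 1
          elif alert and not landslide:
              false_alarms += 1
          else:
              correct_negatives += 1

      hit_rate = hits / (hits + misses)
      false_alarm_ratio = false_alarms / (hits + false_alarms)
      print(f"hit rate = {hit_rate:.2f}, false alarm ratio = {false_alarm_ratio:.2f}")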

  5. Colorado Late Cenozoic Fault and Fold Database and Internet Map Server: User-friendly technology for complex information

    USGS Publications Warehouse

    Morgan, K.S.; Pattyn, G.J.; Morgan, M.L.

    2005-01-01

    Internet mapping applications for geologic data allow simultaneous data delivery and collection, enabling quick data modification while efficiently supplying the end user with information. Utilizing Web-based technologies, the Colorado Geological Survey's Colorado Late Cenozoic Fault and Fold Database was transformed from a monothematic, nonspatial Microsoft Access database into a complex information set incorporating multiple data sources. The resulting user-friendly format supports easy analysis and browsing. The core of the application is the Microsoft Access database, which contains information compiled from available literature about faults and folds that are known or suspected to have moved during the late Cenozoic. The database contains nonspatial fields such as structure type, age, and rate of movement. Geographic locations of the fault and fold traces were compiled from previous studies at 1:250,000 scale to form a spatial database containing information such as length and strike. Integration of the two databases allowed both spatial and nonspatial information to be presented on the Internet as a single dataset (http://geosurvey.state.co.us/pubs/ceno/). The user-friendly interface enables users to view and query the data in an integrated manner, thus providing multiple ways to locate desired information. Retaining the digital data format also allows continuous data updating and quick delivery of newly acquired information. This dataset is a valuable resource to anyone interested in earthquake hazards and the activity of faults and folds in Colorado. Additional geologic hazard layers and imagery may aid in decision support and hazard evaluation. The up-to-date and customizable maps are invaluable tools for researchers or the public.

  6. Environmental Justice and the Spatial Distribution of Outdoor Recreation sites: an Applications of Geographic Information Systems

    Treesearch

    Michael A. Tarrant; H. Ken Cordell

    1999-01-01

    This study examines the spatial distribution of outdoor recreation sites and their proximity to census block groups (CBGs), in order to determine potential socio-economic inequities. It is framed within the context of environmental justice. Information from the Southern Appalachian Assessment database was applied to a case study of the Chattahoochee National Forest in...

  7. Definition of spatial patterns of bark beetle Ips typographus (L.) outbreak spreading in Tatra Mountains (Central Europe), using GIS

    Treesearch

    Rastislav Jakus; Wojciech Grodzki; Marek Jezik; Marcin Jachym

    2003-01-01

    The spread of bark beetle outbreaks in the Tatra Mountains was explored by using both terrestrial and remote sensing techniques. Both approaches have proven to be useful for studying spatial patterns of bark beetle population dynamics. The terrestrial methods were applied on existing forestry databases. Vegetation change analysis (image differentiation), digital...

  8. Temporal variation in synchrony among chinook salmon (Oncorhynchus tshawytscha) redd counts from a wilderness area in central Idaho

    Treesearch

    D. J. Isaak; R. F. Thurow; B. E. Rieman; J. B. Dunham

    2003-01-01

    Metapopulation dynamics have emerged as a key consideration in conservation planning for salmonid fishes. Implicit to many models of spatially structured populations is a degree of synchrony, or correlation, among populations. We used a spatially and temporally extensive database of chinook salmon (Oncorhynchus tshawytscha) redd counts from a wilderness area in central...

  9. Spatial Databases

    DTIC Science & Technology

    2007-09-19

    extended object relations such as boundary, interior, open, closed, within, connected, and overlaps, which are invariant under elastic deformation...is required in a geo-spatial semantic web is challenging because the defining properties of geographic entities are very closely related to space. In...Objects under Primitive will be open (i.e., they will not contain their boundary points) and the objects under Complex will be closed. In addition to

  10. Identifying spatially similar gene expression patterns in early stage fruit fly embryo images: binary feature versus invariant moment digital representations

    PubMed Central

    Gurunathan, Rajalakshmi; Van Emden, Bernard; Panchanathan, Sethuraman; Kumar, Sudhir

    2004-01-01

    Background Modern developmental biology relies heavily on the analysis of embryonic gene expression patterns. Investigators manually inspect hundreds or thousands of expression patterns to identify those that are spatially similar and to ultimately infer potential gene interactions. However, the rapid accumulation of gene expression pattern data over the last two decades, facilitated by high-throughput techniques, has produced a need for the development of efficient approaches for direct comparison of images, rather than their textual descriptions, to identify spatially similar expression patterns. Results The effectiveness of the Binary Feature Vector (BFV) and Invariant Moment Vector (IMV) based digital representations of the gene expression patterns in finding biologically meaningful patterns was compared for a small (226 images) and a large (1819 images) dataset. For each dataset, an ordered list of images, with respect to a query image, was generated to identify overlapping and similar gene expression patterns, in a manner comparable to what a developmental biologist might do. The results showed that the BFV representation consistently outperforms the IMV representation in finding biologically meaningful matches when spatial overlap of the gene expression pattern and the genes involved are considered. Furthermore, we explored the value of conducting image-content based searches in a dataset where individual expression components (or domains) of multi-domain expression patterns were also included separately. We found that this technique improves performance of both IMV and BFV based searches. Conclusions We conclude that the BFV representation consistently produces a more extensive and better list of biologically useful patterns than the IMV representation. The high quality of results obtained scales well as the search database becomes larger, which encourages efforts to build automated image query and retrieval systems for spatial gene expression patterns. PMID:15603586
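
    A rough illustration of the binary-feature-vector idea (Python; the grid size, threshold and the Jaccard score are assumptions, not necessarily the representation or scoring used in the paper): each image is reduced to a coarse grid of on/off expression bits, and spatial overlap between two patterns then becomes a set-similarity computation.

      import numpy as np

      def binary_feature_vector(image, threshold, grid=(8, 16)):
          """Downsample an intensity image to a coarse grid and threshold it to 0/1 bits."""
          rows, cols = grid
          h, w = image.shape
          bfv = np.zeros(grid, dtype=bool)
          for i in range(rows):
              for j in range(cols):
                  block = image[i * h // rows:(i + 1) * h // rows,
                                j * w // cols:(j + 1) * w // cols]
                  bfv[i, j] = block.mean() > threshold
          return bfv.ravel()

      def jaccard(a, b):
          return (a & b).sum() / max((a | b).sum(), 1)

      rng = np.random.default_rng(1)
      img1 = rng.random((64, 128))
      img2 = img1.copy()
      img2[:, :20] = 0.0                       # second pattern lacks the anterior domain
      print(round(jaccard(binary_feature_vector(img1, 0.5), binary_feature_vector(img2, 0.5)), 2))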

  11. A Secondary Spatial Analysis of Gun Violence near Boston Schools: a Public Health Approach.

    PubMed

    Barboza, Gia

    2018-06-01

    School neighborhood violence continues to be a major public health problem among urban students. A large body of research addresses violence at school; however, fewer studies have explored concentrations of violence in areas proximal to schools. This study aimed to quantify the concentration of shootings near schools to elucidate the place-based dynamics that may be focal points for violence prevention. Geocoded databases of shooting and school locations were used to examine locational patterns of firearm shootings and elementary, middle, and high schools in Boston, Massachusetts. Analyses utilized spatial statistics for point pattern data including distance matrix and K function methodology to quantify the degree of spatial dependence of shootings around schools. Results suggested that between 2012 and 2015, there were 678 shooting incidents in Boston; the average density was 5.1 per square kilometer. The nearest neighbor index (NNI = 0.335 km, p < .001, O = 0.95 km, E = 0.28 km) and G function analysis revealed a clustered pattern of gun shooting incidents indicative of a spatially non-random process. The mean and median distance from any school to the nearest shooting location was 0.35 and 0.33 km, respectively. A majority (56%, 74/133) of schools in Boston had at least one shooting incident within 400 m, a distance that would take about 5 min to walk if traveling by foot. The bivariate K function indicated that a significantly greater number of shootings were clustered within short distances from schools than would be expected under a null hypothesis of no spatial dependence. Implications for students attending schools in racially homogenous neighborhoods across all income levels are discussed.
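
    A minimal sketch of the distance component of such an analysis (Python; coordinates are invented and treated as planar metres, and the study's K- and G-function statistics are not reproduced): compute each school's distance to the nearest shooting and the share of schools with a shooting within 400 m.

      import numpy as np

      rng = np.random.default_rng(2)
      schools = rng.uniform(0, 10_000, size=(20, 2))     # invented planar coordinates (m)
      shootings = rng.uniform(0, 10_000, size=(150, 2))

      # Distance from every school to every shooting, then the nearest one per school.
      d = np.linalg.norm(schools[:, None, :] - shootings[None, :, :], axis=2)
      nearest = d.min(axis=1)

      print(f"median nearest-shooting distance: {np.median(nearest):.0f} m")
      print(f"share of schools with a shooting within 400 m: {np.mean(nearest <= 400):.0%}")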

  12. The market value of cultural heritage in urban areas: an application of spatial hedonic pricing

    NASA Astrophysics Data System (ADS)

    Lazrak, Faroek; Nijkamp, Peter; Rietveld, Piet; Rouwendal, Jan

    2014-01-01

    The current literature often values intangible goods like cultural heritage by applying stated preference methods. In recent years, however, the increasing availability of large databases on real estate transactions and listed prices has opened up new research possibilities and has reduced various existing barriers to applications of conventional (spatial) hedonic analysis to the real estate market. The present paper provides one of the first applications using a spatial autoregressive model to investigate the impact of cultural heritage—in particular, listed buildings and historic-cultural sites (or historic landmarks)—on the value of real estate in cities. In addition, this paper suggests a novel way of specifying the spatial weight matrix—only prices of sold houses influence current price—in identifying the spatial dependency effects between sold properties. The empirical application in the present study concerns the Dutch urban area of Zaanstad, a historic area for which over a long period of more than 20 years detailed information on individual dwellings, and their market prices are available in a GIS context. In this paper, the effect of cultural heritage is analysed in three complementary ways. First, we measure the effect of a listed building on its market price in the relevant area concerned. Secondly, we investigate the value that listed heritage has on nearby property. And finally, we estimate the effect of historic-cultural sites on real estate prices. We find that, to purchase a listed building, buyers are willing to pay an additional 26.9 %, while surrounding houses are worth an extra 0.28 % for each additional listed building within a 50-m radius. Houses sold within a conservation area appear to gain a premium of 26.4 % which confirms the existence of a 'historic ensemble' effect.
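
    A stripped-down illustration of the hedonic setup (Python; ordinary least squares on invented data, omitting the spatial autoregressive term and the actual variables used in the paper): log price is regressed on dwelling size, a listed-building dummy and the count of listed buildings within 50 m, and the two heritage coefficients are converted to percentage premiums.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 500
      size_m2 = rng.uniform(50, 200, n)
      is_listed = rng.random(n) < 0.05
      listed_within_50m = rng.poisson(0.5, n)

      # Invented "true" effects: +25% for a listed dwelling, +0.3% per nearby listed building.
      log_price = (11.0 + 0.006 * size_m2 + 0.25 * is_listed
                   + 0.003 * listed_within_50m + rng.normal(0, 0.10, n))

      X = np.column_stack([np.ones(n), size_m2, is_listed.astype(float), listed_within_50m])
      beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
      print(f"listed-building premium    ~ {100 * (np.exp(beta[2]) - 1):.1f} %")
      print(f"per nearby listed building ~ {100 * (np.exp(beta[3]) - 1):.2f} %")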

  13. Design and deployment of a large brain-image database for clinical and nonclinical research

    NASA Astrophysics Data System (ADS)

    Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.

    2004-04-01

    An efficient database is an essential component for organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research studies. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The database has a multi-tier client-server architecture consisting of a Relational Database Management System, a Security Layer, an Application Layer and a User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to use. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on a large population of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
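
    The anonymization and metadata-indexing idea described above can be pictured with a small, self-contained sketch. The table layout, column names and hashing salt below are assumptions for illustration only; the deployed system is implemented in Java on a full relational database, not SQLite.

      import hashlib
      import sqlite3

      def anonymise(patient_id, salt="site-secret"):
          # Replace the patient identifier with a salted one-way hash before indexing.
          return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

      def index_image(conn, patient_id, pathology, modality, path):
          conn.execute(
              "INSERT INTO images (anon_id, pathology, modality, path) VALUES (?, ?, ?, ?)",
              (anonymise(patient_id), pathology, modality, path),
          )
          conn.commit()

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE images (anon_id TEXT, pathology TEXT, modality TEXT, path TEXT)")
      index_image(conn, "P001", "glioma", "MRI-T1", "/data/img_0001.dcm")
      rows = conn.execute("SELECT * FROM images WHERE pathology = ?", ("glioma",)).fetchall()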

  14. Soil cover characterization at large scale: the example of Perugia Province in central Italy

    NASA Astrophysics Data System (ADS)

    Fanelli, Giulia; Salciarini, Diana; Tamagnini, Claudio

    2015-04-01

    In recent years, physically-based models aimed at predicting the occurrence of landslides have become widespread, because landslide susceptibility maps can be essential for reducing damage and human losses. On the one hand, physically-based models analyse the problem rationally, because they mathematically describe the physical processes that actually take place; on the other hand, their diffusion is limited by the difficulty of obtaining and managing accurate data over large areas. For this reason, and also because geotechnical data in the Perugia province are partial and not regularly distributed, a data collection campaign has been started in order to build a broad physical-mechanical data set that can be used to apply any physically-based model. The collected data are derived from mechanical tests and site investigations performed to characterize the soil. The data set includes about 3000 points, and each record is characterized by the following information: coordinates, geological description, cohesion and friction angle. In addition, the records contain the results of seismic tests that provide the shear-wave velocity in the upper 30 meters of soil. The database covers the whole Perugia province and can be used to evaluate the effects of both rainfall-induced and earthquake-induced landslides. The database has been analysed in order to exclude possible outliers; starting from the full data set, 16 lithological units have been isolated, each with homogeneous geological features and the same mechanical behaviour. It is important to assess the quality of the data and how reliable they are; therefore, statistical analyses have been performed to quantify the dispersion of the data (relative and cumulative frequency), together with geostatistical analyses of the spatial correlation (the variogram). The empirical variogram is a common and useful tool in geostatistics because it quantifies the spatial correlation between data. Once the variogram has been calculated, it can be used to obtain the best estimate of a parameter at a generic point where information is missing. One of the most widely used interpolation techniques is kriging, which predicts the value of a function at a given point as a weighted average of the known values of that function at the nearest points, deriving the weights from the variogram.
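
    To make the variogram/kriging workflow concrete, here is a minimal numpy sketch of the classical empirical semivariogram and an ordinary-kriging prediction. The spherical model, its parameters and the choice of interpolated quantity (for example, the friction angle) are assumptions for illustration, not the actual processing chain used for the Perugia database.

      import numpy as np

      def empirical_variogram(coords, values, bin_edges):
          # Classical (Matheron) estimator: mean of 0.5*(z_i - z_j)^2 per distance bin.
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          sq = 0.5 * (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices(len(values), k=1)          # use each pair once
          d, sq = d[iu], sq[iu]
          gamma = np.full(len(bin_edges) - 1, np.nan)
          for k in range(len(bin_edges) - 1):
              in_bin = (d >= bin_edges[k]) & (d < bin_edges[k + 1])
              if in_bin.any():
                  gamma[k] = sq[in_bin].mean()
          return gamma

      def spherical(h, nugget, sill, a_range):
          # A common variogram model; parameters would be fitted to the empirical curve.
          h = np.asarray(h, dtype=float)
          g = nugget + (sill - nugget) * (1.5 * h / a_range - 0.5 * (h / a_range) ** 3)
          return np.where(h < a_range, g, sill)

      def ordinary_kriging(coords, values, target, model):
          # Solve the ordinary-kriging system (semivariogram matrix + Lagrange multiplier).
          n = len(values)
          G = model(np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1))
          np.fill_diagonal(G, 0.0)                        # gamma(0) = 0 by definition
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = G
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = model(np.linalg.norm(coords - target, axis=1))
          w = np.linalg.solve(A, b)[:n]                   # kriging weights
          return float(w @ values)                        # weighted average of known values

    In use, the variogram model would be bound to its fitted parameters first, for example model = lambda h: spherical(h, nugget=0.0, sill=1.0, a_range=500.0), and then passed to ordinary_kriging together with the observation coordinates and values.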

  15. Assessment of Fire Occurrence and Future Fire Potential in Arctic Alaska

    NASA Astrophysics Data System (ADS)

    French, N. H. F.; Jenkins, L. K.; Loboda, T. V.; Bourgeau-Chavez, L. L.; Whitley, M. A.

    2014-12-01

    An analysis of the occurrence of fire in Alaskan tundra was completed using the relatively complete historical record of fire for the region from 1950 to 2013. Spatial fire data for Alaskan tundra regions were obtained from the Alaska Large Fire Database for the region defined from vegetation and ecoregion maps. A detailed presentation of the fire records available for assessing the fire regime of the tundra regions of Alaska is included, as well as results evaluating fire size, seasonality, and general geographic and temporal trends. Assessment of future fire potential was determined for three future climate scenarios at four locations across the Alaskan tundra using the Canadian Forest Fire Weather Index (FWI). Canadian Earth System Model (CanESM2) weather variables were used for the historical (1850-2005) and future (2006-2100) time periods. The database includes 908 fire points and 463 fire polygons within the 482,931 km2 of Alaskan tundra. Based on the polygon database, 25,656 km2 (6,340,000 acres) have burned across the six tundra ecoregions since 1950. Approximately 87% of tundra fires start in June and July across all ecoregions. Combining information from the polygon and point data records, the estimated average fire size in the Alaskan Arctic region is 28.1 km2 (7,070 acres), which is much smaller than in the adjacent boreal forest region, where fires average 203 km2 in high fire years. The largest fire in the database is the Imuruk Basin Fire, which burned 1,680 km2 in 1954 in the Seward Peninsula region. Assessment of future fire potential shows that, in comparison with the historical fire record, fire occurrence in Alaskan tundra is expected to increase under all three climate scenarios. Occurrences of high fire weather danger (FWI > 10) are projected to increase in frequency and magnitude in all regions modeled. The changes in fire weather conditions are expected to vary from one region to another in seasonal occurrence as well as in the severity and frequency of high fire weather danger. While the Alaska Large Fire Database represents the best data available for the Alaskan Arctic, and is superior to what exists for many other regions around the world, particularly other Arctic regions, these fire records need to be used with some caution due to the mixed origin and minimal validation of the data; this is reviewed in the presentation.
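
    The exceedance counting behind "occurrences of high fire weather danger (FWI > 10)" can be sketched as follows. The gamma-distributed daily series are synthetic placeholders, not CanESM2 output, and the threshold simply follows the definition quoted above.

      import numpy as np

      def high_fire_danger_days(fwi_daily, threshold=10.0):
          # Number of days in a daily FWI series exceeding the high-danger threshold.
          fwi_daily = np.asarray(fwi_daily, dtype=float)
          return int((fwi_daily > threshold).sum())

      # Hypothetical single-location comparison of a historical and a future year.
      rng = np.random.default_rng(0)
      historical = rng.gamma(shape=2.0, scale=3.0, size=365)
      future = rng.gamma(shape=2.0, scale=4.0, size=365)
      print(high_fire_danger_days(historical), high_fire_danger_days(future))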

  16. Scenarios of large mammal loss in Europe for the 21st century.

    PubMed

    Rondinini, Carlo; Visconti, Piero

    2015-08-01

    Distributions and populations of large mammals are declining globally, leading to an increase in their extinction risk. We forecasted the distribution of extant European large mammals (17 carnivores and 10 ungulates) based on 2 Rio+20 scenarios of socioeconomic development: business as usual and reduced impact through changes in human consumption of natural resources. These scenarios are linked to scenarios of land-use change and climate change through the spatial allocation of land conversion up to 2050. We used a hierarchical framework to forecast the extent and distribution of mammal habitat based on species' habitat preferences (as described in the International Union for Conservation of Nature Red List database) within a suitable climatic space fitted to the species' current geographic range. We analyzed the geographic and taxonomic variation of habitat loss for large mammals and the potential effect of the reduced impact policy on loss mitigation. Averaging across scenarios, European large mammals were predicted to lose 10% of their habitat by 2050 (25% in the worst-case scenario). Predicted loss was much higher for species in northwestern Europe, where habitat is expected to be lost due to climate and land-use change. Change in human consumption patterns was predicted to substantially improve the conservation of habitat for European large mammals, but not enough to reduce extinction risk if species cannot adapt locally to climate change or disperse. © 2015 Society for Conservation Biology.
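
    The hierarchical framework described above (habitat preference nested within a climatically suitable space) can be pictured as two stacked raster masks. The grids, land-cover class codes and equal-area cells in the sketch below are illustrative assumptions, not the study's actual data layers.

      import numpy as np

      def habitat_extent(landcover, suitable_classes, climate_suitable):
          # Habitat = cells in a preferred land-cover class AND inside the climatic envelope.
          return np.isin(landcover, list(suitable_classes)) & climate_suitable

      def percent_habitat_loss(lc_now, clim_now, lc_2050, clim_2050, suitable_classes):
          # Share of present-day habitat cells that are no longer habitat in 2050.
          now = habitat_extent(lc_now, suitable_classes, clim_now).sum()
          future = habitat_extent(lc_2050, suitable_classes, clim_2050).sum()
          return 100.0 * (now - future) / now if now else float("nan")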

  17. Microbial diversity drives multifunctionality in terrestrial ecosystems

    PubMed Central

    Delgado-Baquerizo, Manuel; Maestre, Fernando T.; Reich, Peter B.; Jeffries, Thomas C.; Gaitan, Juan J.; Encinar, Daniel; Berdugo, Miguel; Campbell, Colin D.; Singh, Brajesh K.

    2016-01-01

    Despite the importance of microbial communities for ecosystem services and human welfare, the relationship between microbial diversity and multiple ecosystem functions and services (that is, multifunctionality) at the global scale has yet to be evaluated. Here we use two independent, large-scale databases with contrasting geographic coverage (from 78 global drylands and from 179 locations across Scotland, respectively), and report that soil microbial diversity positively relates to multifunctionality in terrestrial ecosystems. The direct positive effects of microbial diversity were maintained even when accounting simultaneously for multiple multifunctionality drivers (climate, soil abiotic factors and spatial predictors). Our findings provide empirical evidence that any loss in microbial diversity will likely reduce multifunctionality, negatively impacting the provision of services such as climate regulation, soil fertility and food and fibre production by terrestrial ecosystems. PMID:26817514
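
    The paper's own statistical models are more elaborate; the minimal multiple-regression sketch below only illustrates what "accounting simultaneously for multiple multifunctionality drivers" means in practice, with hypothetical column arrangements for the covariates.

      import numpy as np

      def diversity_partial_effect(multifunctionality, diversity, drivers):
          # OLS with climate, soil and spatial drivers as covariates; the coefficient
          # on diversity is its association with multifunctionality after adjustment.
          X = np.column_stack([np.ones(len(diversity)), diversity, drivers])
          beta, *_ = np.linalg.lstsq(X, multifunctionality, rcond=None)
          return beta[1]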

  18. Microbial diversity drives multifunctionality in terrestrial ecosystems.

    PubMed

    Delgado-Baquerizo, Manuel; Maestre, Fernando T; Reich, Peter B; Jeffries, Thomas C; Gaitan, Juan J; Encinar, Daniel; Berdugo, Miguel; Campbell, Colin D; Singh, Brajesh K

    2016-01-28

    Despite the importance of microbial communities for ecosystem services and human welfare, the relationship between microbial diversity and multiple ecosystem functions and services (that is, multifunctionality) at the global scale has yet to be evaluated. Here we use two independent, large-scale databases with contrasting geographic coverage (from 78 global drylands and from 179 locations across Scotland, respectively), and report that soil microbial diversity positively relates to multifunctionality in terrestrial ecosystems. The direct positive effects of microbial diversity were maintained even when accounting simultaneously for multiple multifunctionality drivers (climate, soil abiotic factors and spatial predictors). Our findings provide empirical evidence that any loss in microbial diversity will likely reduce multifunctionality, negatively impacting the provision of services such as climate regulation, soil fertility and food and fibre production by terrestrial ecosystems.

  19. Forests and Their Canopies: Achievements and Horizons in Canopy Science.

    PubMed

    Nakamura, Akihiro; Kitching, Roger L; Cao, Min; Creedy, Thomas J; Fayle, Tom M; Freiberg, Martin; Hewitt, C N; Itioka, Takao; Koh, Lian Pin; Ma, Keping; Malhi, Yadvinder; Mitchell, Andrew; Novotny, Vojtech; Ozanne, Claire M P; Song, Liang; Wang, Han; Ashton, Louise A

    2017-06-01

    Forest canopies are dynamic interfaces between organisms and atmosphere, providing buffered microclimates and complex microhabitats. Canopies form vertically stratified ecosystems interconnected with other strata. Some forest biodiversity patterns and food webs have been documented and measurements of ecophysiology and biogeochemical cycling have allowed analyses of large-scale transfer of CO2, water, and trace gases between forests and the atmosphere. However, many knowledge gaps remain. With global research networks and databases, and new technologies and infrastructure, we envisage rapid advances in our understanding of the mechanisms that drive the spatial and temporal dynamics of forests and their canopies. Such understanding is vital for the successful management and conservation of global forests and the ecosystem services they provide to the world. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. A Data Analysis Expert System For Large Established Distributed Databases

    NASA Astrophysics Data System (ADS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large, isolated, incompatible NASA databases in order to assist the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.
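
    The "parallel interface to isolated, incompatible databases" can be pictured with a small dispatcher sketch: one request is fanned out to several backends at once and the answers are collected per source. The connector callables and result shapes are hypothetical stand-ins for whatever vendor-specific access layers such a design would wrap; nothing here reflects the actual DANMDS implementation.

      from concurrent.futures import ThreadPoolExecutor

      def query_all(connectors, request):
          # Dispatch one request to every backend in parallel; a failing backend
          # returns an error record instead of blocking the others.
          def run(connector):
              try:
                  return connector(request)
              except Exception as exc:
                  return {"error": str(exc)}
          with ThreadPoolExecutor(max_workers=max(1, len(connectors))) as pool:
              results = list(pool.map(run, connectors.values()))
          return dict(zip(connectors.keys(), results))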
