Sample records for map database position

  1. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is a parcel-based information product specifically designed to define the limits of property boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. As a result of this modernization, the new cadastral database is no longer based on single, static parcel paper maps, but on a global digital map. Despite the strict modernization process, the reform has raised unexpected questions that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database must be further treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  2. High-Order Methods for Computational Physics

    DTIC Science & Technology

    1999-03-01

    computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh [85 ... Fig. 4.19: Connectivity and communications are established by building a voxel database (VDB) of positions. A VDB maps each position to a ... studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear

  3. MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data

    DTIC Science & Technology

    1983-06-01

    segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid ... entries for "crossover" for "theodore roosevelt memorial": entry 0 ...; entry 1: Virginia, "northwest Washington" ...

  4. Sequencing of cDNA Clones from the Genetic Map of Tomato (Lycopersicon esculentum)

    PubMed Central

    Ganal, Martin W.; Czihal, Rosemarie; Hannappel, Ulrich; Kloos, Dorothee-U.; Polley, Andreas; Ling, Hong-Qing

    1998-01-01

    The dense RFLP linkage map of tomato (Lycopersicon esculentum) contains >300 anonymous cDNA clones. Of those clones, 272 were partially or completely sequenced. The sequences were compared at the DNA and protein level to known genes in databases. For 57% of the clones, a significant match to previously described genes was found. The information will permit the conversion of those markers to STS markers and allow their use in PCR-based mapping experiments. Furthermore, it will facilitate the comparative mapping of genes across distantly related plant species by direct comparison of DNA sequences and map positions. [cDNA sequence data reported in this paper have been submitted to the EMBL database under accession nos. AA824695–AA825005 and the dbEST_Id database under accession nos. 1546519–1546862.] PMID:9724330

  5. Fast fingerprint database maintenance for indoor positioning based on UGV SLAM.

    PubMed

    Tang, Jian; Chen, Yuwei; Chen, Liang; Liu, Jingbin; Hyyppä, Juha; Kukko, Antero; Kaartinen, Harri; Hyyppä, Hannu; Chen, Ruizhi

    2015-03-04

    Indoor positioning technology has become increasingly important over the last two decades. Utilizing Received Signal Strength Indicator (RSSI) fingerprints of Signals of OPportunity (SOP) is a promising alternative navigation solution. However, as the RSSIs vary during operation due to their physical nature and are easily affected by environmental changes, one challenge of the indoor fingerprinting method is maintaining the RSSI fingerprint database in a timely and effective manner. In this paper, a solution for rapidly updating the fingerprint database is presented, based on a self-developed Unmanned Ground Vehicle (UGV) platform, NAVIS. Several SOP sensors were installed on NAVIS for collecting indoor fingerprint information, including a digital compass measuring magnetic field intensity, a light sensor measuring light intensity, and a smartphone that collects the number of access points and the RSSIs of the pre-installed WiFi network. The NAVIS platform generates a map of the indoor environment and collects the SOPs during the mapping process, and the SOP fingerprint database is then interpolated and updated in real time. Field tests were carried out to evaluate the effectiveness and efficiency of the proposed method. The results showed that the fingerprint databases can be quickly created and updated with a higher sampling frequency (5 Hz) and denser reference points compared with traditional methods, and that the indoor map can be generated without prior information. Moreover, environmental changes can also be detected quickly for fingerprint-based indoor positioning.
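
    The core data structure described here, reference positions paired with RSSI fingerprints that are updated as new survey passes arrive and queried by nearest-neighbour matching, can be sketched as follows. This is a minimal illustration of the general fingerprinting idea, not the NAVIS pipeline; the blending factor, distance metric and field names are assumptions.

      # Minimal sketch of an RSSI fingerprint database with k-nearest-neighbour
      # position estimation; the update policy and field names are illustrative only.
      import math
      from typing import Dict, Tuple

      Position = Tuple[float, float]   # (x, y) in metres, local frame
      Fingerprint = Dict[str, float]   # AP identifier -> RSSI in dBm

      class FingerprintDB:
          def __init__(self) -> None:
              self.entries: Dict[Position, Fingerprint] = {}

          def update(self, pos: Position, obs: Fingerprint, alpha: float = 0.5) -> None:
              """Insert a new reference point or blend it with an existing one."""
              old = self.entries.get(pos)
              if old is None:
                  self.entries[pos] = dict(obs)
              else:
                  for ap, rssi in obs.items():
                      old[ap] = alpha * rssi + (1.0 - alpha) * old.get(ap, rssi)

          @staticmethod
          def _distance(a: Fingerprint, b: Fingerprint, missing: float = -100.0) -> float:
              aps = set(a) | set(b)
              return math.sqrt(sum((a.get(ap, missing) - b.get(ap, missing)) ** 2 for ap in aps))

          def locate(self, obs: Fingerprint, k: int = 3) -> Position:
              """Estimate position as the centroid of the k closest reference points."""
              ranked = sorted(self.entries, key=lambda p: self._distance(self.entries[p], obs))[:k]
              xs, ys = zip(*ranked)
              return (sum(xs) / len(xs), sum(ys) / len(ys))

      db = FingerprintDB()
      db.update((0.0, 0.0), {"ap1": -40.0, "ap2": -70.0})
      db.update((5.0, 0.0), {"ap1": -65.0, "ap2": -45.0})
      print(db.locate({"ap1": -50.0, "ap2": -60.0}))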

  6. Geologic Map Database of Texas

    USGS Publications Warehouse

    Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.

    2005-01-01

    The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control were the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.

  7. GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data

    PubMed Central

    Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie

    2008-01-01

    The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org. PMID:17932055

  8. Fast Fingerprint Database Maintenance for Indoor Positioning Based on UGV SLAM

    PubMed Central

    Tang, Jian; Chen, Yuwei; Chen, Liang; Liu, Jingbin; Hyyppä, Juha; Kukko, Antero; Kaartinen, Harri; Hyyppä, Hannu; Chen, Ruizhi

    2015-01-01

    Indoor positioning technology has become increasingly important over the last two decades. Utilizing Received Signal Strength Indicator (RSSI) fingerprints of Signals of OPportunity (SOP) is a promising alternative navigation solution. However, as the RSSIs vary during operation due to their physical nature and are easily affected by environmental changes, one challenge of the indoor fingerprinting method is maintaining the RSSI fingerprint database in a timely and effective manner. In this paper, a solution for rapidly updating the fingerprint database is presented, based on a self-developed Unmanned Ground Vehicle (UGV) platform, NAVIS. Several SOP sensors were installed on NAVIS for collecting indoor fingerprint information, including a digital compass measuring magnetic field intensity, a light sensor measuring light intensity, and a smartphone that collects the number of access points and the RSSIs of the pre-installed WiFi network. The NAVIS platform generates a map of the indoor environment and collects the SOPs during the mapping process, and the SOP fingerprint database is then interpolated and updated in real time. Field tests were carried out to evaluate the effectiveness and efficiency of the proposed method. The results showed that the fingerprint databases can be quickly created and updated with a higher sampling frequency (5 Hz) and denser reference points compared with traditional methods, and that the indoor map can be generated without prior information. Moreover, environmental changes can also be detected quickly for fingerprint-based indoor positioning. PMID:25746096

  9. Mapping Indigenous Depth of Place

    ERIC Educational Resources Information Center

    Pearce, Margaret Wickens; Louis, Renee Pualani

    2008-01-01

    Indigenous communities have successfully used Western geospatial technologies (GT) (for example, digital maps, satellite images, geographic information systems (GIS), and global positioning systems (GPS)) since the 1970s to protect tribal resources, document territorial sovereignty, create tribal utility databases, and manage watersheds. The use…

  10. BrainMap VBM: An environment for structural meta-analysis.

    PubMed

    Vanasse, Thomas J; Fox, P Mickle; Barron, Daniel S; Robertson, Michaela; Eickhoff, Simon B; Lancaster, Jack L; Fox, Peter T

    2018-05-02

    The BrainMap database is a community resource that curates peer-reviewed, coordinate-based human neuroimaging literature. By pairing the results of neuroimaging studies with their relevant meta-data, BrainMap facilitates coordinate-based meta-analysis (CBMA) of the neuroimaging literature en masse or at the level of experimental paradigm, clinical disease, or anatomic location. Initially dedicated to the functional, task-activation literature, BrainMap is now expanding to include voxel-based morphometry (VBM) studies in a separate sector, titled: BrainMap VBM. VBM is a whole-brain, voxel-wise method that measures significant structural differences between or within groups which are reported as standardized, peak x-y-z coordinates. Here we describe BrainMap VBM, including the meta-data structure, current data volume, and automated reverse inference functions (region-to-disease profile) of this new community resource. CBMA offers a robust methodology for retaining true-positive and excluding false-positive findings across studies in the VBM literature. As with BrainMap's functional database, BrainMap VBM may be synthesized en masse or at the level of clinical disease or anatomic location. As a use-case scenario for BrainMap VBM, we illustrate a trans-diagnostic data-mining procedure wherein we explore the underlying network structure of 2,002 experiments representing over 53,000 subjects through independent components analysis (ICA). To reduce data-redundancy effects inherent to any database, we demonstrate two data-filtering approaches that proved helpful to ICA. Finally, we apply hierarchical clustering analysis (HCA) to measure network- and disease-specificity. This procedure distinguished psychiatric from neurological diseases. We invite the neuroscientific community to further exploit BrainMap VBM with other modeling approaches. © 2018 Wiley Periodicals, Inc.

  11. Exploring the potential offered by legacy soil databases for ecosystem services mapping of Central African soils

    NASA Astrophysics Data System (ADS)

    Verdoodt, Ann; Baert, Geert; Van Ranst, Eric

    2014-05-01

    Central African soil resources are characterised by a large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality and ultimately culminate in declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent projects financed through development cooperation led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing the potential impacts of this quality on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Both geo-matching and class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties strongly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based on analyses done in Rwanda, complemented with ongoing research results and prospects for Burundi.

  12. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches, in which the measured position is projected onto the road links (centerlines) and the lateral error of the measured position is reduced. With advances in data acquisition, high definition maps that contain extra information, such as road lanes, are now generated. These road lanes can be utilized to mitigate the positional error and improve positioning accuracy. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and the ground truth is provided using the Real-Time Kinematic (RTK) technique. Results show that the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with a mean and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with a mean and standard deviation of 6.725 and 5.899 meters.
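
    The detection steps listed above, a colour mask for road marks followed by a Hough-transform line fit, can be sketched with OpenCV as below. This is an illustrative sketch, not the authors' implementation; the HSV thresholds, Hough parameters and the synthetic test frame are assumptions, and the subsequent homography and pose estimation steps are omitted.

      # Sketch of the lane-boundary detection steps described above: a colour mask
      # for road marks followed by a probabilistic Hough line fit. Thresholds are illustrative.
      import cv2
      import numpy as np

      def detect_lane_lines(bgr: np.ndarray):
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          # Mask for white road marks (low saturation, high value); tune per camera.
          mask = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
          edges = cv2.Canny(mask, 50, 150)
          # Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
          lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                  minLineLength=40, maxLineGap=10)
          return [] if lines is None else [tuple(l[0]) for l in lines]

      # Synthetic test frame: dark road with two bright lane marks.
      frame = np.zeros((240, 320, 3), dtype=np.uint8)
      cv2.line(frame, (60, 239), (140, 0), (255, 255, 255), 5)
      cv2.line(frame, (260, 239), (180, 0), (255, 255, 255), 5)
      print(detect_lane_lines(frame)[:2])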

  13. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of the majority of them are their developers. There are several reasons for this: incompatibility, the specificity of tasks and objects, and so on. However, the main obstacles to the wide usage of geoscience databases are complexity for developers and complication for users. Complex architecture leads to high costs that block public access, and complication prevents users from understanding when and how to use a database. Only databases associated with Google Maps avoid these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed such a database and a web interface to work with it; it is accessible at maps.sch192.ru. In this database a result is the value of a parameter (of any kind) at a station with a certain position, associated with metadata: the date when the result was obtained, the type of station (lake, soil, etc.) and the contributor who sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a Google Maps satellite image as a point at the given position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own. The results can also be extracted as a *.csv file. For both types of representation, one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: name of the station; latitude (dd.dddddd); longitude (ddd.dddddd); station type; parameter type; parameter value; date (yyyy-mm-dd). The contributor is recognised at login. This is the minimal set of features required to connect the value of a parameter with a position and see the results. All complicated data treatment can be carried out in other programs after extracting the filtered data into a *.csv file, which makes the database understandable for non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with Google Maps and jQuery UI to create the user interface. The database is multilingual: association tables connect translations with elements of the database. In total, the development required about 150 hours. The database still has several problems. The main one is the reliability of the data; this really requires an expert system for estimating reliability, but the elaboration of such a system would take more resources than the database itself. The second is the problem of stream selection: how to select stations that are connected with each other (for example, belonging to one water stream) and indicate their sequence. Currently the interface is in English and Russian. Some problems have already been solved. For example, the "same station" problem (sometimes the distance between stations is smaller than the positional error): when a new station is added to the database, our application automatically finds stations near that place. We also addressed the problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter); this has been solved using association tables. If you would like to see the interface in your language, just contact us and we will send you the list of terms and phrases to translate. The main advantage of the database is that it is totally open: everybody can view and extract the data and use them for non-commercial purposes free of charge, and registered users can contribute to the database without being paid. We hope that it will be widely used, first of all for educational purposes, but professional scientists could use it as well.
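
    A minimal reader for upload rows in the column order described above might look like the following. The comma delimiter, field names and error handling are assumptions; the real service may differ.

      # Sketch of a validator for upload rows in the column order described above;
      # the exact delimiter and error handling of the real service are assumptions.
      import csv
      import io
      from datetime import datetime

      COLUMNS = ["station", "latitude", "longitude", "station_type",
                 "parameter_type", "parameter_value", "date"]

      def parse_rows(text: str):
          for row in csv.reader(io.StringIO(text)):
              rec = dict(zip(COLUMNS, row))
              rec["latitude"] = float(rec["latitude"])     # dd.dddddd
              rec["longitude"] = float(rec["longitude"])   # ddd.dddddd
              rec["parameter_value"] = float(rec["parameter_value"])
              rec["date"] = datetime.strptime(rec["date"], "%Y-%m-%d").date()
              if not (-90 <= rec["latitude"] <= 90 and -180 <= rec["longitude"] <= 180):
                  raise ValueError(f"bad coordinates in {row}")
              yield rec

      sample = "Lake A,55.123456,037.654321,lake,EC,125.0,2011-07-15\n"
      print(list(parse_rows(sample)))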

  14. Green Map Exercises as an Avenue for Problem-Based Learning in a Data-Rich Environment

    ERIC Educational Resources Information Center

    Tulloch, David; Graff, Elizabeth

    2007-01-01

    This article describes a series of data-based Green Map learning exercises positioned within a problem-based framework and examines the appropriateness of projects like these as a form of geography education. Problem-based learning (PBL) is an educational technique that engages students in learning through activities that require creative problem…

  15. Modeling, Simulation, and Characterization of Distributed Multi-Agent Systems

    DTIC Science & Technology

    2012-01-01

    capabilities (vision, LIDAR, differential global positioning, ultrasonic proximity sensing, etc.), the agents comprising a MAS tend to have somewhat lesser...on the simultaneous localization and mapping (SLAM) problem [19]. SLAM acknowledges that externally-provided localization information is not...continually-updated mapping databases, generates a comprehensive representation of the spatial and spectral environment. Many times though, inherent SLAM

  16. The NavTrax fleet management system

    NASA Astrophysics Data System (ADS)

    McLellan, James F.; Krakiwsky, Edward J.; Schleppe, John B.; Knapp, Paul L.

    The NavTrax System, a dispatch-type automatic vehicle location and navigation system, is discussed. Attention is given to its positioning, communication, digital mapping, and dispatch center components. The positioning module is a robust GPS (Global Positioning System)-based system integrated with dead reckoning devices by a decentralized-federated filter, making the module fault tolerant. The error behavior and characteristics of GPS, rate gyro, compass, and odometer sensors are discussed. The communications module, as presently configured, utilizes UHF radio technology, and plans are being made to employ a digital cellular telephone system. Polling and automatic smart vehicle reporting are also discussed. The digital mapping component is an intelligent digital single line road network database stored in vector form with full connectivity and address ranges. A limited form of map matching is performed for the purposes of positioning, but its main purpose is to define location once position is determined.

  17. A Design of Irregular Grid Map for Large-Scale Wi-Fi LAN Fingerprint Positioning Systems

    PubMed Central

    Kim, Jae-Hoon; Min, Kyoung Sik; Yeo, Woon-Young

    2014-01-01

    The rapid growth of mobile communication and the proliferation of smartphones have drawn significant attention to location-based services (LBSs). One of the most important factors in the vitalization of LBSs is the accurate position estimation of a mobile device. The Wi-Fi positioning system (WPS) is a new positioning method that measures received signal strength indication (RSSI) data from all Wi-Fi access points (APs) and stores them in a large database as a form of radio fingerprint map. Because of the millions of APs in urban areas, radio fingerprints are seriously contaminated and confused. Moreover, the algorithmic advances for positioning face computational limitation. Therefore, we present a novel irregular grid structure and data analytics for efficient fingerprint map management. The usefulness of the proposed methodology is presented using the actual radio fingerprint measurements taken throughout Seoul, Korea. PMID:25302315
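
    The role of a grid in a fingerprint map, bucketing reference fingerprints by cell so that a query only scans nearby cells, can be illustrated with the regular-grid baseline below. The paper's contribution is an irregular grid that refines this idea; the cell size and record layout here are assumptions.

      # Simplified uniform-grid version of a fingerprint map index: reference
      # fingerprints are bucketed by cell so a query only scans nearby cells.
      # The paper proposes an *irregular* grid; this is the regular-grid baseline.
      from collections import defaultdict
      from typing import Dict, List, Tuple

      CELL = 25.0  # metres; tuning parameter, not from the paper

      def cell_key(x: float, y: float) -> Tuple[int, int]:
          return (int(x // CELL), int(y // CELL))

      class GridFingerprintMap:
          def __init__(self) -> None:
              self.cells: Dict[Tuple[int, int], List[dict]] = defaultdict(list)

          def add(self, x: float, y: float, rssi: Dict[str, float]) -> None:
              self.cells[cell_key(x, y)].append({"pos": (x, y), "rssi": rssi})

          def candidates(self, x: float, y: float) -> List[dict]:
              cx, cy = cell_key(x, y)
              out: List[dict] = []
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      out.extend(self.cells.get((cx + dx, cy + dy), []))
              return out

      m = GridFingerprintMap()
      m.add(10.0, 12.0, {"ap1": -55.0})
      m.add(400.0, 90.0, {"ap1": -80.0})
      print(len(m.candidates(12.0, 15.0)))   # only the nearby reference point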

  18. A design of irregular grid map for large-scale Wi-Fi LAN fingerprint positioning systems.

    PubMed

    Kim, Jae-Hoon; Min, Kyoung Sik; Yeo, Woon-Young

    2014-01-01

    The rapid growth of mobile communication and the proliferation of smartphones have drawn significant attention to location-based services (LBSs). One of the most important factors in the vitalization of LBSs is the accurate position estimation of a mobile device. The Wi-Fi positioning system (WPS) is a new positioning method that measures received signal strength indication (RSSI) data from all Wi-Fi access points (APs) and stores them in a large database as a form of radio fingerprint map. Because of the millions of APs in urban areas, radio fingerprints are seriously contaminated and confused. Moreover, the algorithmic advances for positioning face computational limitation. Therefore, we present a novel irregular grid structure and data analytics for efficient fingerprint map management. The usefulness of the proposed methodology is presented using the actual radio fingerprint measurements taken throughout Seoul, Korea.

  19. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
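
    The retrieval step described above, one global descriptor per omnidirectional image with the nearest stored descriptor giving the position estimate, can be sketched as follows. The descriptor here, a flattened and normalised Radon sinogram computed with scikit-image, is a simplification of the paper's construction, and the images and positions are synthetic.

      # Minimal illustration of global-appearance localisation: each stored image is
      # reduced to one Radon-based descriptor and a query is matched to the nearest one.
      import numpy as np
      from skimage.transform import radon

      def radon_descriptor(img: np.ndarray, n_angles: int = 36) -> np.ndarray:
          theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
          sino = radon(img.astype(float), theta=theta, circle=False)
          d = sino.ravel()
          return d / (np.linalg.norm(d) + 1e-12)

      rng = np.random.default_rng(0)
      map_images = {(0.0, 0.0): rng.random((64, 64)),
                    (2.0, 1.0): rng.random((64, 64))}
      map_db = {pos: radon_descriptor(img) for pos, img in map_images.items()}

      query = map_images[(2.0, 1.0)] + 0.05 * rng.random((64, 64))  # noisy revisit
      q = radon_descriptor(query)
      best = min(map_db, key=lambda pos: np.linalg.norm(map_db[pos] - q))
      print(best)   # expected: (2.0, 1.0)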

  20. Bathymetry of Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina, 2008

    USGS Publications Warehouse

    Nagle, D.D.; Campbell, B.G.; Lowery, M.A.

    2009-01-01

    The increasing use and importance of lakes for water supply to communities enhance the need for an accurate methodology to determine lake bathymetry and storage capacity. A global positioning receiver and a fathometer were used to collect position data and water depth in February 2008 at Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and stage-area and -volume relations were created from the geographic information database.

  1. Integrated technologies for solid waste bin monitoring system.

    PubMed

    Arebey, Maher; Hannan, M A; Basri, Hassan; Begum, R A; Abdullah, Huda

    2011-06-01

    An integration of communication technologies, namely radio frequency identification (RFID), the global positioning system (GPS), the general packet radio service (GPRS), and a geographic information system (GIS), with a camera is constructed for a solid waste monitoring system. The aim is to improve the way of responding to customer inquiries and emergency cases and to estimate the solid waste amount without any involvement of the truck driver. The proposed system consists of an RFID tag mounted on the bin, an RFID reader in the truck, GPRS/GSM as web server, and GIS as map server, database server, and control server. The tracking devices mounted in the trucks collect location information in real time via GPS. This information is transferred continuously through GPRS to a central database. Users are able to view the current location of each truck during the collection stage via a web-based application and thereby manage the fleet. The trucks' positions and trash bin information are displayed on a digital map, which is made available by the map server. Thus, both the bins' solid waste and the trucks are monitored using the developed system.

  2. De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.

    2012-12-01

    Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations, and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as the database and PHP/HTML as the scripting language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and the agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime through any web browser and on any operating system. Access can be secured using general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists standards defined in the database and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard. (2) "Analyses" lists typical setups to use for quantitative analyses and allows calculation of a mineral composition based on a mineral formula, or calculation of a mineral formula based on a fixed amount of oxygen or of cations (using an analysis in element or oxide weight-%); the latter includes re-calculation of H2O/CO2 based on stoichiometry and oxygen correction for F and Cl. Another option offers a list of any available standards and possible peak or background interferences for a series of elements. (3) "X-ray maps" lists the different setups recommended for element mapping using WDS, and provides a map calculator to facilitate map setups and to estimate the total mapping time. (4) "X-ray data" lists all x-ray lines for a specific element (K, L, M, absorption edges, and satellite peaks) in terms of energy, wavelength and peak position. A check for possible interferences on peak or background is also possible. Theoretical x-ray peak positions for each crystal are calculated based on the 2d spacing of each crystal and the wavelength of each line. (5) The "Agenda" menu displays the reservation dates for each month and for each EMP lab defined. It also offers a reservation request option, this request being sent by email to the EMP manager for approval. (6) Finally, "Admin" is password restricted, and contains all necessary options to manage the database through user-friendly forms. The installation of this database is easy, and knowledge of HTML, PHP, or MySQL is unnecessary to install, configure, manage, or use it. A working database is accessible at http://cub.geoloweb.ch.

  3. Experiences with Acquiring Highly Redundant Spatial Data to Support Driverless Vehicle Technologies

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2018-05-01

    As vehicle technology moves towards higher autonomy, the demand for highly accurate geospatial data is rapidly increasing, as accurate maps have huge potential for increasing safety. In particular, high definition 3D maps, including road topography and infrastructure, as well as city models along the transportation corridors, represent the necessary support for driverless vehicles. In this effort, a vehicle equipped with high-, medium- and low-resolution active and passive cameras acquired data in a typical traffic environment, represented here by the OSU campus, where GPS/GNSS data are available along with other navigation sensor data streams. The data streams can be used for two purposes. First, high-definition 3D maps can be created by integrating all the sensory data, and Data Analytics/Big Data methods can be tested for automatic object space reconstruction. Second, the data streams can support algorithmic research for driverless vehicle technologies, including object avoidance, navigation/positioning, detection of pedestrians and bicyclists, etc. Crucial cross-performance analyses of map database resolution and accuracy with respect to sensor performance metrics can be derived to achieve an economical solution for accurate driverless vehicle positioning. These, in turn, could provide essential information on optimizing the choice of geospatial map databases and sensor quality to support driverless vehicle technologies. The paper reviews the data acquisition and primary data processing challenges and performance results.

  4. Arterial spin labeling-based Z-maps have high specificity and positive predictive value for neurodegenerative dementia compared to FDG-PET.

    PubMed

    Fällmar, David; Haller, Sven; Lilja, Johan; Danfors, Torsten; Kilander, Lena; Tolboom, Nelleke; Egger, Karl; Kellner, Elias; Croon, Philip M; Verfaillie, Sander C J; van Berckel, Bart N M; Ossenkoppele, Rik; Barkhof, Frederik; Larsson, Elna-Marie

    2017-10-01

    Cerebral perfusion analysis based on arterial spin labeling (ASL) MRI has been proposed as an alternative to FDG-PET in patients with neurodegenerative disease. Z-maps show normal distribution values relating an image to a database of controls. They are routinely used for FDG-PET to demonstrate disease-specific patterns of hypometabolism at the individual level. This study aimed to compare the performance of Z-maps based on ASL to FDG-PET. Data were combined from two separate sites, each cohort consisting of patients with Alzheimer's disease (n = 18 + 7), frontotemporal dementia (n = 12 + 8) and controls (n = 9 + 29). Subjects underwent pseudocontinuous ASL and FDG-PET. Z-maps were created for each subject and modality. Four experienced physicians visually assessed the 166 Z-maps in random order, blinded to modality and diagnosis. Discrimination of patients versus controls using ASL-based Z-maps yielded high specificity (84%) and positive predictive value (80%), but significantly lower sensitivity compared to FDG-PET-based Z-maps (53% vs. 96%, p < 0.001). Among true-positive cases, correct diagnoses were made in 76% (ASL) and 84% (FDG-PET) (p = 0.168). ASL-based Z-maps can be used for visual assessment of neurodegenerative dementia with high specificity and positive predictive value, but with inferior sensitivity compared to FDG-PET. • ASL-based Z-maps yielded high specificity and positive predictive value in neurodegenerative dementia. • ASL-based Z-maps had significantly lower sensitivity compared to FDG-PET-based Z-maps. • FDG-PET might be reserved for ASL-negative cases where clinical suspicion persists. • Findings were similar at two study sites.
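
    A Z-map in the sense used above expresses an individual's image voxel-wise in standard-deviation units relative to a database of controls. The NumPy sketch below shows that computation on synthetic volumes; registration, smoothing and brain masking are omitted.

      # Voxel-wise Z-map: an individual volume expressed in standard-deviation
      # units relative to a control database. Arrays here are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      controls = rng.normal(1.0, 0.1, size=(30, 16, 16, 16))   # 30 control volumes
      patient = rng.normal(1.0, 0.1, size=(16, 16, 16))
      patient[4:8, 4:8, 4:8] -= 0.4                             # simulated hypoperfusion

      mu = controls.mean(axis=0)
      sigma = controls.std(axis=0, ddof=1)
      z_map = (patient - mu) / np.where(sigma > 0, sigma, np.inf)

      print(float(z_map[5, 5, 5]))          # strongly negative inside the lesion
      print(float(np.abs(z_map[0, 0, 0])))  # near zero elsewhere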

  5. The Design and Product of National 1:1000000 Cartographic Data of Topographic Map

    NASA Astrophysics Data System (ADS)

    Wang, Guizhi

    2016-06-01

    The National Administration of Surveying, Mapping and Geoinformation launched the project of dynamic updating of the national fundamental geographic information database in 2012. Within this project, the 1:50000 database is updated once a year, and the 1:250000 database is downsized from it and updated in linkage. In 2014, using the latest achievements of the 1:250000 database, the 1:1000000 digital line graph database was comprehensively updated, and cartographic data of the topographic map and digital elevation model data were generated at the same time. This article mainly introduces the national 1:1000000 cartographic data of the topographic map, including feature content, database structure, database-driven mapping technology, workflow, and so on.

  6. Retrospective Conversion of Solar Data Printed in "Synoptic Maps of the Solar Chromosphere": A Scientific and Librarianship Project

    NASA Astrophysics Data System (ADS)

    Laurenceau, A.; Aboudarham, J.; Renié, C.

    2015-04-01

    Between 1928 and 2003, the Observatoire de Paris published solar activity maps and their corresponding data tables, first in the Annals of the Meudon Observatory, then in the Synoptic Maps of the Solar Chromosphere. These maps represent the main solar structures in a single view, spread out over a complete Carrington rotation, together with tables of associated data containing various information on these structures such as positions, lengths, morphological characteristics, and behavior. Since 2003, these maps and data tables have no longer been released in print; they are published only in the online BASS2000 database, the solar database maintained by LESIA (Laboratory for space studies and astrophysical instruments). In order to make the first 80 years of observations, which were available only on paper, accessible and usable, LESIA and the Library of the Observatory have started a project to digitize the publications, enter the data with the assistance of a specialized company, and then migrate the resulting files into BASS2000 and into the Heliophysics Features Catalog created in the framework of the European project HELIO.

  7. The global compendium of Aedes aegypti and Ae. albopictus occurrence

    NASA Astrophysics Data System (ADS)

    Kraemer, Moritz U. G.; Sinka, Marianne E.; Duda, Kirsten A.; Mylne, Adrian; Shearer, Freya M.; Brady, Oliver J.; Messina, Jane P.; Barker, Christopher M.; Moore, Chester G.; Carvalho, Roberta G.; Coelho, Giovanini E.; van Bortel, Wim; Hendrickx, Guy; Schaffner, Francis; Wint, G. R. William; Elyazar, Iqbal R. F.; Teng, Hwa-Jen; Hay, Simon I.

    2015-07-01

    Aedes aegypti and Ae. albopictus are the main vectors transmitting dengue and chikungunya viruses. Despite being pathogens of global public health importance, knowledge of their vectors’ global distribution remains patchy and sparse. A global geographic database of known occurrences of Ae. aegypti and Ae. albopictus between 1960 and 2014 was compiled. Herein we present the database, which comprises occurrence data linked to point or polygon locations, derived from peer-reviewed literature and unpublished studies including national entomological surveys and expert networks. We describe all data collection processes, as well as geo-positioning methods, database management and quality-control procedures. This is the first comprehensive global database of Ae. aegypti and Ae. albopictus occurrence, consisting of 19,930 and 22,137 geo-positioned occurrence records respectively. Both datasets can be used for a variety of mapping and spatial analyses of the vectors and, by inference, the diseases they transmit.

  8. The global compendium of Aedes aegypti and Ae. albopictus occurrence

    PubMed Central

    Kraemer, Moritz U. G.; Sinka, Marianne E.; Duda, Kirsten A.; Mylne, Adrian; Shearer, Freya M.; Brady, Oliver J.; Messina, Jane P.; Barker, Christopher M.; Moore, Chester G.; Carvalho, Roberta G.; Coelho, Giovanini E.; Van Bortel, Wim; Hendrickx, Guy; Schaffner, Francis; Wint, G. R. William; Elyazar, Iqbal R. F.; Teng, Hwa-Jen; Hay, Simon I.

    2015-01-01

    Aedes aegypti and Ae. albopictus are the main vectors transmitting dengue and chikungunya viruses. Despite being pathogens of global public health importance, knowledge of their vectors’ global distribution remains patchy and sparse. A global geographic database of known occurrences of Ae. aegypti and Ae. albopictus between 1960 and 2014 was compiled. Herein we present the database, which comprises occurrence data linked to point or polygon locations, derived from peer-reviewed literature and unpublished studies including national entomological surveys and expert networks. We describe all data collection processes, as well as geo-positioning methods, database management and quality-control procedures. This is the first comprehensive global database of Ae. aegypti and Ae. albopictus occurrence, consisting of 19,930 and 22,137 geo-positioned occurrence records respectively. Both datasets can be used for a variety of mapping and spatial analyses of the vectors and, by inference, the diseases they transmit. PMID:26175912

  9. Translation from the collaborative OSM database to cartography

    NASA Astrophysics Data System (ADS)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record various themes in the open database regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is under development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps at a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly drawn automatically; drawing automation and data management are part of the map creation, as well as a final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.

  10. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input for labeling, matches each frame with a gesture from the database, or decides that there is no suitable gesture in the database. First, each frame of the video sequence is classified separately, without inter-frame information. Then, a run of successfully labeled frames with the same gesture is grouped into a single static gesture. We propose a method of combined frame segmentation using the depth map and the RGB image. The primary segmentation is based on the depth map: it gives information about the position of the hand and provides a rough hand border. Then, based on the color image, the border is refined and the shape of the hand is analyzed. The continuous skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture are a description of the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture dataset consisting of 2,700 frames.
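
    The primary depth-based segmentation step described above, which assumes the hand is the closest valid object to the sensor, can be sketched as a threshold around the minimum valid depth. The band width and the synthetic depth frame are assumptions.

      # Sketch of the primary depth-based segmentation step: keep pixels within a
      # fixed depth band of the nearest valid reading. Parameters are illustrative.
      import numpy as np

      def segment_hand(depth_mm: np.ndarray, band_mm: float = 80.0) -> np.ndarray:
          valid = depth_mm > 0                       # 0 = no Kinect depth reading
          nearest = depth_mm[valid].min()
          return valid & (depth_mm <= nearest + band_mm)

      depth = np.full((240, 320), 2000, dtype=np.uint16)   # background ~2 m
      depth[100:160, 140:200] = 650                        # hand ~0.65 m
      depth[0:10, 0:10] = 0                                # missing readings
      mask = segment_hand(depth)
      print(mask.sum())   # roughly the hand region's pixel count (60*60 = 3600)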

  11. 77 FR 47492 - Thirteenth Meeting: RTCA Special Committee 217, Terrain and Airport Mapping Databases, Joint With...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-08

    AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice to advise the public of the thirteenth meeting of RTCA Special Committee 217, Terrain and Airport Mapping Databases, Joint with EUROCAE WG-44. DATES: The meeting will be held September 10-14, 2012, from 9...

  12. NABIC marker database: A molecular markers information network of agricultural crops.

    PubMed

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed a molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker and gene annotation. It provides 7,250 marker locations, 3,301 RSN marker properties, and 3,280 molecular marker annotation records for agricultural plants. Each individual molecular marker provides information such as marker name, expressed sequence tag number, gene definition and general marker information. This updated marker-based database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and positional gene functions using specific molecular markers. The database is available for free at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  13. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    USGS Publications Warehouse

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  14. Lamont-Doherty Earth Observatory |

    Science.gov Websites


  15. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290 × faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
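
    The seeding idea the method builds on, (w, k)-minimizers, keeps only the k-mer with the smallest hash in every window of w consecutive k-mers. The sketch below shows that selection; the hash function and parameters are illustrative, and the paper's MinHash identity estimation is not reproduced here.

      # Sketch of (w, k)-minimizer selection: in every window of w consecutive
      # k-mers, keep the k-mer with the smallest hash value.
      import hashlib

      def kmer_hash(kmer: str) -> int:
          return int.from_bytes(hashlib.blake2b(kmer.encode(), digest_size=8).digest(), "big")

      def minimizers(seq: str, k: int = 5, w: int = 4):
          hashes = [(kmer_hash(seq[i:i + k]), i) for i in range(len(seq) - k + 1)]
          picked = set()
          for start in range(len(hashes) - w + 1):
              picked.add(min(hashes[start:start + w]))       # (hash, position) pairs
          return sorted((pos, seq[pos:pos + k]) for _, pos in picked)

      print(minimizers("ACGTACGTTGCAACGT"))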

  16. Geologic map and map database of parts of Marin, San Francisco, Alameda, Contra Costa, and Sonoma counties, California

    USGS Publications Warehouse

    Blake, M.C.; Jones, D.L.; Graymer, R.W.; digital database by Soule, Adam

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (mageo.txt, mageo.pdf, or mageo.ps), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  17. Towards computational improvement of DNA database indexing and short DNA query searching.

    PubMed

    Stojanov, Done; Koceski, Sašo; Mileva, Aleksandra; Koceska, Nataša; Bande, Cveta Martinovska

    2014-09-03

    In order to facilitate and speed up the search of massive DNA databases, the database is indexed at the beginning, employing a mapping function. By searching through the indexed data structure, exact query hits can be identified. If the database is searched against an annotated DNA query, such as a known promoter consensus sequence, then the starting locations and the number of potential genes can be determined. This is particularly relevant if unannotated DNA sequences have to be functionally annotated. However, indexing a massive DNA database and searching an indexed data structure with millions of entries is a time-demanding process. In this paper, we propose a fast DNA database indexing and searching approach, identifying all query hits in the database, without having to examine all entries in the indexed data structure, limiting the maximum length of a query that can be searched against the database. By applying the proposed indexing equation, the whole human genome could be indexed in 10 hours on a personal computer, under the assumption that there is enough RAM to store the indexed data structure. Analysing the methodology proposed by Reneker, we observed that hits at starting positions [Formula: see text] are not reported, if the database is searched against a query shorter than [Formula: see text] nucleotides, such that [Formula: see text] is the length of the DNA database words being mapped and [Formula: see text] is the length of the query. A solution of this drawback is also presented.
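
    The general scheme discussed above, indexing fixed-length database words and looking up query positions through the index, can be sketched as follows. A plain hash map stands in for the paper's indexing equation, and the word length and sequences are illustrative. As the abstract notes for short queries, occurrences starting after the last indexed word position are not found by this kind of lookup.

      # Sketch of fixed-length word indexing and exact query lookup; a Python dict
      # stands in for the indexing equation discussed in the paper.
      from collections import defaultdict

      def build_index(db_seq: str, word_len: int):
          index = defaultdict(list)                 # word -> start positions
          for i in range(len(db_seq) - word_len + 1):
              index[db_seq[i:i + word_len]].append(i)
          return index

      def find_hits(index, db_seq: str, query: str, word_len: int):
          """Exact hits of a query, verified against the database sequence."""
          if len(query) >= word_len:
              candidates = index.get(query[:word_len], [])
          else:   # short query: scan indexed words that start with it
              candidates = [p for w, ps in index.items() if w.startswith(query) for p in ps]
          return sorted(p for p in candidates if db_seq[p:p + len(query)] == query)

      db = "TATAATGCGTATAATCC"
      idx = build_index(db, word_len=6)
      print(find_hits(idx, db, "TATAAT", 6))   # [0, 9]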

  18. Preliminary geologic map of the Oat Mountain 7.5' quadrangle, Southern California: a digital database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, Russell H.

    1995-01-01

    This database, identified as "Preliminary Geologic Map of the Oat Mountain 7.5' Quadrangle, southern California: A Digital Database," has been approved for release and publication by the Director of the USGS. Although this database has been reviewed and is substantially complete, the USGS reserves the right to revise the data pursuant to further analysis and review. This database is released on condition that neither the USGS nor the U. S. Government may be held liable for any damages resulting from its use. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1993). More specific information about the units may be available in the original sources.

  19. Regulations in the field of Geo-Information

    NASA Astrophysics Data System (ADS)

    Felus, Y.; Keinan, E.; Regev, R.

    2013-10-01

    The geomatics profession has gone through a major revolution during the last two decades with the emergence of advanced GNSS, GIS and remote sensing technologies. These technologies have changed the core principles and working procedures of geomatics professionals. For this reason, surveying and mapping regulations, standards and specifications should be updated to reflect these changes. In Israel, the "Survey Regulations" is the principal document that regulates professional activities in four key areas: geodetic control, mapping, cadastre and geographic information systems. Licensed surveyors and mapping professionals in Israel are required to work according to those regulations. This year a new set of regulations has been published, including a few major amendments, as follows. In the Geodesy chapter, horizontal control is officially based on the Israeli network of Continuously Operating GNSS Reference Stations (CORS). The regulations were phrased in a manner that will allow minor datum changes to the CORS stations due to earth crustal movements. Moreover, the regulations permit the use of GNSS for low accuracy height measurements. In the Cadastre chapter, the most critical change is the move to Coordinate Based Cadastre (CBC). Each parcel corner point is ranked according to its quality (accuracy and clarity of definition). The highest ranking for a parcel corner is 1. A point with a rank of 1 is defined by its coordinates alone; any other contradicting evidence is inferior to the coordinate values. Cadastral information is stored and managed via the National Cadastral Databases. In the Mapping and GIS chapter, the traditional paper maps (ranked by scale) are replaced by digital maps or spatial databases. These spatial databases are ranked by their quality level. Quality level is determined (similarly to the ISO 19157 standard) by logical consistency, completeness, positional accuracy, attribute accuracy, temporal accuracy and usability. Metadata is another critical component of any spatial database: every component in a map should have a metadata identification, even if the map was compiled from multiple sources. The regulations permit the use of advanced sensors and mapping techniques, including LIDAR and digital cameras, that have been certified and meet the defined criteria. The article reviews these new regulations and the decisions that led to them.

  20. Performance analysis of different database in new internet mapping system

    NASA Astrophysics Data System (ADS)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. To better handle large volumes of mapping-entry updates and query requests, the Mapping System must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL, and the results show that a Mapping System based on different databases can be adapted to different needs according to the actual situation.
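
    The abstract gives no implementation details, so the following is only a minimal sketch of the kind of comparison described: timing bulk inserts and point lookups of AID-to-RID mapping entries in an in-memory Python dict (standing in for a key-value store such as Redis) versus SQLite from the standard library. All names and sizes are illustrative.

```python
# Minimal sketch (not from the paper): timing bulk insert and point lookups of
# AID -> RID mapping entries in an in-memory dict (a stand-in for a key-value
# store such as Redis) versus SQLite from the Python standard library.
import sqlite3
import time

entries = [(f"aid-{i}", f"rid-{i % 1000}") for i in range(100_000)]  # synthetic mappings

# Key-value style storage
t0 = time.perf_counter()
kv = dict(entries)
lookups = [kv[f"aid-{i}"] for i in range(0, 100_000, 97)]
t_kv = time.perf_counter() - t0

# Relational storage with the lookup key as primary key
t0 = time.perf_counter()
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT)")
con.executemany("INSERT INTO mapping VALUES (?, ?)", entries)
con.commit()
cur = con.cursor()
rows = [cur.execute("SELECT rid FROM mapping WHERE aid = ?", (f"aid-{i}",)).fetchone()[0]
        for i in range(0, 100_000, 97)]
t_sql = time.perf_counter() - t0

print(f"dict: {t_kv:.3f}s, sqlite: {t_sql:.3f}s")
```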

  1. Intrusive Rock Database for the Digital Geologic Map of Utah

    USGS Publications Warehouse

    Nutt, C.J.; Ludington, Steve

    2003-01-01

    Digital geologic maps offer the promise of rapid and powerful answers to geologic questions using Geographic Information System (GIS) software. Using modern GIS and database methods, a specialized derivative map can be easily prepared. An important limitation is often shortcomings in the information provided in the database associated with the digital map, which is frequently based on the legend of the original map. The purpose of this report is to show how the compilation of additional information can, when prepared as a database that can be used with the digital map, be used to create some types of derivative maps that are not possible with the original digital map and database. This Open-File Report consists of computer files with information about intrusive rocks in Utah that can be linked to the Digital Geologic Map of Utah (Hintze and others, 2000), an explanation of how to link the databases and map, and a list of references for the databases. The digital map, which represents the 1:500,000-scale Geologic Map of Utah (Hintze, 1980), can be obtained from the Utah Geological Survey (Map 179DM). Each polygon in the map has a unique identification number. We selected the polygons identified on the geologic map as intrusive rock, and constructed a database (UT_PLUT.xls) that classifies the polygons into plutonic map units (see tables). These plutonic map units are the key information used to relate the compiled information to the polygons on the map. The map includes a few polygons that were coded as intrusive on the state map but are largely volcanic rock; in these cases we note the volcanic rock names (rhyolite and latite) as used in the original sources. Some polygons identified on the digital state map as intrusive rock were misidentified; these polygons are noted in a separate table of the database, along with some information about their true character. Fields may be empty because of a lack of information in the references used or difficulty in finding information. The information in the database is from a variety of sources, including geologic maps at scales ranging from 1:500,000 to 1:24,000, and thesis monographs. The references are shown twice: alphabetically and by region. The digital geologic map of Utah (Hintze and others, 2000) classifies intrusive rocks into only three categories, distinguished by age. They are: Ti, Tertiary intrusive rock; Ji, Upper to Middle Jurassic granite to quartz monzonite; and pCi, Early Proterozoic to Late Archean intrusive rock. Use of the tables provided in this report will permit selection and classification of those rocks by lithology and age. This database is a pilot study by the Survey and Analysis Project of the U.S. Geological Survey to characterize igneous rocks and link them to a digital map. The database, and others like it, will evolve as the project continues and other states are completed. We release this version now as an example, as a reference, and for those interested in Utah plutonic rocks.
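
    The report links the compiled plutonic-unit table to the map polygons through each polygon's unique identification number. The sketch below illustrates that kind of join with pandas; the UT_PLUT.xls file is not read here, and all column names, unit names, and ages are hypothetical placeholders rather than values from the actual database.

```python
# Minimal sketch (column names and values are hypothetical): joining a compiled
# attribute table of plutonic map units to geologic-map polygons through the
# polygons' unique identification numbers, then selecting a derivative subset.
import pandas as pd

# Stand-in for the digital map's polygon attribute table.
polygons = pd.DataFrame({
    "POLY_ID": [101, 102, 103, 104],
    "STATE_UNIT": ["Ti", "Ti", "Ji", "Ti"],
})

# Stand-in for a UT_PLUT-style compilation keyed on the same polygon IDs.
plutons = pd.DataFrame({
    "POLY_ID": [101, 102, 103],
    "PLUTONIC_UNIT": ["pluton A", "pluton B", "pluton C"],
    "LITHOLOGY": ["granodiorite", "granodiorite", "quartz monzonite"],
    "AGE_MA": [35.0, 31.0, 165.0],   # placeholder ages
})

merged = polygons.merge(plutons, on="POLY_ID", how="left")
# Derivative selection by lithology, which the original three-category map cannot support.
granodiorites = merged[merged["LITHOLOGY"] == "granodiorite"]
print(granodiorites[["POLY_ID", "PLUTONIC_UNIT", "AGE_MA"]])
```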

  2. Development of a 2001 National Land Cover Database for the United States

    USGS Publications Warehouse

    Homer, Collin G.; Huang, Chengquan; Yang, Limin; Wylie, Bruce K.; Coan, Michael

    2004-01-01

    Multi-Resolution Land Characterization 2001 (MRLC 2001) is a second-generation Federal consortium designed to create an updated pool of nationwide Landsat 5 and 7 imagery and derive a second-generation National Land Cover Database (NLCD 2001). The objectives of this multi-layer, multi-source database are twofold: first, to provide consistent land cover for all 50 States, and second, to provide a data framework which allows flexibility in developing and applying each independent data component to a wide variety of other applications. Components in the database include the following: (1) normalized imagery for three time periods per path/row, (2) ancillary data, including a 30 m Digital Elevation Model (DEM) and derived slope, aspect and slope position, (3) per-pixel estimates of percent imperviousness and percent tree canopy, (4) 29 classes of land cover data derived from the imagery, ancillary data, and derivatives, and (5) classification rules, confidence estimates, and metadata from the land cover classification. This database is being developed using a Mapping Zone approach, with 66 Zones in the continental United States and 23 Zones in Alaska. Results from three initial mapping Zones show single-pixel land cover accuracies ranging from 73 to 77 percent, imperviousness accuracies ranging from 83 to 91 percent, tree canopy accuracies ranging from 78 to 93 percent, and an estimated 50 percent increase in mapping efficiency over previous methods. The database has now entered the production phase and is being created using extensive partnering in the Federal government, with planned completion by 2006.

  3. Development and characterization of a 3D high-resolution terrain database

    NASA Astrophysics Data System (ADS)

    Wilkosz, Aaron; Williams, Bryan L.; Motz, Steve

    2000-07-01

    A top-level description of methods used to generate elements of a high resolution 3D characterization database is presented. The database elements are defined as ground plane elevation map, vegetation height elevation map, material classification map, discrete man-made object map, and temperature radiance map. The paper will cover data collection by means of aerial photography, techniques of soft photogrammetry used to derive the elevation data, and the methodology followed to generate the material classification map. The discussion will feature the development of the database elements covering Fort Greely, Alaska. The developed databases are used by the US Army Aviation and Missile Command to evaluate the performance of various missile systems.

  4. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
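
    As a rough illustration of the recognition pipeline described (principal component analysis on each modality followed by score-level fusion), here is a minimal sketch on synthetic vectors. It is not the paper's code; the dimensions, noise model, and equal fusion weights are arbitrary assumptions.

```python
# Minimal sketch (not the paper's code): PCA-based matching on two modalities
# (3D shape vectors and color-map vectors) with simple score-level fusion.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, d_shape, d_color = 50, 300, 300
gallery_shape = rng.normal(size=(n_subjects, d_shape))   # one enrolled sample per subject
gallery_color = rng.normal(size=(n_subjects, d_color))
probe_shape = gallery_shape + 0.1 * rng.normal(size=gallery_shape.shape)  # noisy second session
probe_color = gallery_color + 0.1 * rng.normal(size=gallery_color.shape)

def match_scores(gallery, probe, n_components=20):
    """Project into a PCA subspace and return negative Euclidean distances."""
    pca = PCA(n_components=n_components).fit(gallery)
    g, p = pca.transform(gallery), pca.transform(probe)
    d = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1)  # (probe, gallery) distances
    return -d

s_shape = match_scores(gallery_shape, probe_shape)
s_color = match_scores(gallery_color, probe_color)
fused = 0.5 * s_shape + 0.5 * s_color          # weighted-sum fusion of the two modalities

for name, s in [("shape", s_shape), ("color", s_color), ("fused", fused)]:
    rank1 = np.mean(np.argmax(s, axis=1) == np.arange(n_subjects))
    print(f"{name}: rank-1 accuracy = {rank1:.2f}")
```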

  5. Preliminary geologic map of the Piru 7.5' quadrangle, southern California: a digital database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, Russell H.

    1995-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1995). More specific information about the units may be available in the original sources.

  6. Geologic map and map database of the Palo Alto 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Brabb, E.E.; Jones, D.L.; Graymer, R.W.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (pamf.ps, pamf.pdf, pamf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  7. Geologic map and map database of western Sonoma, northernmost Marin, and southernmost Mendocino counties, California

    USGS Publications Warehouse

    Blake, M.C.; Graymer, R.W.; Stamski, R.E.

    2002-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (wsomf.ps, wsomf.pdf, wsomf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  8. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are used as reference positions for the whole radio map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction while guaranteeing localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
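
    The following sketch illustrates the clustering step on synthetic RSSI fingerprints, assuming that the AP-Cluster method is an affinity-propagation-style clustering whose exemplars serve as the representative fingerprints; the data and parameters are invented for illustration only.

```python
# Minimal sketch, assuming "AP-Cluster" refers to affinity-propagation-style
# clustering: group crowdsourced RSSI fingerprints and keep the cluster
# exemplars as representative fingerprints for the radio map.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
n_aps = 6                                               # number of visible Wi-Fi access points
true_centers = rng.uniform(-90, -40, size=(4, n_aps))   # 4 distinct physical spots
fingerprints = np.vstack([c + rng.normal(0, 2.0, size=(30, n_aps)) for c in true_centers])

ap = AffinityPropagation(random_state=0).fit(fingerprints)
representatives = ap.cluster_centers_                   # exemplar fingerprints, one per cluster
print(f"{len(fingerprints)} crowdsourced fingerprints -> {len(representatives)} representatives")
```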

  9. Enhancements to Demilitarization Process Maps Program (ProMap)

    DTIC Science & Technology

    2016-10-14

    map tool, ProMap, was improved by implementing new features, and sharing data with MIDAS and AMDIT databases. Specifically, process efficiency was...improved by 1) providing access to APE information contained in the AMDIT database directly from inside ProMap when constructing a process map, 2...what equipment can be efficiently used to demil a particular munition. Associated with this task was the upgrade of the AMDIT database so that

  10. Quaternary Geology and Liquefaction Susceptibility, San Francisco, California 1:100,000 Quadrangle: A Digital Database

    USGS Publications Warehouse

    Knudsen, Keith L.; Noller, Jay S.; Sowers, Janet M.; Lettis, William R.

    1997-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There are no paper maps included in the Open-File report. The report does include, however, PostScript plot files containing the images of the geologic map sheets with explanations, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously unpublished data, and new mapping by the authors, represents the general distribution of surficial deposits in the San Francisco bay region. Together with the accompanying text file (sf_geo.txt or sf_geo.pdf), it provides current information on Quaternary geology and liquefaction susceptibility of the San Francisco, California, 1:100,000 quadrangle. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller. The content and character of the database, as well as three methods of obtaining the database, are described below.

  11. MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes

    PubMed Central

    Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V.; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J.; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wiśniewski, Jacek R.; Jun, Wang; Mann, Matthias

    2007-01-01

    Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes, including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities and is freely available at http://www.mapuproteome.com using a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU and we plan to add further analysis tools. PMID:17090601

  12. Preliminary Integrated Geologic Map Databases for the United States: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, Rhode Island and Vermont

    USGS Publications Warehouse

    Nicholson, Suzanne W.; Dicken, Connie L.; Horton, John D.; Foose, Michael P.; Mueller, Julia A.L.; Hon, Rudi

    2006-01-01

    The rapid growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national scale digital geologic maps that have standardized information about geologic age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. Although two digital geologic maps (Schruben and others, 1994; Reed and Bush, 2004) of the United States currently exist, their scales (1:2,500,000 and 1:5,000,000) are too general for many regional applications. Most states have digital geologic maps at scales of about 1:500,000, but the databases are not comparably structured and, thus, it is difficult to use the digital databases for more than one state at a time. This report describes the results, for a seven-state region, of an effort by the U.S. Geological Survey to produce a series of integrated and standardized state geologic map databases that cover the entire United States. In 1997, the United States Geological Survey's Mineral Resources Program initiated the National Surveys and Analysis (NSA) Project to develop national digital databases. One primary activity of this project was to compile a national digital geologic map database, utilizing state geologic maps, to support studies at scales in the range of 1:250,000 to 1:1,000,000. To accomplish this, state databases were prepared using a common standard for the database structure, fields, attribution, and data dictionaries. For Alaska and Hawaii new state maps are being prepared, and the preliminary work for Alaska is being released as a series of 1:250,000-scale quadrangle reports. This document provides background information and documentation for the integrated geologic map databases of this report. This report is one of a series of such reports releasing preliminary standardized geologic map databases for the United States. The data products of the project consist of two main parts, the spatial databases and a set of supplemental tables relating to geologic map units. The datasets serve as a data resource to generate a variety of stratigraphic, age, and lithologic maps. This documentation is divided into four main sections: (1) description of the set of data files provided in this report, (2) specifications of the spatial databases, (3) specifications of the supplemental tables, and (4) an appendix containing the data dictionaries used to populate some fields of the spatial database and supplemental tables.

  13. Perils of using speed zone data to assess real-world compliance to speed limits.

    PubMed

    Chevalier, Anna; Clarke, Elizabeth; Chevalier, Aran John; Brown, Julie; Coxon, Kristy; Ivers, Rebecca; Keay, Lisa

    2017-11-17

    Real-world driving studies, including those involving speeding alert devices and autonomous vehicles, can gauge an individual vehicle's speeding behavior by comparing measured speed with mapped speed zone data. However, there are complexities with developing and maintaining a database of mapped speed zones over a large geographic area that may lead to inaccuracies within the data set. When this approach is applied to large-scale real-world driving data or speeding alert device data to determine speeding behavior, these inaccuracies may result in invalid identification of speeding. We investigated speeding events based on service provider speed zone data. We compared service provider speed zone data (Speed Alert by Smart Car Technologies Pty Ltd., Ultimo, NSW, Australia) against a second set of speed zone data (Google Maps Application Programming Interface [API] mapped speed zones). We found a systematic error in the zones where speed limits of 50-60 km/h, typical of local roads, were allocated to high-speed motorways, which produced false speed limits in the speed zone database. The result was detection of false-positive high-range speeding. Through comparison of the service provider speed zone data against a second set of speed zone data, we were able to identify and eliminate data most affected by this systematic error, thereby establishing a data set of speeding events with a high level of sensitivity (a true positive rate of 92% or 6,412/6,960). Mapped speed zones can be a source of error in real-world driving when examining vehicle speed. We explored the types of inaccuracies found within speed zone data and recommend that a second set of speed zone data be utilized when investigating speeding behavior or developing mapped speed zone data to minimize inaccuracy in estimates of speeding.
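
    A minimal sketch of the cross-check idea described above: compare two independently sourced speed limits for each road segment and set aside speeding events detected on segments where the sources disagree. The table layout, column names, and values are hypothetical.

```python
# Minimal sketch (data and column names are hypothetical): cross-checking two
# independently sourced speed-zone values per road segment and discarding
# speeding events detected on segments where the sources disagree.
import pandas as pd

events = pd.DataFrame({
    "segment_id": [1, 2, 3, 4],
    "measured_kmh": [72, 115, 64, 58],
    "provider_limit_kmh": [60, 110, 50, 50],    # e.g. the in-vehicle device's speed-zone map
    "reference_limit_kmh": [60, 110, 100, 50],  # e.g. a second speed-zone source
})

events["limits_agree"] = events["provider_limit_kmh"] == events["reference_limit_kmh"]
events["speeding"] = events["measured_kmh"] > events["provider_limit_kmh"]

valid_speeding = events[events["speeding"] & events["limits_agree"]]
suspect = events[events["speeding"] & ~events["limits_agree"]]   # likely false positives
print(f"{len(valid_speeding)} retained speeding events, {len(suspect)} flagged as suspect")
```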

  14. Algorithms and methodology used in constructing high-resolution terrain databases

    NASA Astrophysics Data System (ADS)

    Williams, Bryan L.; Wilkosz, Aaron

    1998-07-01

    This paper presents a top-level description of methods used to generate high-resolution 3D IR digital terrain databases using soft photogrammetry. The 3D IR database is derived from aerial photography and is made up of digital ground plane elevation map, vegetation height elevation map, material classification map, object data (tanks, buildings, etc.), and temperature radiance map. Steps required to generate some of these elements are outlined. The use of metric photogrammetry is discussed in the context of elevation map development; and methods employed to generate the material classification maps are given. The developed databases are used by the US Army Aviation and Missile Command to evaluate the performance of various missile systems. A discussion is also presented on database certification which consists of validation, verification, and accreditation procedures followed to certify that the developed databases give a true representation of the area of interest, and are fully compatible with the targeted digital simulators.

  15. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set that was introduced by Ilie and Ilie is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, the sensitivity of the results could be improved, compared to the default patterns of the program. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
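
    To make the pattern-based matching concrete, here is a minimal sketch (not rasbhari itself) that counts spaced-word matches between two sequences for a single binary pattern, where '1' marks a match position and '0' a don't-care position.

```python
# Minimal sketch (not rasbhari): counting spaced-word matches between two DNA
# sequences for one binary pattern; '1' = match position, '0' = don't-care.
def spaced_words(seq, pattern):
    """Return the multiset of spaced words of `seq` under `pattern`."""
    match_pos = [i for i, c in enumerate(pattern) if c == "1"]
    words = {}
    for start in range(len(seq) - len(pattern) + 1):
        w = "".join(seq[start + i] for i in match_pos)
        words[w] = words.get(w, 0) + 1
    return words

def spaced_word_matches(seq1, seq2, pattern):
    """Number of position pairs whose spaced words under `pattern` are identical."""
    w1, w2 = spaced_words(seq1, pattern), spaced_words(seq2, pattern)
    return sum(c * w2.get(w, 0) for w, c in w1.items())

pattern = "1100101"   # a spaced seed: match, match, don't-care, don't-care, match, ...
print(spaced_word_matches("ACGTACGTACGT", "ACGAACGTACGA", pattern))
```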

  16. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    USGS Publications Warehouse

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracies of the collected data, bathymetric surface model, and contour map product were 0.67 foot, 0.91 foot, and 1.51 feet, respectively, at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
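
    The accuracy figures above are quoted at the 95 percent confidence level. A common way to obtain such a statistic is the NSSDA-style computation sketched below, in which vertical accuracy is 1.96 times the RMSE of the differences between quality-assurance check points and the modeled surface; the depth values are invented and the formula is a general standard, not necessarily the report's exact procedure.

```python
# Minimal sketch (not the report's software): NSSDA-style vertical accuracy at
# the 95 percent confidence level, from differences between quality-assurance
# check points and the interpolated bathymetric surface.
import numpy as np

# Hypothetical depths, in feet, at the same horizontal positions.
qa_depth      = np.array([12.1, 15.4, 9.8, 20.3, 17.6, 11.2])
surface_depth = np.array([12.4, 15.1, 10.3, 19.8, 18.0, 11.0])

errors = surface_depth - qa_depth
rmse_z = np.sqrt(np.mean(errors ** 2))
accuracy_95 = 1.96 * rmse_z          # NSSDA vertical accuracy statistic
print(f"RMSE = {rmse_z:.2f} ft, vertical accuracy at 95% confidence = {accuracy_95:.2f} ft")
```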

  17. Quality Analysis of Open Street Map Data

    NASA Astrophysics Data System (ADS)

    Wang, M.; Li, Q.; Hu, Q.; Zhou, M.

    2013-05-01

    Crowd-sourced geographic data are open-source geographic data contributed by many non-professionals and provided to the public. Typical crowd-sourced geographic data include GPS track data such as OpenStreetMap, collaborative map data such as Wikimapia, social websites such as Twitter and Facebook, POIs tagged by Jiepang users, and so on. After processing, these data can provide canonical geographic information for the public. Compared with conventional geographic data collection and update methods, crowd-sourced geographic data from non-professionals have the advantages of large data volume, high currency, abundant information and low cost, and have become a research hotspot in international geographic information science in recent years. Large-volume, highly current crowd-sourced geographic data provide a new solution for geospatial database updating, but the quality of data obtained from non-professionals must first be addressed. In this paper, a quality analysis model for OpenStreetMap crowd-sourced geographic data is proposed. Firstly, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Secondly, a quality assessment model for OSM data is presented using three quality elements: completeness, thematic accuracy and positional accuracy. Finally, taking the OSM data of Wuhan as an example, the paper analyses and assesses the quality of OSM data against a 2011 navigation map as reference. The results show that the high-level roads and urban traffic network of the OSM data have high positional accuracy and completeness, so these OSM data can be used for updating an urban road network database.
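
    Two of the quality elements mentioned above can be illustrated with a small sketch using the shapely library: completeness as a length ratio between the OSM and reference road networks, and positional accuracy as the share of OSM road length falling within a buffer around the reference network. The geometries and buffer distance are hypothetical.

```python
# Minimal sketch (coordinates are hypothetical): completeness as a length ratio
# and positional accuracy as the share of OSM road length within a buffer of
# the reference road network, computed with shapely.
from shapely.geometry import LineString
from shapely.ops import unary_union

reference = unary_union([LineString([(0, 0), (100, 0)]), LineString([(0, 50), (100, 50)])])
osm       = unary_union([LineString([(0, 1), (100, 1)])])   # only one road mapped, ~1 m offset

completeness = osm.length / reference.length
buffer_m = 5.0
within_buffer = osm.intersection(reference.buffer(buffer_m)).length / osm.length

print(f"completeness ~ {completeness:.2f}, share within {buffer_m} m of reference ~ {within_buffer:.2f}")
```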

  18. Digital database of the geologic map of the island of Hawai'i [Hawaii

    USGS Publications Warehouse

    Trusdell, Frank A.; Wolfe, Edward W.; Morris, Jean

    2006-01-01

    This online publication (DS 144) provides the digital database for the printed map by Edward W. Wolfe and Jean Morris (I-2524-A; 1996). This digital database contains all the information used to publish U.S. Geological Survey Geologic Investigations Series I-2524-A (available only in paper form; see http://pubs.er.usgs.gov/pubs/i/i2524A). The database contains the distribution and relationships of volcanic and surficial-sedimentary deposits on the Island of Hawai‘i. This dataset represents the geologic history of the five volcanoes that comprise the Island of Hawai‘i: Kohala, Mauna Kea, Hualalai, Mauna Loa, and Kīlauea. This database of the geologic map contributes to understanding the geologic history of the Island of Hawai‘i and provides the basis for understanding long-term volcanic processes in an intra-plate ocean island volcanic system. In addition, the database serves as a basis for producing volcanic hazard assessments for the island and as a base layer for interdisciplinary research. This online publication consists of a digital database of the geologic map, an explanatory pamphlet, a description of map units, a correlation of map units diagram, and images for plotting. Geologic mapping was compiled at a scale of 1:100,000 for the entire mapping area and was compiled as a digital geologic database in ArcInfo GIS format.

  19. Recalculation of regional and detailed gravity database from Slovak Republic and qualitative interpretation of new generation Bouguer anomaly map

    NASA Astrophysics Data System (ADS)

    Pasteka, Roman; Zahorec, Pavol; Mikuska, Jan; Szalaiova, Viktoria; Papco, Juraj; Krajnak, Martin; Kusnirak, David; Panisova, Jaroslava; Vajda, Peter; Bielik, Miroslav

    2014-05-01

    In this contribution, results of the ongoing project "Bouguer anomalies of new generation and the gravimetrical model of Western Carpathians (APVV-0194-10)" are presented. The existing homogenized regional database (212,478 points) was enlarged by approximately 107,500 archive detailed gravity measurements. These added gravity values were measured between 1976 and the present and therefore needed to be unified and reprocessed. The improved positions of more than 8,500 measured points were acquired by digitizing archive maps (we recognized some local errors within particular data sets). Besides the local errors (due to wrong positions, heights or gravity values of measured points), we found some areas of systematic errors, probably due to gravity measurement or processing errors. Some of these were confirmed and subsequently corrected by field measurements within the frame of the current project. Special attention is paid to the recalculation of the terrain corrections: we used newly developed software as well as the latest version of the digital terrain model of Slovakia, DMR-3. The main improvement of the new terrain-correction evaluation algorithm is the possibility of calculating the correction at the real gravimeter position and the use of a 3D polyhedral-body approximation (accepting the spherical approximation of the Earth's curvature). We also performed several tests involving the introduction of non-standard distant relief effects. A new complete Bouguer anomaly map was constructed and transformed by means of higher-derivative operators (tilt derivatives, TDX, theta derivatives and the new TDXAS transformation), using a regularization approach. A new interesting regional lineament of probably neotectonic character was recognized in the new map of complete Bouguer anomalies and was confirmed by in-situ field measurements.
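
    As an illustration of one of the transformations mentioned (the tilt derivative), the sketch below computes it for a synthetic gridded anomaly, with the vertical derivative obtained in the wavenumber domain and the horizontal derivatives by finite differences. This is a textbook formulation, not the project's software, and the grid values are invented.

```python
# Minimal sketch (not the project's software): tilt-derivative transform of a
# gridded Bouguer anomaly; vertical derivative via the wavenumber domain,
# horizontal derivatives via finite differences.
import numpy as np

def tilt_derivative(grid, dx=1.0, dy=1.0):
    """Return arctan(dG/dz / sqrt((dG/dx)^2 + (dG/dy)^2)) for a 2-D anomaly grid."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    dz = np.real(np.fft.ifft2(np.fft.fft2(grid) * k))   # first vertical derivative
    gy, gx = np.gradient(grid, dy, dx)                   # horizontal derivatives
    return np.arctan2(dz, np.sqrt(gx ** 2 + gy ** 2))

# Synthetic anomaly: a single buried-source-like bump on a 64 x 64 grid.
y, x = np.mgrid[0:64, 0:64]
anomaly = 10.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))
tdr = tilt_derivative(anomaly, dx=100.0, dy=100.0)       # 100 m grid spacing
print(tdr.min(), tdr.max())                              # bounded within (-pi/2, pi/2)
```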

  20. 75 FR 10552 - Sixth Meeting-RTCA Special Committee 217: Joint With EUROCAE WG-44 Terrain and Airport Mapping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-08

    ... 217: Joint With EUROCAE WG- 44 Terrain and Airport Mapping Databases AGENCY: Federal Aviation... Airport Mapping Databases. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 217: Joint with EUROCAE WG-44 Terrain and Airport Mapping Databases. DATES: The...

  1. 76 FR 27744 - Eighth Meeting-RTCA Special Committee 217: Joint With EUROCAE WG-44 Terrain and Airport Mapping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-12

    ... Committee 217: Joint With EUROCAE WG-44 Terrain and Airport Mapping Databases AGENCY: Federal Aviation... Airport Mapping Databases. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 217: Joint with EUROCAE WG-44 Terrain and Airport Mapping Databases. DATES: The...

  2. 76 FR 54527 - Ninth Meeting-RTCA Special Committee 217: Joint With EUROCAE WG-44 Terrain and Airport Mapping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-01

    ... 217: Joint With EUROCAE WG- 44 Terrain and Airport Mapping Databases AGENCY: Federal Aviation... Airport Mapping Databases. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 217: Joint with EUROCAE WG-44 Terrain and Airport Mapping Databases. DATES: The...

  3. 76 FR 6179 - Eighth Meeting-RTCA Special Committee 217: Joint With EUROCAE WG-44 Terrain and Airport Mapping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... Committee 217: Joint With EUROCAE WG-44 Terrain and Airport Mapping Databases AGENCY: Federal Aviation... Airport Mapping Databases. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 217: Joint with EUROCAE WG-44 Terrain and Airport Mapping Databases. DATES: The...

  4. Geologic map and map database of northeastern San Francisco Bay region, California, [including] most of Solano County and parts of Napa, Marin, Contra Costa, San Joaquin, Sacramento, Yolo, and Sonoma Counties

    USGS Publications Warehouse

    Graymer, Russell Walter; Jones, David Lawrence; Brabb, Earl E.

    2002-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (nesfmf.ps, nesfmf.pdf, nesfmf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  5. Geology of Point Reyes National Seashore and vicinity, California: a digital database

    USGS Publications Warehouse

    Clark, Joseph C.; Brabb, Earl E.

    1997-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The report does include, however, a PostScript plot file containing an image of the geologic map sheet with explanation, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously published and unpublished data and new mapping by the authors, represents the general distribution of surficial deposits and rock units in Point Reyes and surrounding areas. Together with the accompanying text file (pr-geo.txt or pr-geo.ps), it provides current information on the stratigraphy and structural geology of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:48,000 or smaller.

  6. On the origins of logarithmic number-to-position mapping.

    PubMed

    Dotan, Dror; Dehaene, Stanislas

    2016-11-01

    The number-to-position task, in which children and adults are asked to place numbers on a spatial number line, has become a classic measure of number comprehension. We present a detailed experimental and theoretical dissection of the processing stages that underlie this task. We used a continuous finger-tracking technique, which provides detailed information about the time course of processing stages. When adults map the position of 2-digit numbers onto a line, their final mapping is essentially linear, but intermediate finger locations show a transient logarithmic mapping. We identify the origins of this log effect: small numbers are processed faster than large numbers, so the finger deviates toward the target position earlier for small numbers than for large numbers. When the trajectories are aligned on the finger deviation onset, the log effect disappears. The small-number advantage and the log effect are enhanced in a dual-task setting and are further enhanced when the delay between the 2 tasks is shortened, suggesting that these effects originate from a central stage of quantification and decision making. We also report cases of logarithmic mapping, by children and by a brain-injured individual, which cannot be explained by faster responding to small numbers. We show that these findings are captured by an ideal-observer model of the number-to-position mapping task, comprising 3 distinct stages: a quantification stage, whose duration is influenced by both exact and approximate representations of numerical quantity; a Bayesian accumulation-of-evidence stage, leading to a decision about the target location; and a pointing stage. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. PoMaMo--a comprehensive database for potato genome data.

    PubMed

    Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane

    2005-01-01

    A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker-, sequence- or genotype name, by sequence accession number, gene function, BLAST Hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes.

  8. PoMaMo—a comprehensive database for potato genome data

    PubMed Central

    Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane

    2005-01-01

    A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker-, sequence- or genotype name, by sequence accession number, gene function, BLAST Hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes. PMID:15608284

  9. Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.

    ERIC Educational Resources Information Center

    Johnson, Ian

    TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…

  10. 76 FR 70531 - Tenth Meeting: RTCA Special Committee 217/EUROCAE WG-44: Terrain and Airport Mapping Databases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    ... 217/EUROCAE WG-44: Terrain and Airport Mapping Databases AGENCY: Federal Aviation Administration (FAA... Databases: For the tenth meeting DATES: The meeting will be held December 6-9, 2011, from 9 a.m. to 5 p.m... Mapping Databases. The agenda will include the following: December 6, 2011 Open Plenary Session. Chairman...

  11. Aerial Magnetic, Electromagnetic, and Gamma-ray Survey, Berrien County, Michigan

    USGS Publications Warehouse

    Duval, Joseph S.; Pierce, Herbert A.; Daniels, David L.; Mars, John L.; Webring, Michael W.; Hildenbrand, Thomas G.

    2002-01-01

    This publication includes maps, grids, and flightline databases from a detailed aerial survey, and maps and grids of satellite data, in Berrien County, Michigan. The purpose of the survey was to map aquifers in glacial terrains. This was accomplished by using a DIGHEMVRES multi-coil, multi-frequency electromagnetic system supplemented by a high-sensitivity cesium magnetometer and a 256-channel spectrometer. The information from these sensors was processed to produce maps that display the conductive, magnetic, and radioactive properties of the survey area. A GPS electronic navigation system ensured accurate positioning of the geophysical data. This report also includes data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), which measures thermal emission and reflection data for 14 bands of the spectrum.

  12. Digital Geological Map for Marie Byrd Land, West Antarctica: A resource for investigation of geotectonic frameworks and future glaciological change

    NASA Astrophysics Data System (ADS)

    Siddoway, C. S.; White, T.; Elkind, S.; Cox, S. C.; Lyttle, B. S.; Morin, P. J.

    2016-12-01

    Bedrock exposures are relatively sparse in Marie Byrd Land (MBL), where rock is concealed by the West Antarctic ice sheet, but they provide direct insight into the geological evolution and glacial history of West Antarctica. MBL is tectonically active, as evidenced by Late Pleistocene to Holocene volcanism and 2012 seismicity (3 events, M4.4 to M5.5) at sites beside the Ross Sea. There are geological influences upon the ice sheet, namely subglacial volcanism and associated geothermal flux, fault zone alteration/mineralization, and bedrock roughness. The former may influence the position and velocity of outlet glaciers, and the latter may anchor or accelerate sectors of the ice sheet. To make MBL's geological framework accessible to investigators with diverse research priorities, we are preparing the first digital geological map of MBL by compiling ground-based geological data, incorporating firsthand observations, published geological maps and literature. The map covers an on-continent coastal area of 900,000 km2 between 090°E and 160°E, from 72°S to 80°S, at 1:250,000 scale or better. Exposed rock is delimited by 1,976 polygons occupying 410 km2. Supraglacial features, glacial till, seasonal water and blue ice are also mapped, as a baseline for past and future glaciological change. Rendered in the ArcMap GIS software by Esri, the database employs international GeoSciML data protocols for feature classification and description of rock and moraine polygons from the Antarctic Digital Database (www.add.scar.org), with shape and location adjusted to align with features in Landsat Image Mosaic of Antarctica imagery (lima.usgs.gov) where necessary. The GIS database is attribute-rich and queryable, including links to bibliographic source files for primary literature and published maps. It will soon be available as Google Earth kmz files and an ArcGIS Online map service. An initial application is the interpretation of sub-ice geology for a subglacial geotectonic map of this active region. This work is undertaken as part of ROSETTA-Ice, an integrated systems science investigation of the Ross Ice Shelf that commenced in 2015. The next phases of MBL database development will address ice sheet-ocean interactions near the grounding line, environmental domain analysis, and ecological research.

  13. Spatial digital database for the tectonic map of Southeast Arizona

    USGS Publications Warehouse

    map by Drewes, Harald; digital database by Fields, Robert A.; Hirschberg, Douglas M.; Bolm, Karen S.

    2002-01-01

    A spatial database was created for Drewes' (1980) tectonic map of southeast Arizona; this database supersedes Drewes and others (2001, ver. 1.0). Staff and a contractor at the U.S. Geological Survey in Tucson, Arizona, completed an interim digital geologic map database for the east part of the map in 2001, made revisions to the previously released digital data for the west part of the map (Drewes and others, 2001, ver. 1.0), merged data files for the east and west parts, and added data not previously captured. Digital base map data files (such as topography, roads, towns, rivers and lakes) are not included; they may be obtained from a variety of commercial and government sources. This digital geospatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information in a geographic information system (GIS) for use in spatial analysis. The resulting digital geologic map database can be queried in many ways to produce a variety of geologic maps and derivative products. Because Drewes' (1980) map sheets include additional text and graphics that were not included in this report, scanned images of his maps (i1109_e.jpg, i1109_w.jpg) are included as a courtesy to the reader. This database should not be used or displayed at any scale larger than 1:125,000 (for example, 1:100,000 or 1:24,000). The digital geologic map plot files (i1109_e.pdf and i1109_w.pdf) that are provided herein are representations of the database (see Appendix A). The map area is located in southeastern Arizona (fig. 1). This report describes the map units (from Drewes, 1980), the methods used to convert the geologic map data into a digital format, and the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. The manuscript and digital data review by Helen Kayser (Information Systems Support, Inc.) is greatly appreciated.

  14. [Multiplexing mapping of human cDNAs]. Final report, September 1, 1991--February 28, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Using PCR with automated product analysis, 329 human brain cDNA sequences have been assigned to individual human chromosomes. Primers were designed from single-pass cDNA sequences known as expressed sequence tags (ESTs). Primers were used in PCR reactions, often multiplexed, with DNA from somatic cell hybrid mapping panels as templates. Many of the ESTs mapped match sequence database records. To evaluate these matches, the position of the primers relative to the matching region, the BLAST scores, and the Poisson probability values of the EST/sequence record match were determined. In cases where the gene product stringently identified by the sequence match had already been mapped, the gene locus determined by the EST was consistent with the previous position, which strongly supports the validity of assigning unknown genes to human chromosomes based on EST sequence matches. In the present cases, mapping the ESTs to a chromosome can also be considered to have mapped the known gene product: rolipram-sensitive cAMP phosphodiesterase, chromosome 1; protein phosphatase 2Aβ, chromosome 4; alpha-catenin, chromosome 5; the ELE1 oncogene, chromosome 10q11.2 or q2.1-q23; MXII protein, chromosome 10q24-qter; ribosomal protein L18a homologue, chromosome 14; ribosomal protein L3, chromosome 17; and moesin, Xp11-cen. ESTs were also mapped that were closely related to non-human sequence records; these matches can therefore be considered to identify human counterparts of known gene products, or members of known gene families. Examples include membrane proteins, translation-associated proteins, structural proteins, and enzymes. These data demonstrate that single-pass sequence information is sufficient to design PCR primers useful for assigning cDNA sequences to human chromosomes. When the EST sequence matches previous sequence database records, the chromosome assignment of the EST can be used to make a preliminary assignment of the human gene to a chromosome.

  15. MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes.

    PubMed

    Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wisniewski, Jacek R; Jun, Wang; Mann, Matthias

    2007-01-01

    Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes, including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities and is freely available at http://www.mapuproteome.com using a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU and we plan to add further analysis tools.

  16. Light Detection and Ranging-Based Terrain Navigation: A Concept Exploration

    NASA Technical Reports Server (NTRS)

    Campbell, Jacob; UijtdeHaag, Maarten; vanGraas, Frank; Young, Steve

    2003-01-01

    This paper discusses the use of Airborne Light Detection And Ranging (LiDAR) equipment for terrain navigation. Airborne LiDAR is a relatively new technology used primarily by the geo-spatial mapping community to produce highly accurate and dense terrain elevation maps. In this paper, the term LiDAR refers to a scanning laser ranger rigidly mounted to an aircraft, as opposed to an integrated sensor system that consists of a scanning laser ranger integrated with Global Positioning System (GPS) and Inertial Measurement Unit (IMU) data. Data from the laser range scanner and IMU will be integrated with a terrain database to estimate the aircraft position and data from the laser range scanner will be integrated with GPS to estimate the aircraft attitude. LiDAR data was collected using NASA Dryden's DC-8 flying laboratory in Reno, NV and was used to test the proposed terrain navigation system. The results of LiDAR-based terrain navigation shown in this paper indicate that airborne LiDAR is a viable technology enabler for fully autonomous aircraft navigation. The navigation performance is highly dependent on the quality of the terrain databases used for positioning and therefore high-resolution (2 m post-spacing) data was used as the terrain reference.
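
    The core idea, estimating position by matching measured terrain against a terrain database, can be illustrated with a very reduced sketch: slide a measured along-track elevation profile over a one-dimensional DEM transect and pick the offset with the smallest sum of squared differences. This is not the paper's estimator (which integrates laser, IMU, and GPS data); all values are synthetic.

```python
# Minimal sketch (not the paper's estimator): terrain-referenced positioning by
# sliding a measured along-track elevation profile over a DEM transect and
# picking the offset with the smallest sum of squared differences.
import numpy as np

rng = np.random.default_rng(2)
dem_row = np.cumsum(rng.normal(0, 1.0, size=500))      # synthetic 1-D terrain profile (m)
true_offset = 212
measured = dem_row[true_offset:true_offset + 60] + rng.normal(0, 0.3, size=60)  # noisy LiDAR profile

costs = np.array([np.sum((dem_row[i:i + 60] - measured) ** 2)
                  for i in range(len(dem_row) - 60)])
estimated_offset = int(np.argmin(costs))
print(f"true offset = {true_offset}, estimated offset = {estimated_offset}")
```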

  17. Geographic information systems for mapping the National Exam Result of Junior High School in 2014 at West Java Province

    NASA Astrophysics Data System (ADS)

    Setiawan Abdullah, Atje; Nurani Ruchjana, Budi; Rejito, Juli; Rosadi, Rudi; Candra Permana, Fahmi

    2017-10-01

    National exams at each level of schooling are implemented by the Ministry of Education and Culture for the development of education in Indonesia. The national examinations are centrally evaluated by the National Education Standards Agency, and the implementation of the national exams is expected to describe the success of education at the district, municipal, provincial, or national level. In this study, we evaluate, analyze, and explore the database of national exam results for Junior High Schools in 2014, with the Junior High School (SMP/MTs) as the smallest unit of analysis at the district level. The method used in this study is a data mining approach following the Knowledge Discovery in Databases (KDD) methodology, using descriptive analysis and spatial mapping of the national exams. The classification results of the data mining process applied to the 2014 national exams, using data from 6,878 SMP/MTs in West Java, showed that 81.01% were at a moderate level, while the spatial mapping showed that 36.99% of SMP/MTs in West Java were at an unfavorable level. The evaluation results are visualized graphically using ArcGIS to provide positional information on the quality of education at the municipal, provincial, or national level. The results of this study can be used by management to make decisions to improve educational services based on the national exam database in West Java. Keywords: KDD, spatial mapping, national exam.

  18. Geologic Map of the Tucson and Nogales Quadrangles, Arizona (Scale 1:250,000): A Digital Database

    USGS Publications Warehouse

    Peterson, J.A.; Berquist, J.R.; Reynolds, S.J.; Page-Nedell, S. S.; Digital database by Oland, Gustav P.; Hirschberg, Douglas M.

    2001-01-01

    The geologic map of the Tucson-Nogales 1:250,000 scale quadrangle (Peterson and others, 1990) was digitized by U.S. Geological Survey staff and University of Arizona contractors at the Southwest Field Office, Tucson, Arizona, in 2000 for input into a geographic information system (GIS). The database was created for use as a basemap in a decision support system designed by the National Industrial Minerals and Surface Processes project. The resulting digital geologic map database can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included; they may be obtained from a variety of commercial and government sources. Additionally, point features, such as strike and dip, were not captured from the original paper map and are not included in the database. This database is not meant to be used or displayed at any scale larger than 1:250,000 (for example, 1:100,000 or 1:24,000). The digital geologic map graphics and plot files that are provided in the digital package are representations of the digital database. They are not designed to be cartographic products.

  19. 77 FR 14584 - Eleventh Meeting: RTCA Special Committee 217, Joint With EUROCAE Working Group-44, Terrain and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-12

    ... Committee 217, Joint With EUROCAE Working Group--44, Terrain and Airport Mapping Databases AGENCY: Federal... Special Committee 217, Joint with EUROCAE Working Group--44, Terrain and Airport Mapping Databases... Committee 217, Joint with EUROCAE Working Group--44, Terrain and Airport Mapping Databases. DATES: The...

  20. Geologic Map and Map Database of Eastern Sonoma and Western Napa Counties, California

    USGS Publications Warehouse

    Graymer, R.W.; Brabb, E.E.; Jones, D.L.; Barnes, J.; Nicholson, R.S.; Stamski, R.E.

    2007-01-01

    Introduction This report contains a new 1:100,000-scale geologic map, derived from a set of geologic map databases (Arc-Info coverages) containing information at 1:62,500-scale resolution, and a new description of the geologic map units and structural relations in the map area. Prepared as part of the San Francisco Bay Region Mapping Project, the study area includes the north-central part of the San Francisco Bay region, and forms the final piece of the effort to generate new, digital geologic maps and map databases for an area which includes Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, Solano, and Sonoma Counties. Geologic mapping in Lake County in the north-central part of the map extent was not within the scope of the Project. The map and map database integrate both previously published reports and new geologic mapping and field checking by the authors (see Sources of Data index map on the map sheet or the Arc-Info coverage eswn-so and the textfile eswn-so.txt). This report contains new ideas about the geologic structures in the map area, including the active San Andreas Fault system, as well as the geologic units and their relations. Together, the map (or map database) and the unit descriptions in this report describe the composition, distribution, and orientation of geologic materials and structures within the study area at regional scale. Regional geologic information is important for analysis of earthquake shaking, liquefaction susceptibility, landslide susceptibility, engineering materials properties, mineral resources and hazards, as well as groundwater resources and hazards. These data also assist in answering questions about the geologic history and development of the California Coast Ranges.

  1. Geologic map of the Grand Canyon 30' x 60' quadrangle, Coconino and Mohave Counties, northwestern Arizona

    USGS Publications Warehouse

    Billingsley, G.H.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data as well as new mapping by the author, represents the general distribution of bedrock and surficial deposits in the map area. Together with the accompanying pamphlet, it provides current information on the geologic structure and stratigraphy of the Grand Canyon area. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller.

  2. Building MapObjects attribute field in cadastral database based on the method of Jackson system development

    NASA Astrophysics Data System (ADS)

    Chen, Zhu-an; Zhang, Li-ting; Liu, Lu

    2009-10-01

    ESRI's MapObjects GIS components are used in many cadastral information systems because of their small footprint and flexibility. In such systems, some cadastral information is saved directly in the cadastral database in MapObjects' shapefile format. However, MapObjects does not provide a function for building attribute fields in a map layer's attribute data file in the cadastral database, so users cannot save the results of analysis. This paper designs and implements a function for building attribute fields in MapObjects based on the Jackson System Development method.
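
    MapObjects itself is a proprietary COM component whose API is not documented in this record, so the following is only a conceptual sketch of the operation the abstract describes: adding an attribute field to a shapefile attribute table and writing a computed value into it, with the pure-Python pyshp library as a stand-in.

```python
# Conceptual sketch only: pyshp stands in for the missing MapObjects
# "build attribute field" capability described in the abstract.
import shapefile  # pyshp

src = shapefile.Reader("parcels")         # hypothetical cadastral layer
out = shapefile.Writer("parcels_out", shapeType=src.shapeType)
out.fields = src.fields[1:]               # copy existing attribute fields (skip deletion flag)
out.field("AREA_M2", "N", decimal=2)      # the newly built attribute field

for rec in src.iterShapeRecords():
    out.shape(rec.shape)                  # copy geometry unchanged
    out.record(*rec.record, 0.0)          # append a placeholder analysis result
out.close()
```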

  3. A test of the circumvention-of-limits hypothesis in scientific problem solving: the case of geological bedrock mapping.

    PubMed

    Hambrick, David Z; Libarkin, Julie C; Petcovic, Heather L; Baker, Kathleen M; Elkins, Joe; Callahan, Caitlin N; Turner, Sheldon P; Rench, Tara A; Ladue, Nicole D

    2012-08-01

    Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco Root Mountains of Montana. A Visuospatial Ability × Geological Knowledge interaction was found, such that visuospatial ability positively predicted mapping performance at low, but not high, levels of geological knowledge. This finding suggests that high levels of domain knowledge may sometimes enable circumvention of performance limitations associated with cognitive abilities. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  4. GLIMS Glacier Database: Status and Challenges

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Racoviteanu, A.; Khalsa, S. S.; Armstrong, R.

    2008-12-01

    GLIMS (Global Land Ice Measurements from Space) is an international initiative to map the world's glaciers and to build a GIS database that is usable via the World Wide Web. The GLIMS programme includes 70 institutions, and 25 Regional Centers (RCs), who analyze satellite imagery to map glaciers in their regions of expertise. The analysis results are collected at the National Snow and Ice Data Center (NSIDC) and ingested into the GLIMS Glacier Database. The database contains approximately 80 000 glacier outlines, half the estimated total on Earth. In addition, the database contains metadata on approximately 200 000 ASTER images acquired over glacierized terrain. Glacier data and the ASTER metadata can be viewed and searched via interactive maps at http://glims.org/. As glacier mapping with GLIMS has progressed, various hurdles have arisen that have required solutions. For example, the GLIMS community has formulated definitions for how to delineate glaciers with different complicated morphologies and how to deal with debris cover. Experiments have been carried out to assess the consistency of the database, and protocols have been defined for the RCs to follow in their mapping. Hurdles still remain. In June 2008, a workshop was convened in Boulder, Colorado to address issues such as mapping debris-covered glaciers, mapping ice divides, and performing change analysis using two different glacier inventories. This contribution summarizes the status of the GLIMS Glacier Database and steps taken to ensure high data quality.

  5. The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States

    USGS Publications Warehouse

    Horton, John D.; San Juan, Carma A.; Stoeser, Douglas B.

    2017-06-30

    The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States (https://doi.org/10.5066/F7WH2N65) represents a seamless, spatial database of 48 State geologic maps that range from 1:50,000 to 1:1,000,000 scale. A national digital geologic map database is essential in interpreting other datasets that support numerous types of national-scale studies and assessments, such as those that provide geochemistry, remote sensing, or geophysical data. The SGMC is a compilation of the individual U.S. Geological Survey releases of the Preliminary Integrated Geologic Map Databases for the United States. The SGMC geodatabase also contains updated data for seven States and seven entirely new State geologic maps that have been added since the preliminary databases were published. Numerous errors have been corrected and enhancements added to the preliminary datasets using thorough quality assurance/quality control procedures. The SGMC is not a truly integrated geologic map database because geologic units have not been reconciled across State boundaries. However, the geologic data contained in each State geologic map have been standardized to allow spatial analyses of lithology, age, and stratigraphy at a national scale.

  6. Regional Geologic Map of San Andreas and Related Faults in Carrizo Plain, Temblor, Caliente and La Panza Ranges and Vicinity, California; A Digital Database

    USGS Publications Warehouse

    Dibblee, T. W.; Digital database compiled by Graham, S. E.; Mahony, T.M.; Blissenbach, J.L.; Mariant, J.J.; Wentworth, C.M.

    1999-01-01

    This Open-File Report is a digital geologic map database. The report serves to introduce and describe the digital data. There is no paper map included in the Open-File Report. The report includes PostScript and PDF plot files that can be used to plot images of the geologic map sheet and explanation sheet. This digital map database is prepared from a previously published map by Dibblee (1973). The geologic map database delineates map units that are identified by general age, lithology, and clast size following the stratigraphic nomenclature of the U.S. Geological Survey. For descriptions of the units, their stratigraphic relations, and sources of geologic mapping, consult the explanation sheet (of99-14_4b.ps or of99-14_4d.pdf), or the original published paper map (Dibblee, 1973). The scale of the source map limits the spatial resolution (scale) of the database to 1:125,000 or smaller. For those interested in the geology of Carrizo Plain and vicinity who do not use an ARC/INFO compatible Geographic Information System (GIS), but would like to obtain a paper map and explanation, PDF and PostScript plot files containing map images of the data in the digital database, as well as PostScript and PDF plot files of the explanation sheet and explanatory text, have been included in the database package (please see the section 'Digital Plot Files', page 5). The PostScript plot files require a gzip utility to access them. For those without computer capability, we can provide users with the PostScript or PDF files on tape that can be taken to a vendor for plotting. Paper plots can also be ordered directly from the USGS (please see the section 'Obtaining Plots from USGS Open-File Services', page 5). The content and character of the database, methods of obtaining it, and processes of extracting the map database from the tar (tape archive) file are described herein. The map database itself, consisting of six ARC/INFO coverages, can be obtained over the Internet or by magnetic tape copy as described below. The database was compiled using ARC/INFO, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991, Fitzgibbon, 1991, Wentworth and Fitzgibbon, 1991). The ARC/INFO coverages are stored in uncompressed ARC export format (ARC/INFO version 7.x). All data files have been compressed, and may be uncompressed with gzip, which is available free of charge over the Internet via links from the USGS Public Domain Software page (http://edcwww.cr.usgs.gov/doc/edchome/ndcdb/public.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView.

  7. Spatial Digital Database for the Geologic Map of Oregon

    USGS Publications Warehouse

    Walker, George W.; MacLeod, Norman S.; Miller, Robert J.; Raines, Gary L.; Connors, Katherine A.

    2003-01-01

    Introduction This report describes and makes available a geologic digital spatial database (orgeo) representing the geologic map of Oregon (Walker and MacLeod, 1991). The original paper publication was printed as a single map sheet at a scale of 1:500,000, accompanied by a second sheet containing map unit descriptions and ancillary data. A digital version of the Walker and MacLeod (1991) map was included in Raines and others (1996). The dataset provided by this open-file report supersedes the earlier published digital version (Raines and others, 1996). This digital spatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information for use in spatial analysis in a geographic information system (GIS). This database can be queried in many ways to produce a variety of geologic maps. This database is not meant to be used or displayed at any scale larger than 1:500,000 (for example, 1:100,000). This report describes the methods used to convert the geologic map data into a digital format, describes the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. Scanned images of the printed map (Walker and MacLeod, 1991), their correlation of map units, and their explanation of map symbols are also available for download.

  8. Geologic database for digital geology of California, Nevada, and Utah: an application of the North American Data Model

    USGS Publications Warehouse

    Bedford, David R.; Ludington, Steve; Nutt, Constance M.; Stone, Paul A.; Miller, David M.; Miller, Robert J.; Wagner, David L.; Saucedo, George J.

    2003-01-01

    The USGS is creating an integrated national database for digital state geologic maps that includes stratigraphic, age, and lithologic information. The majority of the conterminous 48 states have digital geologic base maps available, often at scales of 1:500,000. This product is a prototype, and is intended to demonstrate the types of derivative maps that will be possible with the national integrated database. This database permits the creation of a number of types of maps via simple or sophisticated queries, maps that may be useful in a number of areas, including mineral-resource assessment, environmental assessment, and regional tectonic evolution. This database is distributed with three main parts: a Microsoft Access 2000 database containing geologic map attribute data, an Arc/Info (Environmental Systems Research Institute, Redlands, California) Export format file containing points representing designation of stratigraphic regions for the Geologic Map of Utah, and an ArcView 3.2 (Environmental Systems Research Institute, Redlands, California) project containing scripts and dialogs for performing a series of generalization and mineral resource queries. IMPORTANT NOTE: Spatial data for the respective state geologic maps is not distributed with this report. The digital state geologic maps for the states involved in this report are separate products, and two of them are produced by individual state agencies, which may be legally and/or financially responsible for this data. However, the spatial datasets for maps discussed in this report are available to the public. Questions regarding the distribution, sale, and use of individual state geologic maps should be sent to the respective state agency. We do provide suggestions for obtaining and formatting the spatial data to make it compatible with data in this report. See section ‘Obtaining and Formatting Spatial Data’ in the PDF version of the report.

  9. Creating a literature database of low-calorie sweeteners and health studies: evidence mapping.

    PubMed

    Wang, Ding Ding; Shams-White, Marissa; Bright, Oliver John M; Parrott, J Scott; Chung, Mei

    2016-01-05

    Evidence mapping is an emerging tool used to systematically identify, organize and summarize the quantity and focus of scientific evidence on a broad topic, but there are currently no methodological standards. Using the topic of low-calorie sweeteners (LCS) and selected health outcomes, we describe the process of creating an evidence-map database and demonstrate several example descriptive analyses using this database. The process of creating an evidence-map database is described in detail. The steps include: developing a comprehensive literature search strategy, establishing study eligibility criteria and a systematic study selection process, extracting data, developing outcome groups with input from expert stakeholders and tabulating data using descriptive analyses. The database was uploaded onto SRDR™ (Systematic Review Data Repository), an open public data repository. Our final LCS evidence-map database included 225 studies, of which 208 were interventional studies and 17 were cohort studies. An example bubble plot was produced to display the evidence-map data and visualize research gaps according to four parameters: comparison types, population baseline health status, outcome groups, and study sample size. This plot indicated a lack of studies assessing appetite and dietary intake related outcomes using LCS with a sugar intake comparison in people with diabetes. Evidence mapping is an important tool for the contextualization of in-depth systematic reviews within broader literature and identifies gaps in the evidence base, which can be used to inform future research. An open evidence-map database has the potential to promote knowledge translation from nutrition science to policy.
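
    A bubble plot of the kind described, with comparison type and outcome group on the axes and sample size encoded as bubble area, can be sketched with matplotlib. The data points below are invented placeholders, not values from the LCS evidence-map database.

```python
# Illustrative evidence-map bubble plot (invented placeholder data).
import matplotlib.pyplot as plt

comparisons = ["LCS vs sugar", "LCS vs water", "LCS vs nothing"]
outcomes = ["appetite", "dietary intake", "body weight"]
# (comparison index, outcome index, total sample size)
points = [(0, 2, 3400), (1, 0, 250), (2, 1, 900)]

fig, ax = plt.subplots()
ax.scatter([p[0] for p in points], [p[1] for p in points],
           s=[p[2] / 10 for p in points], alpha=0.5)   # bubble area ~ sample size
ax.set_xticks(range(len(comparisons)))
ax.set_xticklabels(comparisons)
ax.set_yticks(range(len(outcomes)))
ax.set_yticklabels(outcomes)
ax.set_xlabel("Comparison type")
ax.set_ylabel("Outcome group")
plt.show()
```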

  10. 77 FR 29749 - Twelfth Meeting: RTCA Special Committee 217, Joint with EUROCAE WG-44, Terrain and Airport...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-18

    ... Committee 217, Joint with EUROCAE WG-44, Terrain and Airport Mapping Databases AGENCY: Federal Aviation... 217, Joint with EUROCAE WG-44, Terrain and Airport Mapping Databases. SUMMARY: The FAA is issuing this..., Terrain and Airport Mapping Databases. DATES: The meeting will be held June 18-22, 2012, from 9:00 a.m.-5...

  11. Database for the geologic map of the Mount Baker 30- by 60-minute quadrangle, Washington (I-2660)

    USGS Publications Warehouse

    Tabor, R.W.; Haugerud, R.A.; Hildreth, Wes; Brown, E.H.

    2006-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Mount Baker 30- by 60-Minute Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the geology at 1:100,000. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  12. Database for the geologic map of the Chelan 30-minute by 60-minute quadrangle, Washington (I-1661)

    USGS Publications Warehouse

    Tabor, R.W.; Frizzell, V.A.; Whetten, J.T.; Waitt, R.B.; Swanson, D.A.; Byerly, G.R.; Booth, D.B.; Hetherington, M.J.; Zartman, R.E.

    2006-01-01

    This digital map database has been prepared by R. W. Tabor from the published Geologic map of the Chelan 30-Minute Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  13. Database for the geologic map of the Snoqualmie Pass 30-minute by 60-minute quadrangle, Washington (I-2538)

    USGS Publications Warehouse

    Tabor, R.W.; Frizzell, V.A.; Booth, D.B.; Waitt, R.B.

    2006-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Snoqualmie Pass 30' X 60' Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  14. Geologic Map of the Wenatchee 1:100,000 Quadrangle, Central Washington: A Digital Database

    USGS Publications Warehouse

    Tabor, R.W.; Waitt, R.B.; Frizzell, V.A.; Swanson, D.A.; Byerly, G.R.; Bentley, R.D.

    2005-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Wenatchee 1:100,000 Quadrangle, Central Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  15. EPA Tribal Areas (4 of 4): Alaska Native Allotments

    EPA Pesticide Factsheets

    This dataset is a spatial representation of the Public Land Survey System (PLSS) in Alaska, generated from land survey records. The data represents a seamless spatial portrayal of native allotment land parcels, their legal descriptions, corner positioning and markings, and survey measurements. This data is intended for mapping purposes only and is not a substitute or replacement for the legal land survey records or other legal documents. Measurement and attribute data are collected from survey records using data entry screens into a relational database. The database design is based upon the FGDC Cadastral Content Data Standard. Corner positions are derived by geodetic calculations using measurement records. Closure and edgematching are applied to produce a seamless dataset. The resultant features do not preserve the original geometry of survey measurements, but the record measurements are reported as attributes. Additional boundary data are derived by spatial capture, protraction and GIS processing. The spatial features are stored and managed within the relational database, with active links to the represented measurement and attribute data.

  16. Preliminary surficial geologic map of the Newberry Springs 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Phelps, G.A.; Bedford, D.R.; Lidke, D.J.; Miller, D.M.; Schmidt, K.M.

    2012-01-01

    The Newberry Springs 30' x 60' quadrangle is located in the central Mojave Desert of southern California. It is split approximately into northern and southern halves by I-40, with the city of Barstow at its western edge and the town of Ludlow near its eastern edge. The map area spans lat 34°30' to 35° N. and long 116° to 117° W. and covers over 1,000 km2. We integrate the results of surficial geologic mapping conducted during 2002-2005 with compilations of previous surficial mapping and bedrock geologic mapping. Quaternary units are subdivided in detail on the map to distinguish variations in age, process of formation, pedogenesis, lithology, and spatial interdependency, whereas pre-Quaternary bedrock units are grouped into generalized assemblages that emphasize their attributes as hillslope-forming materials and sources of parent material for the Quaternary units. The spatial information in this publication is presented in two forms: a spatial database and a geologic map. The geologic map is a view (the display of an extracted subset of the database at a given time) of the spatial database; it highlights key aspects of the database and necessarily does not show all of the data contained therein. The database contains detailed information about Quaternary geologic unit composition, authorship, and notes regarding geologic units, faults, contacts, and local vegetation. The amount of information contained in the database is too large to show on a single map, so a restricted subset of the information was chosen to summarize the overall nature of the geology. Refer to the database for additional information. Accompanying the spatial data are the map documentation and spatial metadata. The map documentation (this document) describes the geologic setting and history of the Newberry Springs map sheet, summarizes the age and physical character of each map unit, and describes principal faults and folds. The Federal Geographic Data Committee (FGDC) compliant metadata provides detailed information about the digital files and file structure of the spatial data.

  17. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods at a competitively faster speed on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
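
    The pipeline described here, multi-scale Hessian-based vesselness followed by thresholding of the vessel map, can be approximated with off-the-shelf tools. The sketch below uses scikit-image's Frangi filter and a global Otsu threshold as stand-ins for the paper's filters and its second-order local entropy thresholding, so it illustrates the general approach rather than the authors' exact algorithm.

```python
# Rough stand-in for the described pipeline: morphological suppression of bright
# structures, multi-scale Hessian vesselness, then a global (Otsu) threshold.
import numpy as np
from skimage import io, morphology
from skimage.filters import frangi, threshold_otsu

rgb = io.imread("fundus.png")                # hypothetical retinal image
green = rgb[..., 1].astype(float) / 255.0    # vessels contrast best in the green channel

# Cap bright structures (optic disc, exudates) at a background level
# estimated by a grayscale morphological opening.
background = morphology.opening(green, morphology.disk(15))
suppressed = np.minimum(green, background)

# Multi-scale vesselness from Hessian eigenvalues; retinal vessels are dark ridges.
vessel_map = frangi(suppressed, sigmas=range(1, 6), black_ridges=True)

# Simple global threshold in place of second-order local entropy thresholding.
mask = vessel_map > threshold_otsu(vessel_map)
mask = morphology.remove_small_objects(mask, min_size=50)
```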

  18. Geologic map of Yosemite National Park and vicinity, California

    USGS Publications Warehouse

    Huber, N.K.; Bateman, P.C.; Wahrhaftig, Clyde

    1989-01-01

    This digital map database represents the general distribution of bedrock and surficial deposits of the Yosemite National Park vicinity. It was produced directly from the file used to create the print version in 1989. The Yosemite National Park region comprises portions of fifteen 7.5-minute quadrangles. The original publication of the map in 1989 included the map, described map units, and provided correlations, as well as a geologic summary and references, all on the same sheet. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:125,000 or smaller.

  19. Digital release of the Alaska Quaternary fault and fold database

    NASA Astrophysics Data System (ADS)

    Koehler, R. D.; Farrell, R.; Burns, P.; Combellick, R. A.; Weakland, J. R.

    2011-12-01

    The Alaska Division of Geological & Geophysical Surveys (DGGS) has designed a Quaternary fault and fold database for Alaska in conformance with standards defined by the U.S. Geological Survey for the National Quaternary fault and fold database. Alaska is the most seismically active region of the United States; however, little information exists on the location, style of deformation, and slip rates of Quaternary faults. Thus, to provide an accurate, user-friendly, reference-based fault inventory to the public, we are producing a digital GIS shapefile of Quaternary fault traces and compiling summary information on each fault. Here, we present relevant information pertaining to the digital GIS shapefile and to the online access and availability of the Alaska database. This database will be useful for engineering geologic studies, geologic, geodetic, and seismic research, and policy planning. The data will also contribute to the fault source database being constructed by the Global Earthquake Model (GEM) Faulted Earth project, which is developing tools to better assess earthquake risk. We derived the initial list of Quaternary active structures from The Neotectonic Map of Alaska (Plafker et al., 1994) and supplemented it with more recent data where available. Due to the limited level of knowledge on Quaternary faults in Alaska, pre-Quaternary fault traces from the Plafker map are shown as a layer in our digital database so that users may view a more complete distribution of mapped faults and to suggest the possibility that some older traces may be active but as yet unstudied. The database will be updated as new information is developed. We selected each fault by reviewing the literature and georegistered the faults from 1:250,000-scale paper maps contained in 1970s-vintage and earlier bedrock maps. However, paper map scales range from 1:20,000 to 1:500,000. Fault parameters in our GIS fault attribute tables include fault name, age, slip rate, slip sense, dip direction, fault line type (i.e., well constrained, moderately constrained, or inferred), and mapped scale. Each fault is assigned a three-integer CODE, based upon age, slip rate, and how well the fault is located. This CODE dictates the line-type for the GIS files. To host the database, we are developing an interactive web-map application with ArcGIS for Server and the ArcGIS API for JavaScript from Environmental Systems Research Institute, Inc. (Esri). The web-map application will present the database through a visible scale range with each fault displayed at the resolution of the original map. Application functionality includes: search by name or location, identification of a fault by manual selection, and choice of base map. Base map options include topographic, satellite imagery, and digital elevation maps available from ArcGIS on-line. We anticipate that the database will be publicly accessible from a portal embedded on the DGGS website by the end of 2011.

  20. Database of the Geologic Map of North America - Adapted from the Map by J.C. Reed, Jr. and others (2005)

    USGS Publications Warehouse

    Garrity, Christopher P.; Soller, David R.

    2009-01-01

    The Geological Society of America's (GSA) Geologic Map of North America (Reed and others, 2005; 1:5,000,000) shows the geology of a significantly large area of the Earth, centered on North and Central America and including the submarine geology of parts of the Atlantic and Pacific Oceans. This map is now converted to a Geographic Information System (GIS) database that contains all geologic and base-map information shown on the two printed map sheets and the accompanying explanation sheet. We anticipate this map database will be revised at some unspecified time in the future, likely through the actions of a steering committee managed by the Geological Society of America (GSA) and staffed by scientists from agencies including, but not limited to, those responsible for the original map compilation (U.S. Geological Survey, Geological Survey of Canada, and Woods Hole Oceanographic Institute). Regarding the use of this product, as noted by the map's compilers: 'The Geologic Map of North America is an essential educational tool for teaching the geology of North America to university students and for the continuing education of professional geologists in North America and elsewhere. In addition, simplified maps derived from the Geologic Map of North America are useful for enlightening younger students and the general public about the geology of the continent.' With publication of this database, the preparation of any type of simplified map is made significantly easier. More important perhaps, the database provides a more accessible means to explore the map information and to compare and analyze it in conjunction with other types of information (for example, land use, soils, biology) to better understand the complex interrelations among factors that affect Earth resources, hazards, ecosystems, and climate.

  1. National Map Data Base On Landslide Prerequisites In Clay and Silt Areas - Development of Prototype

    NASA Astrophysics Data System (ADS)

    Viberg, Leif

    The Swedish Geotechnical Institute (SGI), in co-operation with the Geological Survey of Sweden, Lantmateriet (the national land survey) and the Swedish Rescue Services, has developed a theme database on landslide prerequisites in clay and silt areas. The work was carried out on commission from the Swedish government, and a report with suggestions for production of the database has been delivered to the government. The database is a prototype that has been tested in an area in northern Sweden. The recommended presentation map scale is about 1:50,000, and distribution of the database via the Internet is under discussion. The aim of the database is to serve as a modern planning tool in combination with other databases, e.g. databases of flooding prognoses. Its main use is expected to be in early planning stages, e.g. for new building and infrastructure development and for risk analyses. The database can also be used in more acute situations, e.g. for risk analyses and rescue operations in connection with flooding over large areas. The intended users are municipal and county planners, rescue services, infrastructure planners, consultants and insurance companies. The database is constructed by combining two existing databases: elevation data and soil map data. The investigation area is divided into three zones with different stability criteria: 1. clay and silt in sloping ground or adjoining water; 2. clay and silt in flat ground; 3. rock and soils other than clay and silt. The geometrical and soil criteria for the zones are specified in an algorithm that sorts the area into the different zones, drawing its data from the elevation and soil databases. The investigation area is divided into cells (raster format) with a side length of 5 x 5 m. Several algorithm variants had to be developed before a reasonable calculation time was reached. The theme may be presented on screen or as a map plot. A prototype map, accompanied by a description, has been produced for the test area. It is proposed that the database be produced for landslide-prone areas throughout Sweden, which would require approximately 200-300 map sheets (25 x 25 km).
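
    A minimal sketch of the cell-by-cell zoning, assuming a placeholder 10-degree slope threshold rather than SGI's actual geometric criteria: each 5 x 5 m raster cell is assigned to one of the three zones by combining a soil-type grid with a slope grid derived from the elevation data.

```python
# Sketch of the cell-by-cell zoning (placeholder 10-degree slope criterion and random grids).
import numpy as np

CELL = 5.0                                          # 5 x 5 m raster cells
elevation = np.random.rand(200, 200) * 50.0         # placeholder elevation grid (m)
is_clay_or_silt = np.random.rand(200, 200) > 0.5    # placeholder soil-map grid

# Slope in degrees derived from the elevation grid.
dz_dy, dz_dx = np.gradient(elevation, CELL)
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

zones = np.full(elevation.shape, 3, dtype=np.int8)   # zone 3: rock and other soils
zones[is_clay_or_silt & (slope < 10.0)] = 2          # zone 2: clay/silt, flat ground
zones[is_clay_or_silt & (slope >= 10.0)] = 1         # zone 1: clay/silt, sloping ground
```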

  2. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map-serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real-world health problems.

  3. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint

    PubMed Central

    Zou, Jiaheng

    2018-01-01

    With the development of related applications, indoor positioning techniques have become more and more widely used. Indoor positioning techniques based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism often rely on the physical locations of fingerprint information. The main difficulty in establishing the fingerprint database is obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under a loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information and provides relatively accurate coordinates for the collection of a fingerprint database. In the experiments, the features extracted by the multi-level Fourier transform method proposed in this paper are validated, and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, giving good accuracy. The average error of the trajectory under the loop closure constraint is kept below 2.15 m. PMID:29494542

  4. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint.

    PubMed

    Wang, Yan; Li, Xin; Zou, Jiaheng

    2018-03-01

    With the development of related applications, indoor positioning techniques have become more and more widely used. Indoor positioning techniques based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism often rely on the physical locations of fingerprint information. The main difficulty in establishing the fingerprint database is obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under a loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information and provides relatively accurate coordinates for the collection of a fingerprint database. In the experiments, the features extracted by the multi-level Fourier transform method proposed in this paper are validated, and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, giving good accuracy. The average error of the trajectory under the loop closure constraint is kept below 2.15 m.
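
    To illustrate why a loop-closure constraint suppresses cumulative drift, the toy sketch below solves a one-dimensional pose-graph problem with SciPy: drifting odometry-style constraints are combined with a single loop-closure constraint tying the last pose back to the first. This is only a toy model of the graph-optimization idea, not the authors' foot-mounted IMU algorithm.

```python
# Toy 1-D pose-graph optimization: odometry edges plus one loop-closure edge.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
true_steps = np.r_[np.ones(10), -np.ones(10)]               # walk out and back to the start
odom = true_steps + rng.normal(0.0, 0.05, true_steps.size)  # drifting step measurements

def residuals(poses):
    """Odometry residuals between consecutive poses plus one loop-closure residual."""
    full = np.r_[0.0, poses]                # pose 0 is fixed at the origin
    odom_res = np.diff(full) - odom
    loop_res = full[-1] - full[0]           # loop closure: the trajectory ends where it began
    return np.r_[odom_res, 5.0 * loop_res]  # weight the closure constraint more heavily

x0 = np.cumsum(odom)                        # dead-reckoned initial guess
result = least_squares(residuals, x0)
print("endpoint drift before optimization:", x0[-1])
print("endpoint drift after optimization: ", result.x[-1])
```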

  5. Error and Uncertainty in the Accuracy Assessment of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Sarmento, Pedro Alexandre Reis

    Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database intended to represent the "real" land cover, with the comparison reported through thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves a representation of reality and contain errors due to human uncertainty in assigning the land cover class that best characterizes a certain area, causing bias in the thematic accuracy measures that are reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that allows the integration of the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impacts that this uncertainty may have on the thematic accuracy measures reported to the end users of land cover maps. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we study the utility of fuzzy set theory, more precisely of fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and of its impacts on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that address the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated using a case study in which the accuracy of a land cover map of Continental Portugal derived from the Medium Resolution Imaging Spectrometer (MERIS) is assessed. The results demonstrate that including human uncertainty in reference databases provides much more information about the quality of land cover maps than the traditional approach to accuracy assessment.
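
    As a simplified numerical illustration of the idea, the sketch below treats the count of correctly classified samples as an interval (a crisp value widened by an assumed reference-label uncertainty) and propagates it through the overall-accuracy formula with interval arithmetic. The confusion matrix and the uncertainty width are invented; full fuzzy arithmetic on fuzzy intervals would follow the same pattern using alpha-cuts.

```python
# Interval-arithmetic sketch of accuracy assessment under reference-label uncertainty.
# The confusion matrix and the +/- uncertainty per class are invented placeholders.
import numpy as np

confusion = np.array([[50.0, 4.0, 1.0],
                      [6.0, 40.0, 3.0],
                      [2.0, 5.0, 39.0]])
u = 2.0                                   # assumed +/- uncertainty on each diagonal count

diag = np.trace(confusion)
total = confusion.sum()
n_classes = confusion.shape[0]

# Interval for the number of correctly classified samples (total kept fixed for simplicity).
correct_lo, correct_hi = diag - u * n_classes, diag + u * n_classes
print(f"crisp overall accuracy    : {diag / total:.3f}")
print(f"interval overall accuracy : [{correct_lo / total:.3f}, {correct_hi / total:.3f}]")
```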

  6. Plant Genome Resources at the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Smith-White, Brian; Chetvernin, Vyacheslav; Resenchuk, Sergei; Dombrowski, Susan M.; Pechous, Steven W.; Tatusova, Tatiana; Ostell, James

    2005-01-01

    The National Center for Biotechnology Information (NCBI) integrates data from more than 20 biological databases through a flexible search and retrieval system called Entrez. A core Entrez database, Entrez Nucleotide, includes GenBank and is tightly linked to the NCBI Taxonomy database, the Entrez Protein database, and the scientific literature in PubMed. A suite of more specialized databases for genomes, genes, gene families, gene expression, gene variation, and protein domains dovetails with the core databases to make Entrez a powerful system for genomic research. Linked to the full range of Entrez databases is the NCBI Map Viewer, which displays aligned genetic, physical, and sequence maps for eukaryotic genomes including those of many plants. A specialized plant query page allows maps from all plant genomes covered by the Map Viewer to be searched in tandem to produce a display of aligned maps from several species. PlantBLAST searches against the sequences shown in the Map Viewer allow BLAST alignments to be viewed within a genomic context. In addition, precomputed sequence similarities, such as those for proteins offered by BLAST Link, enable fluid navigation from unannotated to annotated sequences, quickening the pace of discovery. NCBI Web pages for plants, such as Plant Genome Central, complete the system by providing centralized access to NCBI's genomic resources as well as links to organism-specific Web pages beyond NCBI.
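
    Programmatic access to the Entrez system described above is commonly done through Biopython's Bio.Entrez module; the sketch below searches Entrez Nucleotide for a plant gene and follows Entrez links to related protein records. The query term and e-mail address are placeholders.

```python
# Minimal Entrez query via Biopython (search term and e-mail are placeholders).
from Bio import Entrez

Entrez.email = "you@example.org"          # NCBI asks for a contact address

# Search Entrez Nucleotide for an Arabidopsis gene of interest.
handle = Entrez.esearch(db="nucleotide",
                        term="Arabidopsis thaliana[Organism] AND flowering locus C",
                        retmax=5)
search = Entrez.read(handle)
handle.close()
print("nucleotide IDs:", search["IdList"])

# Follow Entrez links from the first nucleotide record to related protein records.
if search["IdList"]:
    handle = Entrez.elink(dbfrom="nucleotide", db="protein", id=search["IdList"][0])
    links = Entrez.read(handle)
    handle.close()
    print(links[0].get("LinkSetDb", []))
```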

  7. Mapping the influence of the deep Nazca slab on the geometry of the 660-km discontinuity beneath stable South America

    NASA Astrophysics Data System (ADS)

    Bianchi, M. B. D.; Assumpcao, M.; Julià, J.

    2017-12-01

    The fate of the deeply subducted Nazca plate is poorly mapped under stable South America. Transition zone thickness and position are greatly dependent on mantle temperature and so are influenced by the position of the colder Nazca plate. We use a database of 35,000 LQT-deconvolved receiver function traces to image the mantle transition zone and other upper mantle discontinuities under different terranes of the stable South American continent. Data from the entire Brazilian Seismographic Network database, consisting of more than 80 broadband stations supplemented by 35 temporary stations deployed in western Brazil, Argentina, Paraguay, Bolivia and Uruguay, were processed. Our results indicate that upper mantle velocities are faster than average under stable cratons and that most of the discontinuities are positioned with only small variations with respect to nominal depths, except where the Nazca plate interacts with the transition zone. Under the Chaco-Pantanal basin the Nazca plate appears to be trapped in the transition zone for more than 1000 km, with variations of up to 30 km in the topography of the 660 km discontinuity under this region, consistent with global tomographic models. Additional results obtained from SS precursor analysis of South Sandwich Islands teleseismic events recorded at USArray stations indicate that variations in transition zone thickness occur where the Nazca plate interacts with the upper mantle discontinuities in the northern part of the stable South American continent.

  8. Surficial geologic map of the Amboy 30' x 60' quadrangle, San Bernardino County, California

    USGS Publications Warehouse

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2010-01-01

    The surficial geologic map of the Amboy 30' x 60' quadrangle presents characteristics of surficial materials for an area of approximately 5,000 km2 in the eastern Mojave Desert of southern California. This map consists of new surficial mapping conducted between 2000 and 2007, as well as compilations from previous surficial mapping. Surficial geologic units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects following deposition, and, where appropriate, the lithologic nature of the material. Many physical properties were noted and measured during the geologic mapping. This information was used to classify surficial deposits and to understand their ecological importance. We focus on physical properties that drive hydrologic, biologic, and physical processes such as particle-size distribution (PSD) and bulk density. The database contains point data representing locations of samples for both laboratory-determined physical properties and semiquantitative field-based information. We include the locations of all field observations and note the type of information collected in the field to assist in assessing the quality of the mapping. The publication is separated into three parts: documentation, spatial data, and printable map graphics of the database. Documentation includes this pamphlet, which provides a discussion of the surficial geology and units and the map. Spatial data are distributed as an ArcGIS Geodatabase in Microsoft Access format and are accompanied by a readme file, which describes the database contents, and FGDC metadata for the spatial map information. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files that provide a view of the spatial database at the mapped scale.

  9. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    NASA Astrophysics Data System (ADS)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

    Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, soon could be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends to the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.

  10. Database for the geologic map of the Sauk River 30-minute by 60-minute quadrangle, Washington (I-2592)

    USGS Publications Warehouse

    Tabor, R.W.; Booth, D.B.; Vance, J.A.; Ford, A.B.

    2006-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Sauk River 30- by 60 Minute Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled most Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  11. USGS national surveys and analysis projects: Preliminary compilation of integrated geological datasets for the United States

    USGS Publications Warehouse

    Nicholson, Suzanne W.; Stoeser, Douglas B.; Wilson, Frederic H.; Dicken, Connie L.; Ludington, Steve

    2007-01-01

    The growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national digital geologic maps attributed with age and rock type information. Such spatial data can be conveniently used to generate derivative maps for purposes that include mineral-resource assessment, metallogenic studies, tectonic studies, human health and environmental research. In 1997, the United States Geological Survey’s Mineral Resources Program initiated an effort to develop national digital databases for use in mineral resource and environmental assessments. One primary activity of this effort was to compile a national digital geologic map database, utilizing state geologic maps, to support mineral resource studies in the range of 1:250,000- to 1:1,000,000-scale. Over the course of the past decade, state databases were prepared using a common standard for the database structure, fields, attributes, and data dictionaries. As of late 2006, standardized geological map databases for all conterminous (CONUS) states have been available on-line as USGS Open-File Reports. For Alaska and Hawaii, new state maps are being prepared, and the preliminary work for Alaska is being released as a series of 1:500,000-scale regional compilations. See below for a list of all published databases.

  12. A spatial national health facility database for public health sector planning in Kenya in 2008.

    PubMed

    Noor, Abdisalan M; Alegana, Victor A; Gething, Peter W; Snow, Robert W

    2009-03-06

    Efforts to tackle the enormous burden of ill-health in low-income countries are hampered by weak health information infrastructures that do not support appropriate planning and resource allocation. For health information systems to function well, a reliable inventory of health service providers is critical. The spatial referencing of service providers to allow their representation in a geographic information system is vital if the full planning potential of such data is to be realized. A disparate series of contemporary lists of health service providers were used to update a public health facility database of Kenya last compiled in 2003. These new lists were derived primarily through the national distribution of antimalarial and antiretroviral commodities since 2006. A combination of methods, including global positioning systems, was used to map service providers. These spatially-referenced data were combined with high-resolution population maps to analyze disparity in geographic access to public health care. The updated 2008 database contained 5,334 public health facilities (67% ministry of health; 28% mission and nongovernmental organizations; 2% local authorities; and 3% employers and other ministries). This represented an overall increase of 1,862 facilities compared to 2003. Most of the additional facilities belonged to the ministry of health (79%) and the majority were dispensaries (91%). 93% of the health facilities were spatially referenced, 38% using global positioning systems compared to 21% in 2003. 89% of the population was within 5 km Euclidean distance to a public health facility in 2008 compared to 71% in 2003. Over 80% of the population outside 5 km of public health service providers was in the sparsely settled pastoralist areas of the country. We have shown that, with concerted effort, a relatively complete inventory of mapped health services is possible with enormous potential for improving planning. Expansion in public health care in Kenya has resulted in significant increases in geographic access although several areas of the country need further improvements. This information is key to future planning and with this paper we have released the digital spatial database in the public domain to assist the Kenyan Government and its partners in the health sector.
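
    The geographic-access figures quoted above (the share of the population within 5 km Euclidean distance of a facility) can be computed from a facility point layer and a gridded population map. The sketch below does this with a k-d tree on projected coordinates, using invented coordinates rather than the actual Kenyan data.

```python
# Sketch: percentage of population within 5 km (Euclidean) of the nearest facility.
# Coordinates are invented and assumed to be in a metric projection (e.g. UTM).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
facilities = rng.uniform(0, 100_000, size=(200, 2))    # facility x/y in metres
pop_cells = rng.uniform(0, 100_000, size=(5_000, 2))   # population-grid cell centres
pop_counts = rng.integers(0, 500, size=5_000)          # people per cell

dist, _ = cKDTree(facilities).query(pop_cells)          # distance to the nearest facility
within_5km = dist <= 5_000
coverage = pop_counts[within_5km].sum() / pop_counts.sum()
print(f"population within 5 km of a facility: {coverage:.1%}")
```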

  13. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    Several geochemical databases are now available on the Internet. One of the main peculiarities of the geochemical information stored in them is that each sample carries geographical coordinates. As a rule, the software behind these databases uses the spatial information only for user-interface search procedures. On the other hand, GIS software (Geographical Information System software), for example ARC/INFO, which is used for creating and analysing special geological, geochemical and geophysical e-maps, is deeply involved with the geographical coordinates of samples. We have joined the capabilities of GIS systems and a relational geochemical database through special software. Our geochemical information system was created at the Vernadsky State Geological Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS information content consists of e-map covers of the globe. Among these maps are Atlantic Ocean covers: a gravity map (with a 2'' grid), oceanic bottom heat flow, altimetric maps, seismic activity, a tectonic map and a geological map. Combining this information content makes it possible to create new geochemical maps and to combine spatial analysis with numerical geochemical modelling of volcanic processes in an ocean segment. We have tested the information system using thick-client technology. The interface between the GIS system ArcView and the database resides in a special sequence of multiple SQL queries. The result of these queries is a simple DBF file with geographical coordinates, which is used at the moment of creating geochemical and other special e-maps of the oceanic region. We used a more complex method for the geophysical data: from ArcView we created a grid cover for the polygon-based spatial geophysical information.

  14. Database for the geologic map of upper Eocene to Holocene volcanic and related rocks in the Cascade Range, Washington

    USGS Publications Warehouse

    Barron, Andrew D.; Ramsey, David W.; Smith, James G.

    2014-01-01

    This digital database contains information used to produce the geologic map published as Sheet 1 in U.S. Geological Survey Miscellaneous Investigations Series Map I-2005. (Sheet 2 of Map I-2005 shows sources of geologic data used in the compilation and is available separately). Sheet 1 of Map I-2005 shows the distribution and relations of volcanic and related rock units in the Cascade Range of Washington at a scale of 1:500,000. This digital release is produced from stable materials originally compiled at 1:250,000 scale that were used to publish Sheet 1. The database therefore contains more detailed geologic information than is portrayed on Sheet 1. This is most noticeable in the database as expanded polygons of surficial units and the presence of additional strands of concealed faults. No stable compilation materials exist for Sheet 1 at 1:500,000 scale. The main component of this digital release is a spatial database prepared using geographic information systems (GIS) applications. This release also contains links to files to view or print the map sheet, main report text, and accompanying mapping reference sheet from Map I-2005. For more information on volcanoes in the Cascade Range in Washington, Oregon, or California, please refer to the U.S. Geological Survey Volcano Hazards Program website.

  15. Visualizing the semantic content of large text databases using text maps

    NASA Technical Reports Server (NTRS)

    Combs, Nathan

    1993-01-01

    A methodology for generating text map representations of the semantic content of text databases is presented. Text maps provide a graphical metaphor for conceptualizing and visualizing the contents and data interrelationships of large text databases. Described are a set of experiments conducted against the TIPSTER corpora of Wall Street Journal articles. These experiments provide an introduction to current work in the representation and visualization of documents by way of their semantic content.

  16. Cartographic analyses of geographic information available on Google Earth Images

    NASA Astrophysics Data System (ADS)

    Oliveira, J. C.; Ramos, J. R.; Epiphanio, J. C.

    2011-12-01

    The purpose was to evaluate the planimetric accuracy of the satellite images available in the Google Earth database. The images cover the vicinity of the Federal University of Viçosa, Minas Gerais, Brazil. The methodology evaluated the geographic information of three groups of images, defined according to the level of detail (zoom) presented on screen. These groups were labeled Zoom 1000 (a single image for the entire study area), Zoom 100 (a mosaic of 73 images) and Zoom 100 with geometric correction (the same mosaic after a geometric correction applied through control points). For each group of images the cartographic accuracy was assessed using statistical analyses and the parameters of Brazilian legislation for planimetric mapping. For this evaluation, 22 points were identified in each group of images, and the coordinates of each point were compared with the field coordinates obtained by GPS (Global Positioning System). Table 1 shows the results for accuracy (based on a threshold equal to 0.5 mm * mapping scale) and for tendency (abscissa and ordinate) between the image coordinates and the field coordinates. The geometric correction applied to the Zoom 100 group reduced the tendencies identified earlier, and the statistical tests indicated that the data are usable for mapping at a scale of 1/5000 with error smaller than 0.5 mm * scale. The analyses confirmed the cartographic quality of the data provided by Google, as well as the possibility of reducing the positional divergences present in the data. It can be concluded that geographic information can be obtained from the database available on Google Earth; however, the level of detail (zoom) used at the time of viewing and capturing information on screen influences the cartographic quality of the mapping. Despite the cartographic and thematic potential of the database, it is important to note that both the software and the data distributed by Google Earth are subject to use and distribution policies.
    Table 1 - PLANIMETRIC ANALYSIS
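
    The accuracy test described above compares check-point coordinates read from the imagery with GPS field coordinates against a 0.5 mm * map-scale tolerance. The following sketch is not the authors' code; it only illustrates, under the stated assumptions (coordinates in metres, planimetric error summarized by RMSE and mean bias), how such a test can be computed.

      # Minimal sketch (not the authors' code): planimetric RMSE and mean bias
      # of check points, tested against a 0.5 mm * map-scale tolerance.
      import math
      from statistics import mean

      def planimetric_check(image_xy, field_xy, scale_denominator):
          """image_xy, field_xy: lists of (E, N) tuples in metres; map scale 1:scale_denominator."""
          dx = [i[0] - f[0] for i, f in zip(image_xy, field_xy)]
          dy = [i[1] - f[1] for i, f in zip(image_xy, field_xy)]
          rmse = math.sqrt(mean(ex ** 2 + ey ** 2 for ex, ey in zip(dx, dy)))
          tolerance = 0.0005 * scale_denominator   # 0.5 mm at map scale, in metres
          return {
              "rmse_m": rmse,
              "bias_E_m": mean(dx),                # systematic tendency in easting
              "bias_N_m": mean(dy),                # systematic tendency in northing
              "within_tolerance": rmse <= tolerance,
          }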

  17. Spatial digital database of the geologic map of Catalina Core Complex and San Pedro Trough, Pima, Pinal, Gila, Graham, and Cochise counties, Arizona

    USGS Publications Warehouse

    Dickinson, William R.; digital database by Hirschberg, Douglas M.; Pitts, G. Stephen; Bolm, Karen S.

    2002-01-01

    The geologic map of Catalina Core Complex and San Pedro Trough by Dickinson (1992) was digitized for input into a geographic information system (GIS) by the U.S. Geological Survey staff and contractors in 2000-2001. This digital geospatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information in a geographic information system (GIS) for use in spatial analysis. The resulting digital geologic map database can be queried in many ways to produce a variety of geologic maps and derivative products. Digital base map data (topography, roads, towns, rivers, lakes, and so forth) are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:125,000 (for example, 1:100,000 or 1:24,000). The digital geologic map plot files that are provided herein are representations of the database. The map area is located in southern Arizona. This report lists the geologic map units, describes the methods used to convert the geologic map data into a digital format and the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. The manuscript and digital data review by Lorre Moyer (USGS) is greatly appreciated.

  18. Terrestrial Sediments of the Earth: Development of a Global Unconsolidated Sediments Map Database (GUM)

    NASA Astrophysics Data System (ADS)

    Börker, J.; Hartmann, J.; Amann, T.; Romero-Mujalli, G.

    2018-04-01

    Mapped unconsolidated sediments cover half of the global land surface. They are of considerable importance for many Earth surface processes like weathering, hydrological fluxes or biogeochemical cycles. Ignoring their characteristics or spatial extent may lead to misinterpretations in Earth System studies. Therefore, a new Global Unconsolidated Sediments Map database (GUM) was compiled, using regional maps specifically representing unconsolidated and Quaternary sediments. The new GUM database provides insights into the regional distribution of unconsolidated sediments and their properties. The GUM comprises 911,551 polygons and describes not only sediment types and subtypes, but also parameters like grain size, mineralogy, age and thickness where available. Previous global lithological maps or databases lacked detail for reported unconsolidated sediment areas or missed large areas, and reported a global coverage of 25 to 30%, considering the ice-free land area. Here, alluvial sediments cover about 23% of the mapped total ice-free area, followed by aeolian sediments (˜21%), glacial sediments (˜20%), and colluvial sediments (˜16%). A specific focus during the creation of the database was on the distribution of loess deposits, since loess is highly reactive and relevant to understand geochemical cycles related to dust deposition and weathering processes. An additional layer compiling pyroclastic sediments is added, which merges consolidated and unconsolidated pyroclastic sediments. The compilation shows latitudinal abundances of sediment types related to climate of the past. The GUM database is available at the PANGAEA database (https://doi.org/10.1594/PANGAEA.884822).

  19. Digital geomorphological landslide hazard mapping of the Alpago area, Italy

    NASA Astrophysics Data System (ADS)

    van Westen, Cees J.; Soeters, Rob; Sijmons, Koert

    Large-scale geomorphological maps of mountainous areas are traditionally made using complex symbol-based legends. They can serve as excellent "geomorphological databases", from which an experienced geomorphologist can extract a large amount of information for hazard mapping. However, these maps are not designed to be used in combination with a GIS, due to their complex cartographic structure. In this paper, two methods are presented for digital geomorphological mapping at large scales using GIS and digital cartographic software. The methods are applied to an area with a complex geomorphological setting in the Borsoia catchment, located in the Alpago region, near Belluno in the Italian Alps. The GIS database set-up is presented with an overview of the data layers that have been generated and how they are interrelated. The GIS database was also converted into a paper map, using a digital cartographic package. The resulting large-scale geomorphological hazard map is attached. The resulting GIS database and cartographic product can be used to analyse the hazard type and hazard degree for each polygon, and to find the reasons for the hazard classification.

  20. Landscape features, standards, and semantics in U.S. national topographic mapping databases

    USGS Publications Warehouse

    Varanka, Dalia

    2009-01-01

    The objective of this paper is to examine the contrast between local, field-surveyed topographical representation and feature representation in digital, centralized databases and to clarify their ontological implications. The semantics of these two approaches are contrasted by examining the categorization of features by subject domains inherent to national topographic mapping. When comparing five USGS topographic mapping domain and feature lists, results indicate that multiple semantic meanings and ontology rules were applied to the initial digital database, but were lost as databases became more centralized at national scales, and common semantics were replaced by technological terms.

  1. Identification of human candidate genes for male infertility by digital differential display.

    PubMed

    Olesen, C; Hansen, C; Bendsen, E; Byskov, A G; Schwinger, E; Lopez-Pajares, I; Jensen, P K; Kristoffersson, U; Schubert, R; Van Assche, E; Wahlstroem, J; Lespinasse, J; Tommerup, N

    2001-01-01

    Evidence for the importance of genetic factors in male fertility is accumulating. In the literature and the Mendelian Cytogenetics Network database, 265 cases of infertile males with balanced reciprocal translocations have been described. The candidacy for infertility of 14 testis-expressed transcripts (TETs) was examined by comparing their chromosomal mapping position to the position of balanced reciprocal translocation breakpoints found in the 265 infertile males. The 14 TETs were selected by using digital differential display (electronic subtraction) to search for apparently testis-specific transcripts in the TIGR database. The testis specificity of the 14 TETs was further examined by reverse transcription-polymerase chain reaction (RT-PCR) on adult and fetal tissues showing that four TETs (TET1 to TET4) were testis-expressed only, six TETs (TET5 to TET10) appeared to be differentially expressed and the remaining four TETs (TET11 to TET14) were ubiquitously expressed. Interestingly, the two testis-expressed-only transcripts, TET1 and TET2, mapped to chromosomal regions where seven and six translocation breakpoints have been reported in infertile males, respectively. Furthermore, one ubiquitously, but predominantly testis-expressed, transcript, TET11, mapped to 1p32-33, where 13 translocation breakpoints have been found in infertile males. Interestingly, the mouse mutation, skeletal fusions with sterility, sks, maps to the syntenic region in the mouse genome. Another transcript, TET7, was the human homologue of rat Tpx-1, which functions in the specific interaction of spermatogenic cells with Sertoli cells. TPX-1 maps to 6p21 where three cases of chromosomal breakpoints in infertile males have been reported. Finally, TET8 was a novel transcript which in the fetal stage is testis-specific, but in the adult is expressed in multiple tissues, including testis. We named this novel transcript fetal and adult testis-expressed transcript (FATE).

  2. Preliminary Geologic Map of the Topanga 7.5' Quadrangle, Southern California: A Digital Database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, R.H.

    1995-01-01

    INTRODUCTION This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping, consult Yerkes and Campbell (1994). More specific information about the units may be available in the original sources. The content and character of the database and methods of obtaining it are described herein. The geologic map database itself, consisting of three ARC coverages and one base layer, can be obtained over the Internet or by magnetic tape copy as described below. The processes of extracting the geologic map database from the tar file, and importing the ARC export coverages (procedure described herein), will result in the creation of an ARC workspace (directory) called 'topnga.' The database was compiled using ARC/INFO version 7.0.3, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991; Fitzgibbon, 1991; Wentworth and Fitzgibbon, 1991). It is stored in uncompressed ARC export format (ARC/INFO version 7.x) in a compressed UNIX tar (tape archive) file. The tar file was compressed with gzip, and may be uncompressed with gzip, which is available free of charge via the Internet from the gzip Home Page (http://w3.teaser.fr/~jlgailly/gzip). A tar utility is required to extract the database from the tar file. This utility is included in most UNIX systems, and can be obtained free of charge via the Internet from Internet Literacy's Common Internet File Formats Webpage (http://www.matisse.net/files/formats.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView (version 1.0 for Windows 3.1 to 3.11 is available for free from ESRI's web site: http://www.esri.com). This release differs from the original digital database in three respects. 1. Different base layer - The original digital database included separates clipped out of the Los Angeles 1:100,000 sheet. This release includes a vectorized scan of a scale-stable negative of the Topanga 7.5 minute quadrangle. 2. Map projection - The files in the original release were in polyconic projection. The projection used in this release is state plane, which allows for the tiling of adjacent quadrangles. 3. File compression - The files in the original release were compressed with UNIX compression. The files in this release are compressed with gzip.
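
    The distribution format described above (a gzip-compressed UNIX tar file that unpacks into the 'topnga' workspace) can also be unpacked without separate gzip and tar utilities. The snippet below is a minimal sketch using Python's standard library; the archive file name is illustrative, not taken from the report.

      # Minimal sketch using Python's standard library; the archive file name
      # below is illustrative, not taken from the report.
      import tarfile

      def extract_workspace(archive="topnga.tar.gz", dest="."):
          """Uncompress the gzip-ed tar file and extract the ARC workspace directory."""
          with tarfile.open(archive, mode="r:gz") as tar:   # gzip + tar in one pass
              tar.extractall(path=dest)

      extract_workspace()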

  3. A New Global Open Source Marine Hydrocarbon Emission Site Database

    NASA Astrophysics Data System (ADS)

    Onyia, E., Jr.; Wood, W. T.; Barnard, A.; Dada, T.; Qazzaz, M.; Lee, T. R.; Herrera, E.; Sager, W.

    2017-12-01

    Hydrocarbon emission sites (e.g. seeps) discharge large volumes of fluids and gases into the oceans that are not only important for biogeochemical budgets, but also support abundant chemosynthetic communities. Documenting the locations of modern emissions is a first step towards understanding and monitoring how they affect the global state of the seafloor and oceans. Currently, no global open source (i.e. non-proprietary) detailed maps of emission sites are available. As a solution, we have created a database that is housed within an Excel spreadsheet and use the latest versions of Earthpoint and Google Earth for position coordinate conversions and data mapping, respectively. To date, approximately 1,000 data points have been collected from referenceable sources across the globe, and we are continually expanding the dataset. Due to the variety of spatial extents encountered, to identify each site we used two different methods: 1) point (x, y, z) locations for individual sites; and 2) delineation of areas where sites are clustered. Certain well-known areas, such as the Gulf of Mexico and the Mediterranean Sea, have a greater abundance of information; whereas significantly less information is available in other regions due to the absence of emission sites, lack of data, or because the existing data is proprietary. Although the geographical extent of the data is currently restricted to regions where the most data is publicly available, as the database matures, we expect to have more complete coverage of the world's oceans. This database is an information resource that consolidates and organizes the existing literature on hydrocarbons released into the marine environment, thereby providing a comprehensive reference for future work. We expect that the availability of seafloor hydrocarbon emission maps will benefit scientific understanding of hydrocarbon-rich areas as well as potentially aiding hydrocarbon exploration and environmental impact assessments.
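
    Because the database lives in a spreadsheet and is mapped in Google Earth, one practical step is converting exported rows into KML placemarks. The sketch below is illustrative only; the column names (name, lat, lon) are assumptions and the code is not part of the described database.

      # Illustrative only: column names (name, lat, lon) are assumptions and the
      # code is not part of the described database.
      import csv
      from xml.sax.saxutils import escape

      def csv_to_kml(csv_path, kml_path):
          """Write one KML placemark per spreadsheet row for viewing in Google Earth."""
          placemarks = []
          with open(csv_path, newline="") as fh:
              for row in csv.DictReader(fh):
                  placemarks.append(
                      "  <Placemark><name>{0}</name><Point>"
                      "<coordinates>{1},{2},0</coordinates></Point></Placemark>".format(
                          escape(row["name"]), row["lon"], row["lat"]))
          with open(kml_path, "w", encoding="utf-8") as out:
              out.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
                        + "\n".join(placemarks) + "\n</Document></kml>\n")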

  4. LMSD: LIPID MAPS structure database

    PubMed Central

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Brown, Alex; Dennis, Edward A.; Glass, Christopher K.; Merrill, Alfred H.; Murphy, Robert C.; Raetz, Christian R. H.; Russell, David W.; Subramaniam, Shankar

    2007-01-01

    The LIPID MAPS Structure Database (LMSD) is a relational database encompassing structures and annotations of biologically relevant lipids. Structures of lipids in the database come from four sources: (i) LIPID MAPS Consortium's core laboratories and partners; (ii) lipids identified by LIPID MAPS experiments; (iii) computationally generated structures for appropriate lipid classes; (iv) biologically relevant lipids manually curated from LIPID BANK, LIPIDAT and other public sources. All the lipid structures in LMSD are drawn in a consistent fashion. In addition to a classification-based retrieval of lipids, users can search LMSD using either text-based or structure-based search options. The text-based search implementation supports data retrieval by any combination of these data fields: LIPID MAPS ID, systematic or common name, mass, formula, category, main class, and subclass data fields. The structure-based search, in conjunction with optional data fields, provides the capability to perform a substructure search or exact match for the structure drawn by the user. Search results, in addition to structure and annotations, also include relevant links to external databases. The LMSD is publicly available at PMID:17098933

  5. A Vision System For A Mars Rover

    NASA Astrophysics Data System (ADS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1987-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.

  6. A vision system for a Mars rover

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1988-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
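
    The two records above describe transforming a range map into a plan view and deriving slope and roughness to build a traversability map for local path planning. The toy sketch below shows one way such a map can be derived from a plan-view elevation grid; the thresholds and grid spacing are illustrative, not values from the papers.

      # Toy sketch: slope and roughness from a plan-view elevation grid,
      # thresholded into a traversability map (thresholds are illustrative,
      # not values from the papers).
      import numpy as np
      from scipy.ndimage import uniform_filter

      def traversability(elev, cell=0.5, max_slope_deg=20.0, max_rough_m=0.10):
          """elev: 2-D elevation grid in metres; cell: grid spacing in metres."""
          gy, gx = np.gradient(elev, cell)                     # elevation gradients
          slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))  # local slope angle
          local_mean = uniform_filter(elev, size=3)            # 3x3 mean surface
          roughness = np.abs(elev - local_mean)                # deviation from it
          return (slope_deg <= max_slope_deg) & (roughness <= max_rough_m)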

  7. OriDB, the DNA replication origin database updated and extended.

    PubMed

    Siow, Cheuk C; Nieduszynska, Sian R; Müller, Carolin A; Nieduszynski, Conrad A

    2012-01-01

    OriDB (http://www.oridb.org/) is a database containing collated genome-wide mapping studies of confirmed and predicted replication origin sites. The original database collated and curated Saccharomyces cerevisiae origin mapping studies. Here, we report that the OriDB database and web site have been revamped to improve user accessibility to curated data sets, to greatly increase the number of curated origin mapping studies, and to include the collation of replication origin sites in the fission yeast Schizosaccharomyces pombe. The revised database structure underlies these improvements and will facilitate further expansion in the future. The updated OriDB for S. cerevisiae is available at http://cerevisiae.oridb.org/ and for S. pombe at http://pombe.oridb.org/.

  8. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    PubMed

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m.

  9. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database

    PubMed Central

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-01

    In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m. PMID:26828496
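
    The storage model described in the two records above keys reference images to a linear index along a road segment so that candidates near a rough position estimate can be retrieved quickly. The sketch below is a hedged illustration of that idea, with an assumed (chainage, image id) structure rather than the authors' actual schema.

      # Hedged sketch: reference images keyed by chainage (metres along the road
      # segment); the (chainage, image_id) structure is an assumption, not the
      # authors' schema.
      import bisect

      class RoadSegmentIndex:
          def __init__(self, entries):
              self._entries = sorted(entries)           # [(chainage_m, image_id), ...]
              self._keys = [c for c, _ in self._entries]

          def candidates(self, chainage_m, window_m=10.0):
              """Return image ids whose chainage lies within +/- window_m."""
              lo = bisect.bisect_left(self._keys, chainage_m - window_m)
              hi = bisect.bisect_right(self._keys, chainage_m + window_m)
              return [img for _, img in self._entries[lo:hi]]

      idx = RoadSegmentIndex([(0.0, "img_000"), (12.5, "img_001"), (24.9, "img_002")])
      print(idx.candidates(15.0))   # -> ['img_001', 'img_002']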

  10. An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system.

    PubMed

    AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide

    2015-11-19

    Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. This database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.
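
    The database summarizes binding affinities as position weight matrices (PWMs). As a reminder of how such a matrix is used, the sketch below scores a candidate DNA site against a toy PWM with a log-odds model; the matrix values and the uniform background are made up for illustration.

      # Illustrative only: toy PWM values and a uniform background; scores a
      # candidate site with a log-odds model.
      import math

      def pwm_score(pwm, site, background=0.25, pseudocount=1e-3):
          """pwm: one dict of base->probability per position; site: DNA string of equal length."""
          score = 0.0
          for probs, base in zip(pwm, site.upper()):
              p = probs.get(base, 0.0) + pseudocount
              score += math.log2(p / background)     # log-odds versus background
          return score

      example_pwm = [
          {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},
          {"A": 0.05, "C": 0.05, "G": 0.85, "T": 0.05},
          {"A": 0.10, "C": 0.10, "G": 0.10, "T": 0.70},
      ]
      print(round(pwm_score(example_pwm, "AGT"), 2))   # strong match to the toy motif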

  11. Assessment and application of national environmental databases and mapping tools at the local level to two community case studies.

    PubMed

    Hammond, Davyda; Conlon, Kathryn; Barzyk, Timothy; Chahine, Teresa; Zartarian, Valerie; Schultz, Brad

    2011-03-01

    Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental-pollution-related risks. Databases and mapping tools that supply community-level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community-level GIS mapping application. Queried data from the U.S. EPA's National-Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model-simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization. © 2010 Society for Risk Analysis.

  12. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    NASA Astrophysics Data System (ADS)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field as the conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application to enable its users to search boring logs rapidly and visualize them using an augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for other modules. A field survey was also carried out using more than 100,000 borehole records.
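
    The AR module described above must decide where on the camera image a borehole should be drawn. A hedged sketch of that placement logic (great-circle bearing relative to the device heading, mapped into the camera's field of view) is given below; all parameters are illustrative and unrelated to the actual application.

      # Hedged sketch of the placement geometry (not the application's code):
      # great-circle bearing to the borehole, mapped into the camera view given
      # the device heading and a field of view; all parameters are illustrative.
      import math

      def bearing_deg(lat1, lon1, lat2, lon2):
          """Initial great-circle bearing from point 1 to point 2, in degrees."""
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dlon = math.radians(lon2 - lon1)
          y = math.sin(dlon) * math.cos(p2)
          x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
          return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

      def screen_x(device, borehole, heading_deg, fov_deg=60.0, width_px=1024):
          """Pixel column at which to draw the borehole, or None if outside the view."""
          rel = (bearing_deg(*device, *borehole) - heading_deg + 540.0) % 360.0 - 180.0
          if abs(rel) > fov_deg / 2.0:
              return None
          return int((rel / fov_deg + 0.5) * width_px)

      print(screen_x((37.55, 127.00), (37.56, 127.01), heading_deg=30.0))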

  13. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which includes various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
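
    Because the glacier layers are served through standards-compliant WMS/WFS interfaces, outlines can be requested programmatically. The sketch below uses standard OGC WFS query parameters, but the endpoint URL and layer name are placeholders rather than the actual GLIMS service addresses.

      # Hedged sketch: the endpoint URL and layer name are placeholders, not the
      # actual GLIMS service addresses; the query parameters follow the OGC WFS
      # standard.
      from urllib.parse import urlencode
      from urllib.request import urlopen

      def get_glacier_outlines(bbox, endpoint="https://example.org/glims/wfs"):
          """bbox: (min_lon, min_lat, max_lon, max_lat); returns the GML response text."""
          params = urlencode({
              "service": "WFS",
              "version": "1.1.0",
              "request": "GetFeature",
              "typename": "glims:glacier_outlines",   # placeholder layer name
              "bbox": ",".join(str(v) for v in bbox),
          })
          with urlopen(endpoint + "?" + params, timeout=30) as resp:
              return resp.read().decode("utf-8")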

  14. Preliminary surficial geologic map of a Calico Mountains piedmont and part of Coyote Lake, Mojave desert, San Bernardino County, California

    USGS Publications Warehouse

    Dudash, Stephanie L.

    2006-01-01

    This 1:24,000 scale detailed surficial geologic map and digital database of a Calico Mountains piedmont and part of Coyote Lake in south-central California depicts surficial deposits and generalized bedrock units. The mapping is part of a USGS project to investigate the spatial distribution of deposits linked to changes in climate, to provide framework geology for land use management (http://deserts.wr.usgs.gov), to understand the Quaternary tectonic history of the Mojave Desert, and to provide additional information on the history of Lake Manix, of which Coyote Lake is a sub-basin. Mapping is displayed on parts of four USGS 7.5 minute series topographic maps. The map area lies in the central Mojave Desert of California, northeast of Barstow, Calif. and south of Fort Irwin, Calif. and covers 258 sq.km. (99.5 sq.mi.). Geologic deposits in the area consist of Paleozoic metamorphic rocks, Mesozoic plutonic rocks, Miocene volcanic rocks, Pliocene-Pleistocene basin fill, and Quaternary surficial deposits. McCulloh (1960, 1965) conducted bedrock mapping and a generalized version of his maps are compiled into this map. McCulloh's maps contain many bedrock structures within the Calico Mountains that are not shown on the present map. This study resulted in several new findings, including the discovery of previously unrecognized faults, one of which is the Tin Can Alley fault. The north-striking Tin Can Alley fault is part of the Paradise fault zone (Miller and others, 2005), a potentially important feature for studying neo-tectonic strain in the Mojave Desert. Additionally, many Anodonta shells were collected in Coyote Lake lacustrine sediments for radiocarbon dating. Preliminary results support some of Meek's (1999) conclusions on the timing of Mojave River inflow into the Coyote Basin. The database includes information on geologic deposits, samples, and geochronology. The database is distributed in three parts: spatial map-based data, documentation, and printable map graphics of the database. Spatial data are distributed as an ArcInfo personal geodatabase, or as tabular data in the form of Microsoft Access Database (MDB) or dBase Format (DBF) file formats. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, and Federal Geographic Data Committee (FGDC) metadata for the spatial map information. Map graphics files are distributed as Postscript and Adobe Acrobat Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  15. UET: a database of evolutionarily-predicted functional determinants of protein sequences that cluster as functional sites in protein structures.

    PubMed

    Lua, Rhonald C; Wilson, Stephen J; Konecki, Daniel M; Wilkins, Angela D; Venner, Eric; Morgan, Daniel H; Lichtarge, Olivier

    2016-01-04

    The structure and function of proteins underlie most aspects of biology and their mutational perturbations often cause disease. To identify the molecular determinants of function as well as targets for drugs, it is central to characterize the important residues and how they cluster to form functional sites. The Evolutionary Trace (ET) achieves this by ranking the functional and structural importance of the protein sequence positions. ET uses evolutionary distances to estimate functional distances and correlates genotype variations with those in the fitness phenotype. Thus, ET ranks are worse for sequence positions that vary among evolutionarily closer homologs but better for positions that vary mostly among distant homologs. This approach identifies functional determinants, predicts function, guides the mutational redesign of functional and allosteric specificity, and interprets the action of coding sequence variations in proteins, people and populations. Now, the UET database offers pre-computed ET analyses for the protein structure databank, and on-the-fly analysis of any protein sequence. A web interface retrieves ET rankings of sequence positions and maps results to a structure to identify functionally important regions. This UET database integrates several ways of viewing the results on the protein sequence or structure and can be found at http://mammoth.bcm.tmc.edu/uet/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases.

    PubMed

    Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning

    2007-10-18

    Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr.

  17. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    PubMed Central

    Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning

    2007-01-01

    Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR interface, documentation and code examples are available at . PMID:17945017
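
    Both records above advertise programmatic SOAP and REST access for identifier mapping. The sketch below only illustrates the general shape of such a REST lookup; the base URL, path and parameter names are placeholders and should be taken from the PICR documentation rather than from this example.

      # Placeholder sketch: the base URL, path and parameter names are invented;
      # consult the PICR documentation for the real REST interface.
      from urllib.parse import urlencode
      from urllib.request import urlopen

      def map_identifier(accession, target_dbs,
                         base="https://example.org/picr-like/rest/getMappings"):
          """Ask a PICR-style REST service for cross-references of one accession."""
          params = urlencode({"accession": accession,
                              "database": ",".join(target_dbs)})
          with urlopen(base + "?" + params, timeout=30) as resp:
              return resp.read().decode("utf-8")   # XML or JSON, depending on the service

      # Example call against the placeholder service:
      # print(map_identifier("P12345", ["SWISSPROT", "ENSEMBL"]))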

  18. Digital version of "Open-File Report 92-179: Geologic map of the Cow Cove Quadrangle, San Bernardino County, California"

    USGS Publications Warehouse

    Wilshire, Howard G.; Bedford, David R.; Coleman, Teresa

    2002-01-01

    3. Plottable map representations of the database at 1:24,000 scale in PostScript and Adobe PDF formats. The plottable files consist of a color geologic map derived from the spatial database, composited with a topographic base map in the form of the USGS Digital Raster Graphic for the map area. Color symbology from each of these datasets is maintained, which can cause plot file sizes to be large.

  19. Geologic and aeromagnetic maps of the Fossil Ridge area and vicinity, Gunnison County, Colorado

    USGS Publications Warehouse

    DeWitt, Ed; Zech, R.S.; Chase, C.G.; Zartman, R.E.; Kucks, R.P.; Bartelson, Bruce; Rosenlund, G.C.; Earley, Drummond

    2002-01-01

    This data set includes a GIS geologic map database of an Early Proterozoic metavolcanic and metasedimentary terrane extensively intruded by Early and Middle Proterozoic granitic plutons. Laramide to Tertiary deformation and intrusion of felsic plutons have created numerous small mineral deposits that are described in the tables and are shown on the figures in the accompanying text pamphlet. Also included in the pamphlet are numerous chemical analyses of igneous and meta-igneous bodies of all ages in tables and in summary geochemical diagrams. The text pamphlet also contains a detailed description of map units and discussions of the aeromagnetic survey, igneous and metamorphic rocks, and mineral deposits. The printed map sheet and browse graphic pdf file include the aeromagnetic map of the study area, as well as figures and photographs. Purpose: This GIS geologic map database is provided to facilitate the presentation and analysis of earth-science data for this region of Colorado. This digital map database may be displayed at any scale or projection. However, the geologic data in this coverage are not intended for use at a scale other than 1:30,000. Supplemental useful data accompanying the database are extensive geochemical and mineral deposits data, as well as an aeromagnetic map.

  20. A mapping review of the literature on UK-focused health and social care databases.

    PubMed

    Cooper, Chris; Rogers, Morwenna; Bethel, Alison; Briscoe, Simon; Lowe, Jenny

    2015-03-01

    Bibliographic databases are a day-to-day tool of the researcher: they offer the researcher easy and organised access to knowledge, but how much is actually known about the databases on offer? The focus of this paper is UK health and social care databases. These databases are often small, specialised by topic, and provide a complementary literature to the large, international databases. There is, however, good evidence that these databases are overlooked in systematic reviews, perhaps because little is known about what they can offer. The objective was to systematically locate and map published and unpublished literature on the key UK health and social care bibliographic databases, using systematic searching and mapping. Two hundred and forty-two items were identified which specifically related to 24 of the 34 databases under review. There is little published or unpublished literature specifically analysing the key UK health and social care databases. Since several UK databases have closed, others are at risk, and some are overlooked in reviews, better information is required to enhance our knowledge. Further research on UK health and social care databases is required. This paper suggests the need to develop the evidence base through a series of case studies on each of the databases. © 2014 The authors. Health Information and Libraries Journal © 2014 Health Libraries Journal.

  1. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    NASA Astrophysics Data System (ADS)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge of the spatial distribution of landslides, their processes and characteristics is important to evaluate the potential risk that can arise from mass movements in those areas. In the frame of two projects, about 400 landslides were mapped and detailed data sets were compiled between 2011 and 2014 in the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (the Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, like GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe et al. 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the entire (spatial) data content of the database in R. The inventory of the database includes, amongst others, information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, spatial objects are stored which represent the components of a landslide, in particular the scarps and the accumulation areas. In addition, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating slope gradients or aspects of landslide areas, for creating multiple overlapping sections for the comparison of slopes, and for computing distances to infrastructure or to the nearest receiving drainage. Further queries yield information on landslide magnitudes, distribution and clustering, as well as potential correlations with geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public or private use, because it is independent of any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S., 2011. PostGIS in Action. Manning Publications, Stamford, 492 pp.
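
    As an illustration of the kind of spatial query mentioned above (distance from each landslide to the nearest receiving drainage), the sketch below runs a PostGIS query from Python. The table and column names are assumptions; ST_Distance and the geography cast are standard PostGIS, and a real deployment would use a spatial index or the KNN operator instead of the brute-force cross join shown here.

      # Hedged sketch: table and column names are assumptions; ST_Distance and
      # the geography cast are standard PostGIS (geometries assumed in WGS84).
      # A real deployment would use a spatial index or the KNN operator (<->)
      # instead of this brute-force cross join.
      import psycopg2

      SQL = """
      SELECT l.landslide_id,
             MIN(ST_Distance(l.geom::geography, w.geom::geography)) AS dist_m
      FROM   landslides l
      CROSS  JOIN waterways w
      GROUP  BY l.landslide_id
      ORDER  BY dist_m;
      """

      def nearest_drainage_distances(dsn="dbname=landslides user=gis"):
          con = psycopg2.connect(dsn)
          try:
              with con.cursor() as cur:
                  cur.execute(SQL)
                  return cur.fetchall()
          finally:
              con.close()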

  2. Geologic map database of the El Mirage Lake area, San Bernardino and Los Angeles Counties, California

    USGS Publications Warehouse

    Miller, David M.; Bedford, David R.

    2000-01-01

    This geologic map database for the El Mirage Lake area describes geologic materials for the dry lake, parts of the adjacent Shadow Mountains and Adobe Mountain, and much of the piedmont extending south from the lake upward toward the San Gabriel Mountains. This area lies within the western Mojave Desert of San Bernardino and Los Angeles Counties, southeastern California. The area is traversed by a few paved highways that service the community of El Mirage, and by numerous dirt roads that lead to outlying properties. An off-highway vehicle area established by the Bureau of Land Management encompasses the dry lake and much of the land north and east of the lake. The physiography of the area consists of the dry lake, flanking mud and sand flats and alluvial piedmonts, and a few sharp craggy mountains. This digital geologic map database, intended for use at 1:24,000-scale, describes and portrays the rock units and surficial deposits of the El Mirage Lake area. The map database was prepared to aid in a water-resource assessment of the area by providing surface geologic information with which deeper groundwater-bearing units may be understood. The area mapped covers the Shadow Mountains SE and parts of the Shadow Mountains, Adobe Mountain, and El Mirage 7.5-minute quadrangles. The map includes detailed geology of surface and bedrock deposits, which represent a significant update from previous bedrock geologic maps by Dibblee (1960) and Troxel and Gunderson (1970), and the surficial geologic map of Ponti and Burke (1980); it incorporates a fringe of the detailed bedrock mapping in the Shadow Mountains by Martin (1992). The map data were assembled as a digital database using ARC/INFO to enable wider applications than traditional paper-product geologic maps and to provide for efficient meshing with other digital databases prepared by the U.S. Geological Survey's Southern California Areal Mapping Project.

  3. Revealing phenotype-associated functional differences by genome-wide scan of ancient haplotype blocks

    PubMed Central

    Onuki, Ritsuko; Yamaguchi, Rui; Shibuya, Tetsuo; Kanehisa, Minoru; Goto, Susumu

    2017-01-01

    Genome-wide scans for positive selection have become important for genomic medicine, and many studies aim to find genomic regions affected by positive selection that are associated with risk allele variations among populations. Most such studies are designed to detect recent positive selection. However, we hypothesize that ancient positive selection is also important for adaptation to pathogens, and has affected current immune-mediated common diseases. Based on this hypothesis, we developed a novel linkage disequilibrium-based pipeline, which aims to detect regions associated with ancient positive selection across populations from single nucleotide polymorphism (SNP) data. By applying this pipeline to the genotypes in the International HapMap project database, we show that genes in the detected regions are enriched in pathways related to the immune system and infectious diseases. The detected regions also contain SNPs reported to be associated with cancers and metabolic diseases, obesity-related traits, type 2 diabetes, and allergic sensitization. These SNPs were further mapped to biological pathways to determine the associations between phenotypes and molecular functions. Assessments of candidate regions to identify functions associated with variations in incidence rates of these diseases are needed in the future. PMID:28445522

  4. [Open access to academic scholarship as a public policy resource: a study of the Capes database on Brazilian theses and dissertations].

    PubMed

    da Silva Rosa, Teresa; Carneiro, Maria José

    2010-12-01

    Access to scientific knowledge is a valuable resource than can inform and validate positions taken in formulating public policy. But access to this knowledge can be challenging, given the diversity and breadth of available scholarship. Communication between the fields of science and of politics requires the dissemination of scholarship and access to it. We conducted a study using an open-access search tool in order to map existent knowledge on a specific topic: agricultural contributions to the preservation of biodiversity. The present article offers a critical view of access to the information available through the Capes database on Brazilian theses and dissertations.

  5. RatMap--rat genome tools and data.

    PubMed

    Petersen, Greta; Johnson, Per; Andersson, Lars; Klinga-Levan, Karin; Gómez-Fabre, Pedro M; Ståhl, Fredrik

    2005-01-01

    The rat genome database RatMap (http://ratmap.org or http://ratmap.gen.gu.se) has been one of the main resources for rat genome information since 1994. The database is maintained by CMB-Genetics at Goteborg University in Sweden and provides information on rat genes, polymorphic rat DNA-markers and rat quantitative trait loci (QTLs), all curated at RatMap. The database is under the supervision of the Rat Gene and Nomenclature Committee (RGNC); thus much attention is paid to rat gene nomenclature. RatMap presents information on rat idiograms, karyotypes and provides a unified presentation of the rat genome sequence and integrated rat linkage maps. A set of tools is also available to facilitate the identification and characterization of rat QTLs, as well as the estimation of exon/intron number and sizes in individual rat genes. Furthermore, comparative gene maps of rat in regard to mouse and human are provided.

  6. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.

    PubMed

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high-field MRI. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high-field MRI.
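
    For readers unfamiliar with SENSE, the role the estimated sensitivity maps play in reconstruction can be illustrated with a minimal pixel-wise unfolding sketch. This is not the rotating-SENSE algorithm itself: it assumes a single fold direction, a known reduction factor R, and pre-computed, noise-free sensitivities, and simply solves the per-pixel least-squares system in which R aliased pixels superimpose in each coil image. The function and argument names are hypothetical.

    ```python
    import numpy as np

    def sense_unfold(folded, sens, R=2):
        """Pixel-wise SENSE unfolding for a single fold direction.

        folded : (n_coils, ny/R, nx) aliased coil images
        sens   : (n_coils, ny,   nx) complex coil sensitivity maps
        Per pixel, solves the least-squares system  folded = S @ rho,
        where the R aliased pixel values superimpose in each coil.
        """
        n_coils, ny_r, nx = folded.shape
        ny = ny_r * R
        recon = np.zeros((ny, nx), dtype=complex)
        for y in range(ny_r):
            alias_rows = [y + k * ny_r for k in range(R)]   # rows folded together
            for x in range(nx):
                S = sens[:, alias_rows, x]                  # (n_coils, R)
                b = folded[:, y, x]                         # (n_coils,)
                rho, *_ = np.linalg.lstsq(S, b, rcond=None)
                recon[alias_rows, x] = rho
        return recon
    ```

    In this formulation, the better the sensitivity maps match the true receive profiles at the current coil angle, the better conditioned S is and the cleaner the unfolded image, which is why accurate angle-dependent sensitivity estimation matters for the rotating array.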

  8. MapApp: A Java(TM) Applet for Accessing Geographic Databases

    NASA Astrophysics Data System (ADS)

    Haxby, W.; Carbotte, S.; Ryan, W. B.; O'Hara, S.

    2001-12-01

    MapApp (http://coast.ldeo.columbia.edu/help/MapApp.html) is a prototype Java(TM) applet that is intended to give easy and versatile access to geographic data sets through a web browser. It was developed initially to interface with the RIDGE Multibeam Synthesis. Subsequently, interfaces with other geophysical databases were added. At present, multibeam bathymetry grids, underway geophysics along ship tracks, and the LDEO Borehole Research Group's ODP well logging database are accessible through MapApp. We plan to add an interface with the Ridge Petrology Database in the near future. The central component of MapApp is a world physiographic map. Users may navigate around the map (zoom/pan) without waiting for HTTP requests to a remote server to be processed. A focus request loads image tiles from the server to compose a new map at the current viewing resolution. Areas in which multibeam grids are available may be focused to a pixel resolution of about 200 m. These areas may be identified by toggling a mask. Databases may be accessed through menus, and selected data objects may be loaded into MapApp by selecting items from tables. Once loaded, a bathymetry grid may be contoured or used to create bathymetric profiles; ship tracks and ODP sites may be overlain on the map and their geophysical data plotted in X-Y graphs. The advantage of applets over traditional web pages is that they permit dynamic interaction with data sets, while limiting time consuming interaction with a remote server. Users may customize the graphics display by modifying the scale, or the symbol or line characteristics of rendered data, contour interval, etc. The ease with which users can select areas, view the physiography of areas, and preview data sets and evaluate them for quality and applicability, makes MapApp a valuable tool for education and research.
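
    The applet's tiling scheme is not described in detail here; the following sketch only illustrates the general idea behind a focus request, i.e. selecting which pre-rendered image tiles cover the current viewport. The equirectangular layout and the deg_per_tile parameter are assumptions for illustration, not MapApp's actual design.

    ```python
    import math

    def tiles_for_view(lon_min, lon_max, lat_min, lat_max, deg_per_tile=10.0):
        """Return (col, row) indices of the image tiles covering a viewport.

        Assumes a simple equirectangular tiling of the world map into
        fixed-size tiles of deg_per_tile degrees; the real scheme may
        differ -- this only illustrates the tile-selection idea.
        """
        col0 = math.floor((lon_min + 180.0) / deg_per_tile)
        col1 = math.floor((lon_max + 180.0) / deg_per_tile)
        row0 = math.floor((90.0 - lat_max) / deg_per_tile)   # rows count down from the north pole
        row1 = math.floor((90.0 - lat_min) / deg_per_tile)
        return [(c, r) for r in range(row0, row1 + 1)
                       for c in range(col0, col1 + 1)]

    # Viewport over the East Pacific Rise near 9-10N
    print(tiles_for_view(-105, -103, 9, 10))   # -> [(7, 8)]
    ```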

  9. Coastal resource and sensitivity mapping of Vietnam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odin, L.M.

    1997-08-01

    This paper describes a project to establish a relationship between environmental sensitivity (primarily to oil pollution) and response planning and prevention priorities for Vietnamese coastal regions. An inventory of coastal environmental sensitivity and the creation of index mapping was performed. Satellite and geographical information system data were integrated and used for database creation. The database was used to create a coastal resource map, coastal sensitivity map, and a field inventory base map. The final coastal environment sensitivity classification showed that almost 40 percent of the 7448 km of mapped shoreline has a high to medium high sensitivity to oil pollution.

  10. Geologic map of the eastern part of the Challis National Forest and vicinity, Idaho

    USGS Publications Warehouse

    Wilson, A.B.; Skipp, B.A.

    1994-01-01

    The paper version of the Geologic Map of the eastern part of the Challis National Forest and vicinity, Idaho was compiled by Anna Wilson and Betty Skipp in 1994. The geology was compiled on a 1:250,000 scale topographic base map. TechniGraphic System, Inc., of Fort Collins, Colorado, digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a GIS database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.

  11. 'The surface management system' (SuMS) database: a surface-based database to aid cortical surface reconstruction, visualization and analysis

    NASA Technical Reports Server (NTRS)

    Dickson, J.; Drury, H.; Van Essen, D. C.

    2001-01-01

    Surface reconstructions of the cerebral cortex are increasingly widely used in the analysis and visualization of cortical structure, function and connectivity. From a neuroinformatics perspective, dealing with surface-related data poses a number of challenges. These include the multiplicity of configurations in which surfaces are routinely viewed (e.g. inflated maps, spheres and flat maps), plus the diversity of experimental data that can be represented on any given surface. To address these challenges, we have developed a surface management system (SuMS) that allows automated storage and retrieval of complex surface-related datasets. SuMS provides a systematic framework for the classification, storage and retrieval of many types of surface-related data and associated volume data. Within this classification framework, it serves as a version-control system capable of handling large numbers of surface and volume datasets. With built-in database management system support, SuMS provides rapid search and retrieval capabilities across all the datasets, while also incorporating multiple security levels to regulate access. SuMS is implemented in Java and can be accessed via a Web interface (WebSuMS) or using downloaded client software. Thus, SuMS is well positioned to act as a multiplatform, multi-user 'surface request broker' for the neuroscience community.

  12. Candidate gene database and transcript map for peach, a model species for fruit trees.

    PubMed

    Horn, Renate; Lecouls, Anne-Claire; Callahan, Ann; Dandekar, Abhaya; Garay, Lilibeth; McCord, Per; Howad, Werner; Chan, Helen; Verde, Ignazio; Main, Doreen; Jung, Sook; Georgi, Laura; Forrest, Sam; Mook, Jennifer; Zhebentyayeva, Tatyana; Yu, Yeisoo; Kim, Hye Ran; Jesudurai, Christopher; Sosinski, Bryon; Arús, Pere; Baird, Vance; Parfitt, Dan; Reighard, Gregory; Scorza, Ralph; Tomkins, Jeffrey; Wing, Rod; Abbott, Albert Glenn

    2005-05-01

    Peach (Prunus persica) is a model species for the Rosaceae, which includes a number of economically important fruit tree species. To develop an extensive Prunus expressed sequence tag (EST) database for identifying and cloning the genes important to fruit and tree development, we generated 9,984 high-quality ESTs from a peach cDNA library of developing fruit mesocarp. After assembly and annotation, a putative peach unigene set consisting of 3,842 ESTs was defined. Gene ontology (GO) classification was assigned based on the annotation of the single "best hit" match against the Swiss-Prot database. No significant homology could be found in the GenBank nr databases for 24.3% of the sequences. Using core markers from the general Prunus genetic map, we anchored bacterial artificial chromosome (BAC) clones on the genetic map, thereby providing a framework for the construction of a physical and transcript map. A transcript map was developed by hybridizing 1,236 ESTs from the putative peach unigene set and an additional 68 peach cDNA clones against the peach BAC library. Hybridizing ESTs to genetically anchored BACs immediately localized 11.2% of the ESTs on the genetic map. ESTs showed a clustering of expressed genes in defined regions of the linkage groups. [The data were built into a regularly updated Genome Database for Rosaceae (GDR), available at http://www.genome.clemson.edu/gdr/.]

  13. Basic and advanced numerical performances relate to mathematical expertise but are fully mediated by visuospatial skills.

    PubMed

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-09-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study, mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate compared with nonmathematicians when mapping positive, but not negative numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between the performance in the number line task and the group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. User identified positive outcome expectancies of electronic cigarette use: A concept mapping study.

    PubMed

    Soule, Eric K; Maloney, Sarah F; Guy, Mignonne C; Eissenberg, Thomas; Fagan, Pebbles

    2017-05-01

    Electronic cigarette (ECIG) use is growing in popularity, but little is known about the perceived positive outcomes of ECIG use. This study used concept mapping (CM) to examine positive ECIG outcome expectancies. Sixty-three past 30-day ECIG users (38.1% female) between the ages of 18 and 64 (M = 37.8, SD = 13.3) completed a CM module. In an online program, participants provided statements that completed a prompt: "A specific positive, enjoyable, or exciting effect (i.e., physical or psychological) that I have experienced WHILE USING or IMMEDIATELY AFTER USING an electronic cigarette/electronic vaping device is. . . ." Participants (n = 35) sorted 123 statements into "piles" of similar content and rated (n = 43) each statement on a 7-point scale (1 = Definitely NOT a positive effect to 7 = Definitely a positive effect). A cluster map was created using data from the sorting task, and analysis indicated a 7 cluster model of positive ECIG use outcome expectancies: Therapeutic/Affect Regulation, High/Euphoria, Sensation Enjoyment, Perceived Health Effects, Benefits of Decreased Cigarette Use, Convenience, and Social Impacts. The Perceived Health Effects cluster was rated highest, although all mean ratings were greater than 4.69. Mean cluster ratings were compared, and females, younger adults, past 30-day cigarette smokers, users of more "advanced" ECIG devices, and nonlifetime (less than 100 lifetime cigarettes) participants rated certain clusters higher than comparison groups (ps < 0.05). ECIG users associate positive outcomes with ECIG use. ECIG outcome expectancies may affect product appeal and tobacco use behaviors and should be examined further to inform regulatory policies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Spatial digital database for the geologic map of the east part of the Pullman 1° x 2° quadrangle, Idaho

    USGS Publications Warehouse

    Rember, William C.; Bennett, Earl H.

    2001-01-01

    The paper geologic map of the east part of the Pullman 1° x 2° quadrangle, Idaho (Rember and Bennett, 1979) was scanned and initially attributed by Optronics Specialty Co., Inc. (Northridge, CA) and remitted to the U.S. Geological Survey for further attribution and publication of the geospatial digital files. The resulting digital geologic map GIS can be queried in many ways to produce a variety of geologic maps. This digital geospatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information in a geographic information system (GIS) for use in spatial analysis. Digital base map data files (topography, roads, towns, rivers and lakes, and others) are not included: they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:250,000 (for example, 1:100,000 or 1:24,000). The digital geologic map graphics and plot files (pull250k.gra/.hp/.eps) that are provided in the digital package are representations of the digital database.

  16. The development and mapping of functional markers in Fragaria and their transferability and potential for mapping in other genera.

    PubMed

    Sargent, D J; Rys, A; Nier, S; Simpson, D W; Tobutt, K R

    2007-01-01

    We have developed 46 primer pairs from exon sequences flanking polymorphic introns of 23 Fragaria gene sequences and one Malus sequence deposited in the EMBL database. Sequencing of a set of the PCR products amplified with the novel primer pairs in diploid Fragaria showed the products to be homologous to the sequences from which the primers were originally designed. By scoring the segregation of the 24 genes in two diploid Fragaria progenies FV x FN (F. vesca x F. nubicola F(2)) and 815 x 903BC (F. vesca x F. viridis BC(1)) 29 genetic loci at discrete positions on the seven linkage groups previously characterised could be mapped, bringing to 35 the total number of known function genes mapped in Fragaria. Twenty primer pairs, representing 14 genes, amplified a product of the expected size in both Malus and Prunus. To demonstrate the applicability of these gene-specific loci to comparative mapping in Rosaceae, five markers that displayed clear polymorphism between the parents of a Malus and a Prunus mapping population were selected. The markers were then scored and mapped in at least one of the two additional progenies.

  17. Robust detection of heart beats in multimodal records using slope- and peak-sensitive band-pass filters.

    PubMed

    Pangerc, Urška; Jager, Franc

    2015-08-01

    In this work, we present the development, architecture and evaluation of a new and robust heart beat detector in multimodal records. The detector uses electrocardiogram (ECG) signals and/or pulsatile (P) signals, such as blood pressure, arterial blood pressure and pulmonary artery pressure, if present. The base approach behind the architecture of the detector is collecting signal energy (differentiating and low-pass filtering, squaring, integrating). To calculate the detection and noise functions, simple and fast slope- and peak-sensitive band-pass digital filters were designed. By using morphological smoothing, the detection functions were further improved and noise intervals were estimated. The detector looks for possible pacemaker heart rate patterns and repairs the ECG signals and detection functions. Heart beats are detected in each of the ECG and P signals in two steps: a repetitive learning phase and a follow-up detecting phase. The detected heart beat positions from the ECG signals are merged into a single stream of detected ECG heart beat positions. The merged ECG heart beat positions and detected heart beat positions from the P signals are verified for their regularity regarding the expected heart rate. The detected heart beat positions of a P signal with the best match to the merged ECG heart beat positions are selected for mapping into the noise and no-signal intervals of the record. The overall evaluation scores in terms of average sensitivity and positive predictive values obtained on databases that are freely available on the Physionet website were as follows: the MIT-BIH Arrhythmia database (99.91%), the MGH/MF Waveform database (95.14%), the augmented training set of the follow-up phase of the PhysioNet/Computing in Cardiology Challenge 2014 (97.67%), and the Challenge test set (93.64%).
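
    The published detector's slope- and peak-sensitive band-pass filters, learning phase and multimodal merging are not reproduced here, but the underlying signal-energy idea (differentiate, square, moving-window integrate, threshold) can be sketched as follows. The threshold, window length and refractory period are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def energy_beat_detector(ecg, fs, win_ms=150, refractory_ms=250):
        """Crude QRS detector based on collecting signal energy.

        Differentiates, squares and moving-window integrates the ECG, then
        picks peaks above a simple global threshold with a refractory period.
        This is only the generic energy-collection idea, not the published
        slope-/peak-sensitive filter bank.
        """
        diff = np.diff(ecg, prepend=ecg[0])          # emphasise steep QRS slopes
        energy = diff ** 2                           # rectify / square
        win = max(1, int(fs * win_ms / 1000))
        detfun = np.convolve(energy, np.ones(win) / win, mode="same")

        threshold = 0.3 * detfun.max()               # illustrative fixed threshold
        refractory = int(fs * refractory_ms / 1000)
        beats, last = [], -refractory
        for i in range(1, len(detfun) - 1):
            is_peak = detfun[i] >= detfun[i - 1] and detfun[i] > detfun[i + 1]
            if is_peak and detfun[i] > threshold and i - last >= refractory:
                beats.append(i)
                last = i
        return np.array(beats)
    ```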

  18. Collaborative and Multilingual Approach to Learn Database Topics Using Concept Maps

    PubMed Central

    Calvo, Iñaki

    2014-01-01

    The authors report on a study using the concept mapping technique in computer engineering education for learning theoretical introductory database topics. In addition, the learning of multilingual technical terminology by means of the collaborative drawing of a concept map is also pursued in this experiment. The main characteristics of a study carried out in the database subject at the University of the Basque Country during the 2011/2012 academic year are described. This study contributes to the field of concept mapping as these kinds of cognitive tools have proved to be valid to support learning in computer engineering education. It contributes to the field of computer engineering education, providing a technique that can be incorporated with several educational purposes within the discipline. Results reveal the potential that a collaborative concept map editor offers to fulfil the above-mentioned objectives. PMID:25538957

  19. PGMapper: a web-based tool linking phenotype to genes.

    PubMed

    Xiong, Qing; Qiu, Yuhui; Gu, Weikuan

    2008-04-01

    With the availability of whole genome sequence in many species, linkage analysis, positional cloning and microarray are gradually becoming powerful tools for investigating the links between phenotype and genotype or genes. However, in these methods, causative genes underlying a quantitative trait locus, or a disease, are usually located within a large genomic region or a large set of genes. Examining the function of every gene is very time consuming and needs to retrieve and integrate the information from multiple databases or genome resources. PGMapper is a software tool for automatically matching phenotype to genes from a defined genome region or a group of given genes by combining the mapping information from the Ensembl database and gene function information from the OMIM and PubMed databases. PGMapper is currently available for candidate gene search of human, mouse, rat, zebrafish and 12 other species. Available online at http://www.genediscovery.org/pgmapper/index.jsp.

  20. Digital geologic map of the Coeur d'Alene 1:100,000 quadrangle, Idaho and Montana

    USGS Publications Warehouse

    digital compilation by Munts, Steven R.

    2000-01-01

    Between 1961 and 1969, Alan Griggs and others conducted fieldwork to prepare a geologic map of the Spokane 1:250,000 map (Griggs, 1973). Their field observations were posted on paper copies of 15-minute quadrangle maps. In 1999, the USGS contracted with the Idaho Geological Survey to prepare a digital version of the Coeur d’Alene 1:100,000 quadrangle. To facilitate this work, the USGS obtained the field maps prepared by Griggs and others from the USGS Field Records Library in Denver, Colorado. The Idaho Geological Survey (IGS) digitized these maps and used them in their mapping program. The mapping focused on field checks to resolve problems in poorly known areas and in areas of disagreement between adjoining maps. The IGS is currently in the process of preparing a final digital spatial database for the Coeur d’Alene 1:100,000 quadrangle. However, there was an immediate need for a digital version of the geologic map of the Coeur d’Alene 1:100,000 quadrangle and the data from the field sheets along with several other sources were assembled to produce this interim product. This interim product is the digital geologic map of the Coeur d’Alene 1:100,000 quadrangle, Idaho and Montana. It was compiled from the preliminary digital files prepared by the Idaho Geological Survey, and supplemented by data from Griggs (1973) and from digital databases by Bookstrom and others (1999) and Derkey and others (1996). The resulting digital geologic map (GIS) database can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included: they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000). The digital geologic map graphics (of00-135_map.pdf) that are provided are representations of the digital database. The map area is located in north Idaho. This open-file report describes the geologic map units, the methods used to convert the geologic map data into a digital format, the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet.

  1. Researchermap: a tool for visualizing author locations using Google maps.

    PubMed

    Rastegar-Mojarad, Majid; Bales, Michael E; Yu, Hong

    2013-01-01

    We hereby present ResearcherMap, a tool to visualize locations of authors of scholarly papers. In response to a query, the system returns a map of author locations. To develop the system, we first populated a database of author locations, geocoding institution locations for all available institutional affiliation data in our database. The database includes all authors of Medline papers from 1990 to 2012. We conducted a formative heuristic usability evaluation of the system and measured the system's accuracy and performance. The accuracy of identifying the correct address is 97.5% in our system.

  2. Bedrock geologic map of the Worcester South quadrangle, Worcester County, Massachusetts

    USGS Publications Warehouse

    Walsh, Gregory J.; Merschat, Arthur J.

    2015-09-29

    The bedrock geology was mapped to study the tectonic history of the area and to provide a framework for ongoing hydrogeologic characterization of the fractured bedrock of Massachusetts. This report presents mapping by Gregory J. Walsh and Arthur J. Merschat from 2008 to 2010. The report consists of a map and GIS database, both of which are available for download at http://dx.doi.org/10.3133/sim3345. The database includes contacts of bedrock geologic units, faults, outcrop locations, structural information, and photographs.

  3. Mapping as a Spatial Data Source

    NASA Astrophysics Data System (ADS)

    Hudecová, Ľubica

    2013-03-01

    The basic database for a geographic information system (BD GIS) forms the core of a national spatial data infrastructure. Nowadays decisions are being made about the potential data sources for additional data updates and refinement of the BD GIS. Will the data from departmental or other information system administrators serve for this purpose? This paper gives an answer as to whether it is advisable to use "geodetic mapping" (the results realized in the process of land consolidation) or "cadastral mapping" (the results realized in the process of the renewal of cadastral documentation by new mapping) for additional data updates. In our analysis we focus on the quality parameters at the individual data element level, namely the positional accuracy, attribute accuracy, logical consistency, and data resolution. The results of the analysis are compared with the contents of the Object Class Catalog of BD GIS (OCC), which describes the group of objects managed by BD GIS and defines the data collection methods, types of geometry and its properties.

  4. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Gaylord, A. G.; Tweedie, C. E.

    2013-12-01

    In 2013, the Barrow Area Information Database (BAID, www.baid.utep.edu) project resumed field operations in Barrow, AK. The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 11,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, and save or print maps and query results. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL) where non-proprietary BAID data can be freely downloaded. Highlights for the 2013 season include the addition of more than 2000 additional research sites, providing differential global positioning system (dGPS) support to visiting scientists, surveying over 80 miles of coastline to document rates of erosion, training of local GIS personnel, deployment of a wireless sensor network, and substantial upgrades to the BAID website and web mapping applications.

  5. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data.

    PubMed

    Nettling, Martin; Thieme, Nils; Both, Andreas; Grosse, Ivo

    2014-02-04

    New technologies for analyzing biological samples, such as next-generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools for mapping sequence reads, calculating transcription factor binding probabilities, estimating regions enriched for epigenetic modifications, or determining single nucleotide polymorphisms increase this amount of position-specific DNA-related data even further. Hence, querying the data becomes challenging and expensive and is often implemented using specialised hardware. In addition, retrieving specific data as fast as possible becomes increasingly important in many fields of science. The general problem of handling big data sets has been addressed by developing specialized databases like HBase, HyperTable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource-saving requests, and (iii) running on single standard computer hardware. Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it focuses on optimizing single lookups and range requests, which are needed constantly for computations in bioinformatics. To validate the power of DRUMS, we compared it to the widely used MySQL database on two biological data sets, using standard desktop hardware as the test environment. DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10,000, and it can work with significantly larger data sets. Our work focuses on mid-sized data sets of up to several billion records without requiring cluster technology. Storing position-specific data is a general problem, and the concept presented here is a generalized approach that can easily be applied to other fields of bioinformatics.
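
    DRUMS itself is a disk-backed store, so the following is only a toy in-memory illustration of the access pattern it optimizes: records keyed by (chromosome, position) kept in sorted order, so that a range request reduces to two binary searches and a slice. The class and field names are hypothetical, not part of DRUMS.

    ```python
    import bisect
    from collections import namedtuple

    Record = namedtuple("Record", "chrom pos value")

    class PositionStore:
        """Toy position-keyed store supporting fast range requests.

        Records are kept sorted by (chrom, pos) so that a range request is
        two binary searches plus a slice -- a stand-in for the disk-backed
        buckets a system like DRUMS uses, not the DRUMS implementation itself.
        """
        def __init__(self, records):
            self._records = sorted(records, key=lambda r: (r.chrom, r.pos))
            self._keys = [(r.chrom, r.pos) for r in self._records]

        def range(self, chrom, start, end):
            lo = bisect.bisect_left(self._keys, (chrom, start))
            hi = bisect.bisect_right(self._keys, (chrom, end))
            return self._records[lo:hi]

    store = PositionStore([Record("chr1", p, v) for p, v in
                           [(100, 0.9), (250, 0.1), (300, 0.7), (1200, 0.4)]])
    print(store.range("chr1", 200, 400))   # -> records at positions 250 and 300
    ```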

  6. The status of soil mapping for the Idaho National Engineering Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, G.L.; Lee, R.D.; Jeppesen, D.J.

    This report discusses the production of a revised version of the general soil map of the 2304-km² (890-mi²) Idaho National Engineering Laboratory (INEL) site in southeastern Idaho and the production of a geographic information system (GIS) soil map and supporting database. The revised general soil map replaces an INEL soil map produced in 1978 and incorporates the most current information on INEL soils. The general soil map delineates large soil associations based on Natural Resources Conservation Service [formerly the Soil Conservation Service (SCS)] principles of soil mapping. The GIS map incorporates detailed information that could not be presented on the general soil map and is linked to a database that contains the soil map unit descriptions, surficial geology codes, and other pertinent information.

  7. Prioritizing PubMed articles for the Comparative Toxicogenomic Database utilizing semantic information

    PubMed Central

    Wilbur, W. John

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical–gene interactions, chemical–disease relationships and gene–disease relationships. Finding articles containing this information is the first and an important step to assist manual curation efficiency. However, the complex nature of named entities and their relationships make it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein–protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team. PMID:23160415

  8. Prioritizing PubMed articles for the Comparative Toxicogenomic Database utilizing semantic information.

    PubMed

    Kim, Sun; Kim, Won; Wei, Chih-Hsuan; Lu, Zhiyong; Wilbur, W John

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical-gene interactions, chemical-disease relationships and gene-disease relationships. Finding articles containing this information is the first and an important step to assist manual curation efficiency. However, the complex nature of named entities and their relationships make it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein-protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team.
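
    The reported 0.8030 figure is a mean average precision over ranked lists of candidate articles. As a reminder of how that metric is computed (this is the standard definition, not code from the authors' system), a minimal sketch:

    ```python
    def average_precision(ranked_relevance):
        """Average precision for one query: ranked_relevance is a list of
        0/1 flags in ranked order (1 = curatable/relevant article)."""
        hits, precisions = 0, []
        for rank, rel in enumerate(ranked_relevance, start=1):
            if rel:
                hits += 1
                precisions.append(hits / rank)   # precision at each relevant rank
        return sum(precisions) / hits if hits else 0.0

    def mean_average_precision(runs):
        """MAP over several ranked lists (e.g., one per journal or test split)."""
        return sum(average_precision(r) for r in runs) / len(runs)

    # Two toy ranked lists of triage decisions
    print(mean_average_precision([[1, 0, 1, 1, 0], [0, 1, 1, 0, 0]]))   # ~0.694
    ```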

  9. RatMap—rat genome tools and data

    PubMed Central

    Petersen, Greta; Johnson, Per; Andersson, Lars; Klinga-Levan, Karin; Gómez-Fabre, Pedro M.; Ståhl, Fredrik

    2005-01-01

    The rat genome database RatMap (http://ratmap.org or http://ratmap.gen.gu.se) has been one of the main resources for rat genome information since 1994. The database is maintained by CMB–Genetics at Göteborg University in Sweden and provides information on rat genes, polymorphic rat DNA-markers and rat quantitative trait loci (QTLs), all curated at RatMap. The database is under the supervision of the Rat Gene and Nomenclature Committee (RGNC); thus much attention is paid to rat gene nomenclature. RatMap presents information on rat idiograms, karyotypes and provides a unified presentation of the rat genome sequence and integrated rat linkage maps. A set of tools is also available to facilitate the identification and characterization of rat QTLs, as well as the estimation of exon/intron number and sizes in individual rat genes. Furthermore, comparative gene maps of rat in regard to mouse and human are provided. PMID:15608244

  10. Preliminary Geologic Map of the Buxton 7.5' Quadrangle, Washington County, Oregon

    USGS Publications Warehouse

    Dinterman, Philip A.; Duvall, Alison R.

    2009-01-01

    This map, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits of the Buxton 7.5-minute quadrangle. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:24,000 or smaller. This plot file and accompanying database depict the distribution of geologic materials and structures at a regional (1:24,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains new information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  11. TFBSshape: a motif database for DNA shape features of transcription factor binding sites.

    PubMed

    Yang, Lin; Zhou, Tianyin; Dror, Iris; Mathelier, Anthony; Wasserman, Wyeth W; Gordân, Raluca; Rohs, Remo

    2014-01-01

    Transcription factor binding sites (TFBSs) are most commonly characterized by the nucleotide preferences at each position of the DNA target. Whereas these sequence motifs are quite accurate descriptions of DNA binding specificities of transcription factors (TFs), proteins recognize DNA as a three-dimensional object. DNA structural features refine the description of TF binding specificities and provide mechanistic insights into protein-DNA recognition. Existing motif databases contain extensive nucleotide sequences identified in binding experiments based on their selection by a TF. To utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone.

  12. TFBSshape: a motif database for DNA shape features of transcription factor binding sites

    PubMed Central

    Yang, Lin; Zhou, Tianyin; Dror, Iris; Mathelier, Anthony; Wasserman, Wyeth W.; Gordân, Raluca; Rohs, Remo

    2014-01-01

    Transcription factor binding sites (TFBSs) are most commonly characterized by the nucleotide preferences at each position of the DNA target. Whereas these sequence motifs are quite accurate descriptions of DNA binding specificities of transcription factors (TFs), proteins recognize DNA as a three-dimensional object. DNA structural features refine the description of TF binding specificities and provide mechanistic insights into protein–DNA recognition. Existing motif databases contain extensive nucleotide sequences identified in binding experiments based on their selection by a TF. To utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone. PMID:24214955

  13. A geographic information system applied to a malaria field study in western Kenya.

    PubMed

    Hightower, A W; Ombok, M; Otieno, R; Odhiambo, R; Oloo, A J; Lal, A A; Nahlen, B L; Hawley, W A

    1998-03-01

    This paper describes use of the global positioning system (GPS) in differential mode (DGPS) to obtain highly accurate longitudes, latitudes, and altitudes of 1,169 houses, 15 schools, 40 churches, four health care centers, 48 major mosquito breeding sites, 10 borehole wells, seven shopping areas, major roads, streams, the shore of Lake Victoria, and other geographic features of interest associated with a longitudinal study of malaria in 15 villages in western Kenya. The area mapped encompassed approximately 70 km2 and included 42.0 km of roads, 54.3 km of streams, and 15.0 km of lake shore. Location data were entered into a geographic information system for map production and linkage with various databases for spatial analyses. Spatial analyses using parasitologic and entomologic data are presented as examples. Background information on DGPS is presented along with estimates of effort and expense to produce the map information.

  14. Geologic and structure map of the Choteau 1 degree by 2 degrees Quadrangle, western Montana

    USGS Publications Warehouse

    Mudge, Melville R.; Earhart, Robert L.; Whipple, James W.; Harrison, Jack E.

    1982-01-01

    The geologic and structure map of the Choteau 1 x 2 degree quadrangle (Mudge and others, 1982) was originally converted to a digital format by Jeff Silkwood (U.S. Forest Service) and completed by the U.S. Geological Survey staff and contractor at the Spokane Field Office (WA) in 2000 for input into a geographic information system (GIS). The resulting digital geologic map (GIS) database can be queried in many ways to produce a variety of geologic maps. Digital base map data files (topography, roads, towns, rivers and lakes, etc.) are not included: they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:250,000 (e.g., 1:100,000 or 1:24,000). The digital geologic map graphics and plot files (chot250k.gra/.hp/.eps and chot-map.pdf) that are provided in the digital package are representations of the digital database. They are not designed to be cartographic products.

  15. Large-scale mass spectrometric detection of variant peptides resulting from non-synonymous nucleotide differences

    PubMed Central

    Sheynkman, Gloria M.; Shortreed, Michael R.; Frey, Brian L.; Scalf, Mark; Smith, Lloyd M.

    2013-01-01

    Each individual carries thousands of non-synonymous single nucleotide variants (nsSNVs) in their genome, each corresponding to a single amino acid polymorphism (SAP) in the encoded proteins. It is important to be able to directly detect and quantify these variations at the protein level in order to study post-transcriptional regulation, differential allelic expression, and other important biological processes. However, such variant peptides are not generally detected in standard proteomic analyses, due to their absence from the generic databases that are employed for mass spectrometry searching. Here, we extend previous work that demonstrated the use of customized SAP databases constructed from sample-matched RNA-Seq data. We collected deep coverage RNA-Seq data from the Jurkat cell line, compiled the set of nsSNVs that are expressed, used this information to construct a customized SAP database, and searched it against deep coverage shotgun MS data obtained from the same sample. This approach enabled detection of 421 SAP peptides mapping to 395 nsSNVs. We compared these peptides to peptides identified from a large generic search database containing all known nsSNVs (dbSNP) and found that more than 70% of the SAP peptides from this dbSNP-derived search were not supported by the RNA-Seq data, and thus are likely false positives. Next, we increased the SAP coverage from the RNA-Seq derived database by utilizing multiple protease digestions, thereby increasing variant detection to 695 SAP peptides mapping to 504 nsSNV sites. These detected SAP peptides corresponded to moderate to high abundance transcripts (30+ transcripts per million, TPM). The SAP peptides included 192 allelic pairs; the relative expression levels of the two alleles were evaluated for 51 of those pairs, and found to be comparable in all cases. PMID:24175627
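
    The customized search database essentially contains the peptides that cover each expressed nsSNV. A minimal sketch of that idea follows, assuming fully tryptic digestion and a single substitution per protein; the cleavage rule, helper names and example sequence are illustrative, not the authors' pipeline.

    ```python
    def tryptic_peptides(protein):
        """Fully tryptic peptides: cleave after K or R, but not before P."""
        peptides, start = [], 0
        for i, aa in enumerate(protein):
            if aa in "KR" and protein[i + 1:i + 2] != "P":
                peptides.append(protein[start:i + 1])
                start = i + 1
        if start < len(protein):
            peptides.append(protein[start:])
        return peptides

    def variant_peptides(protein, pos, alt):
        """Peptides from a protein carrying a single amino acid polymorphism.

        pos is the 1-based position of the substituted residue, alt the
        variant amino acid; returns only the peptides that cover the SAP,
        i.e. the entries one would append to a customized search database.
        """
        mutated = protein[:pos - 1] + alt + protein[pos:]
        covered, offset = [], 0
        for pep in tryptic_peptides(mutated):
            if offset < pos <= offset + len(pep):
                covered.append(pep)
            offset += len(pep)
        return covered

    # Toy protein with an A->V substitution at position 6
    print(variant_peptides("MKTAYAKLLPR", pos=6, alt="V"))   # -> ['TAYVK']
    ```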

  16. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological database in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
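
    At its core, the first step amounts to a correspondence (transition) table from original spatial units to final units, followed by aggregation and a shared-covariate check. The toy sketch below illustrates only that idea, with hypothetical unit identifiers and birth counts; it is not the authors' implementation.

    ```python
    from collections import defaultdict

    # Hypothetical transition tables: original spatial unit -> final spatial unit.
    medical_to_final = {"hosp_A": "zone_1", "hosp_B": "zone_1", "hosp_C": "zone_2"}
    eco_to_final = {"iris_10": "zone_1", "iris_11": "zone_2", "iris_12": "zone_2"}

    medical_births = {"hosp_A": 120, "hosp_B": 80, "hosp_C": 95}
    eco_births = {"iris_10": 198, "iris_11": 40, "iris_12": 52}

    def aggregate(values, mapping):
        """Sum a per-unit variable onto the final spatial units."""
        out = defaultdict(int)
        for unit, v in values.items():
            out[mapping[unit]] += v
        return dict(out)

    med = aggregate(medical_births, medical_to_final)
    eco = aggregate(eco_births, eco_to_final)

    # Validation step: relative difference of the shared covariate per final unit
    for zone in sorted(med):
        rel = abs(med[zone] - eco[zone]) / eco[zone] * 100
        print(f"{zone}: medical={med[zone]} ecological={eco[zone]} diff={rel:.1f}%")
    ```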

  17. Computer-aided system for detecting runway incursions

    NASA Astrophysics Data System (ADS)

    Sridhar, Banavar; Chatterji, Gano B.

    1994-07-01

    A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.

  18. Semantic mediation in the national geologic map database (US)

    USGS Publications Warehouse

    Percy, D.; Richard, S.; Soller, D.

    2008-01-01

    Controlled language is the primary challenge in merging heterogeneous databases of geologic information. Each agency or organization produces databases with different schema, and different terminology for describing the objects within. In order to make some progress toward merging these databases using current technology, we have developed software and a workflow that allows for the "manual semantic mediation" of these geologic map databases. Enthusiastic support from many state agencies (stakeholders and data stewards) has shown that the community supports this approach. Future implementations will move toward a more Artificial Intelligence-based approach, using expert-systems or knowledge-bases to process data based on the training sets we have developed manually.

  19. BlackOPs: increasing confidence in variant detection through mappability filtering.

    PubMed

    Cabanski, Christopher R; Wilkerson, Matthew D; Soloway, Matthew; Parker, Joel S; Liu, Jinze; Prins, Jan F; Marron, J S; Perou, Charles M; Hayes, D Neil

    2013-10-01

    Identifying variants using high-throughput sequencing data is currently a challenge because true biological variants can be indistinguishable from technical artifacts. One source of technical artifact results from incorrectly aligning experimentally observed sequences to their true genomic origin ('mismapping') and inferring differences in mismapped sequences to be true variants. We developed BlackOPs, an open-source tool that simulates experimental RNA-seq and DNA whole exome sequences derived from the reference genome, aligns these sequences by custom parameters, detects variants and outputs a blacklist of positions and alleles caused by mismapping. Blacklists contain thousands of artifact variants that are indistinguishable from true variants and, for a given sample, are expected to be almost completely false positives. We show that these blacklist positions are specific to the alignment algorithm and read length used, and BlackOPs allows users to generate a blacklist specific to their experimental setup. We queried the dbSNP and COSMIC variant databases and found numerous variants indistinguishable from mapping errors. We demonstrate how filtering against blacklist positions reduces the number of potential false variants using an RNA-seq glioblastoma cell line data set. In summary, accounting for mapping-caused variants tuned to experimental setups reduces false positives and, therefore, improves genome characterization by high-throughput sequencing.
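
    Applying a blacklist is conceptually a set-membership filter on (chromosome, position, alternate allele). The sketch below illustrates only that filtering step; the record layout and field names are assumptions for illustration, not the BlackOPs interface.

    ```python
    def filter_calls(calls, blacklist):
        """Drop variant calls whose (chrom, pos, alt) matches a mismapping artifact.

        calls is a list of dicts with 'chrom', 'pos' and 'alt' keys (hypothetical
        layout); blacklist is a set of (chrom, pos, alt) tuples, in practice
        parsed from the blacklist produced for the matching aligner/read length.
        """
        return [c for c in calls if (c["chrom"], c["pos"], c["alt"]) not in blacklist]

    calls = [{"chrom": "chr7", "pos": 55249071, "alt": "T"},
             {"chrom": "chr2", "pos": 1234567, "alt": "A"}]
    blacklist = {("chr2", 1234567, "A")}           # artifact position/allele
    print(filter_calls(calls, blacklist))          # keeps only the chr7 call
    ```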

  20. Database and Map of Quaternary Faults and Folds in Peru and its Offshore Region

    USGS Publications Warehouse

    Machare, Jose; Fenton, Clark H.; Machette, Michael N.; Lavenu, Alain; Costa, Carlos; Dart, Richard L.

    2003-01-01

    This publication consists of a main map of Quaternary faults and folds of Peru, a table of Quaternary fault data, a regional inset map showing relative plate motion, and a second inset map of an enlarged area of interest in southern Peru. These maps and data compilation show evidence for activity of Quaternary faults and folds in Peru and its offshore regions of the Pacific Ocean. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. These data are accompanied by text databases that describe these features and document current information on their activity in the Quaternary.

  1. An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide

    Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. Lastly, this database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.

  2. An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system

    DOE PAGES

    AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide

    2015-11-19

    Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. Lastly, this database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.
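
    The quantitative binding summaries in the database are position weight matrices. As a reminder of the standard construction (counts plus pseudocounts, log-odds against a background frequency), here is a minimal sketch; the database's exact PWM convention may differ from this textbook version.

    ```python
    import math

    def position_weight_matrix(sites, pseudocount=1.0, background=0.25):
        """Log-odds position weight matrix from aligned binding-site sequences."""
        bases = "ACGT"
        pwm = []
        for i in range(len(sites[0])):
            counts = {b: pseudocount for b in bases}
            for s in sites:
                counts[s[i]] += 1                      # count each base at column i
            total = sum(counts.values())
            pwm.append({b: math.log2((counts[b] / total) / background) for b in bases})
        return pwm

    def score(pwm, seq):
        """Additive PWM score of a candidate site of matching length."""
        return sum(col[base] for col, base in zip(pwm, seq))

    sites = ["TTGACA", "TTGATA", "TTTACA", "TTGACA"]   # toy aligned binding sites
    pwm = position_weight_matrix(sites)
    print(round(score(pwm, "TTGACA"), 2))              # score of the consensus site
    ```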

  3. Updating road databases from shape-files using aerial images

    NASA Astrophysics Data System (ADS)

    Häufel, Gisela; Bulatov, Dimitri; Pohl, Melanie

    2015-10-01

    Road databases are an important part of geodata infrastructure. Knowledge of their characteristics and course is essential for urban planning, navigation, and evacuation tasks. Starting from OpenStreetMap (OSM) shape-file data for street networks, we introduce an algorithm to enrich these road maps with new maps derived from other airborne sensor technology. In our case, these are results of our context-based urban terrain reconstruction process. We aim to enhance the use of road databases by computing additional junctions, narrow passages, and other items that may emerge due to changes in the terrain. This is relevant for various military and civil applications.

  4. The U.S. Geological Survey mapping and cartographic database activities, 2006-2010

    USGS Publications Warehouse

    Craun, Kari J.; Donnelly, John P.; Allord, Gregory J.

    2011-01-01

    The U.S. Geological Survey (USGS) began systematic topographic mapping of the United States in the 1880s, beginning with scales of 1:250,000 and 1:125,000 in support of geological mapping. Responding to the need for higher resolution and more detail, the 1:62,500-scale, 15-minute, topographic map series was begun early in the 20th century. Finally, in the 1950s the USGS adopted the 1:24,000-scale, 7.5-minute topographic map series to portray even more detail, completing the coverage of the conterminous 48 states of the United States with this series in 1992. In 2001, the USGS developed the vision and concept of The National Map, a topographic database for the 21st century and the source for a new generation of topographic maps (http://nationalmap.gov/). In 2008, the initial production of those maps began with a 1:24,000-scale digital product. In a separate, but related project, the USGS began scanning the existing inventory of historical topographic maps at all scales to accompany the new topographic maps. The USGS had also developed a digital database of The National Atlas of the United States. The digital version of the Atlas is now available on the Web and supports a mapping engine for small-scale maps of the United States and North America. These three efforts define the topographic mapping activities of the USGS during the last few years and are discussed below.

  5. Ridge 2000 Data Management System

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.

    2005-12-01

    Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information with the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently-implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system. Inter-operability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of the metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.

  6. Digital map databases in support of avionic display systems

    NASA Astrophysics Data System (ADS)

    Trenchard, Michael E.; Lohrenz, Maura C.; Rosche, Henry, III; Wischow, Perry B.

    1991-08-01

    The emergence of computerized mission planning systems (MPS) and airborne digital moving map systems (DMS) has necessitated the development of a global database of raster aeronautical chart data specifically designed for input to these systems. The Naval Oceanographic and Atmospheric Research Laboratory's (NOARL) Map Data Formatting Facility (MDFF) is presently dedicated to supporting these avionic display systems with the development of the Compressed Aeronautical Chart (CAC) database on Compact Disk Read Only Memory (CDROM) optical discs. The MDFF is also developing a series of aircraft-specific Write-Once Read Many (WORM) optical discs. NOARL has initiated a comprehensive research program aimed at improving the pilots' moving map displays; current research efforts include the development of an alternate image compression technique and generation of a standard set of color palettes. The CAC database will provide digital aeronautical chart data in six different scales. CAC is derived from the Defense Mapping Agency's (DMA) Equal Arc-second (ARC) Digitized Raster Graphics (ADRG), a series of scanned aeronautical charts. NOARL processes ADRG to tailor the chart image resolution to that of the DMS display while reducing storage requirements through image compression techniques. CAC is being distributed by DMA as a library of CDROMs.

  7. Database and online map service on unstable rock slopes in Norway - From data perpetuation to public information

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Nordahl, Bobo; Bunkholt, Halvor; Nicolaisen, Magnus; Jarna, Alexandra; Iversen, Sverre; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.

    2015-11-01

    The unstable rock slope database is developed and maintained by the Geological Survey of Norway as part of the systematic mapping of unstable rock slopes in Norway. This mapping aims to detect catastrophic rock slope failures before they occur. More than 250 unstable slopes with post-glacial deformation have been detected to date. The main aims of the unstable rock slope database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to support data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slopes, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, as well as hazard and risk classification. Feature classes and tables linked to the main feature class include different scenarios of an unstable rock slope, field observation points, sampling points for dating, displacement measurement stations, lineaments, unstable areas, run-out areas, areas affected by secondary effects, along with tables for hazard and risk classification and URL links to further documentation and references. The database on unstable rock slopes in Norway will be publicly consultable through an online map service. Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site. Areas of possible rock avalanche run-out and their secondary effects displayed in the online map service, along with hazard and risk assessments, will become important tools for land-use planning. The present database will further evolve in the coming years as the systematic mapping progresses and as available techniques and tools evolve.

  8. The Iranian National Geodata Revision Strategy and Realization Based on Geodatabase

    NASA Astrophysics Data System (ADS)

    Haeri, M.; Fasihi, A.; Ayazi, S. M.

    2012-07-01

    In recent years, the use of spatial databases for storing and managing spatial data has become a hot topic in the field of GIS. Accordingly, the National Cartographic Center of Iran (NCC) periodically produces spatial data that is usually stored in databases. One of NCC's major projects was designing the National Topographic Database (NTDB). NCC decided to create the National Topographic Database of the entire country based on 1:25,000 coverage maps. The standard of NTDB was published in 1994 and its database was created at the same time. In NTDB, geometric data was stored in MicroStation design format (DGN), in which each feature has a link to its attribute data (stored in a Microsoft Access file). NTDB files were also produced sheet-wise and stored in a file-based style. Besides map compilation, revision of existing maps has already started. Key problems for NCC are the revision strategy, the file-based storage of NTDB, and operator challenges (NCC operators mostly prefer to edit and revise geometric data in CAD environments). A GeoDatabase solution for national geodata, based on NTDB map files and operators' revision preferences, is introduced and released herein. The proposed solution extends the traditional methods to provide a seamless spatial database that can be revised in CAD and GIS environments simultaneously. The proposed system is a common data framework that creates a central repository for spatial data storage and management.

  9. Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)

    EPA Science Inventory

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...

  10. APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES

    EPA Science Inventory

    An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...

  11. Automated Database Mediation Using Ontological Metadata Mappings

    PubMed Central

    Marenco, Luis; Wang, Rixin; Nadkarni, Prakash

    2009-01-01

    Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801
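
    As a toy illustration of the granularity problem described above (this is not OntoMediator's actual rule language, which the abstract does not specify), a mapping rule can expand a coarse global concept into the finer-grained local terms that each federated database uses:

    ```python
    # Illustrative sketch only: expand a coarse global concept into the local terms
    # used by each federated database, so one query can address both granularities.
    # Database names, concepts, and terms are hypothetical.
    rules = {
        "neuron": {
            "db_a": ["pyramidal_cell", "purkinje_cell"],  # db_a precoordinates subtypes
            "db_b": ["neuron"],                           # db_b stores the coarse term
        }
    }

    def expand_query(concept, database):
        """Return the local terms to query in `database` for a global `concept`."""
        return rules.get(concept, {}).get(database, [concept])

    for db in ("db_a", "db_b"):
        print(db, "->", expand_query("neuron", db))
    ```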

  12. Position-sensitive radiation monitoring (surface contamination monitor). Innovative technology summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1999-06-01

    The Shonka Research Associates, Inc. Position-Sensitive Radiation Monitor both detects surface radiation and automatically prepares an electronic survey map/report of the surveyed area. The electronically recorded map can be downloaded to a personal computer for review, and a map/report can be generated for inclusion in work packages. Switching from beta-gamma detection to alpha detection is relatively simple and entails moving a switch position to alpha and adjusting the voltage level to an alpha detection level. No field calibration is required when switching from beta-gamma to alpha detection. The system can be used for free-release surveys because it meets the federal detection level sensitivity limits required for surface survey instrumentation. This technology is superior to traditionally used floor contamination monitor (FCM) and hand-held survey instrumentation because it can precisely register locations of radioactivity and accurately correlate contamination levels to specific locations. Additionally, it can collect and store continuous radiological data in database format, which can be used to produce real-time imagery as well as automated graphics of survey data. Its flexible design can accommodate a variety of detectors. The cost of the innovative technology is 13% to 57% lower than traditional methods. This technology is suited for radiological surveys of flat surfaces at US Department of Energy (DOE) nuclear facility decontamination and decommissioning (D and D) sites or similar public or commercial sites.

  13. Kin-Driver: a database of driver mutations in protein kinases.

    PubMed

    Simonetti, Franco L; Tornador, Cristian; Nabau-Moretó, Nuria; Molina-Vila, Miguel A; Marino-Buslje, Cristina

    2014-01-01

    Somatic mutations in protein kinases (PKs) are frequent driver events in many human tumors, while germ-line mutations are associated with hereditary diseases. Here we present Kin-driver, the first database that compiles driver mutations in PKs with experimental evidence demonstrating their functional role. Kin-driver is a manual expert-curated database that pays special attention to activating mutations (AMs) and can serve as a validation set to develop new generation tools focused on the prediction of gain-of-function driver mutations. It also offers an easy and intuitive environment to facilitate the visualization and analysis of mutations in PKs. Because all mutations are mapped onto a multiple sequence alignment, analogue positions between kinases can be identified and tentative new mutations can be proposed for studying by transferring annotation. Finally, our database can also be of use to clinical and translational laboratories, helping them to identify uncommon AMs that can correlate with response to new antitumor drugs. The website was developed using PHP and JavaScript, which are supported by all major browsers; the database was built using MySQL server. Kin-driver is available at: http://kin-driver.leloir.org.ar/ © The Author(s) 2014. Published by Oxford University Press.

  14. Digital Geologic Map of the Rosalia 1:100,000 Quadrangle, Washington and Idaho: A Digital Database for the 1990 S.Z. Waggoner Map

    USGS Publications Warehouse

    Derkey, Pamela D.; Johnson, Bruce R.; Lackaff, Beatrice B.; Derkey, Robert E.

    1998-01-01

    The geologic map of the Rosalia 1:100,000-scale quadrangle was compiled in 1990 by S.Z. Waggoner of the Washington state Division of Geology and Earth Resources. This data was entered into a geographic information system (GIS) as part of a larger effort to create regional digital geology for the Pacific Northwest. The intent was to provide a digital geospatial database for a previously published black and white paper geologic map. This database can be queried in many ways to produce a variety of geologic maps. Digital base map data files are not included: they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:100,000 (e.g., 1:62,500 or 1:24,000) as it has been somewhat generalized to fit the 1:100,000 scale map. The map area is located in eastern Washington and extends across the state border into western Idaho. This open-file report describes the methods used to convert the geologic map data into a digital format, documents the file structures, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. We wish to thank J. Eric Schuster of the Washington Division of Geology and Earth Resources for providing the original stable-base mylar and the funding for it to be scanned. We also thank Dick Blank and Barry Moring of the U.S. Geological Survey for reviewing the manuscript and digital files, respectively.

  15. Database for volcanic processes and geology of Augustine Volcano, Alaska

    USGS Publications Warehouse

    McIntire, Jacqueline; Ramsey, David W.; Thoms, Evan; Waitt, Richard B.; Beget, James E.

    2012-01-01

    This digital release contains information used to produce the geologic map published as Plate 1 in U.S. Geological Survey Professional Paper 1762 (Waitt and Begét, 2009). The main component of this digital release is a geologic map database prepared using geographic information systems (GIS) applications. This release also contains links to files to view or print the map plate, accompanying measured sections, and main report text from Professional Paper 1762. It should be noted that Augustine Volcano erupted in 2006, after the completion of the geologic mapping shown in Professional Paper 1762 and presented in this database. Information on the 2006 eruption can be found in U.S. Geological Survey Professional Paper 1769. For the most up to date information on the status of Alaska volcanoes, please refer to the U.S. Geological Survey Volcano Hazards Program website.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.

    Accurate identification of peptides is a current challenge in mass spectrometry (MS)-based proteomics. The standard approach uses a search routine to compare tandem mass spectra to a database of peptides associated with the target organism. These database search routines yield multiple metrics associated with the quality of the mapping of the experimental spectrum to the theoretical spectrum of a peptide. The structure of these results makes separating correct from false identifications difficult and has created a false identification problem. Statistical confidence scores are an approach to combating this false-positive problem that has led to significant improvements in peptide identification. We have shown that machine learning, specifically the support vector machine (SVM), is an effective approach to separating true peptide identifications from false ones. The SVM-based peptide statistical scoring method transforms a peptide into a vector representation based on database search metrics to train and validate the SVM. In practice, following the database search routine, a peptide is converted to its vector representation, and the SVM generates a single statistical score that is then used to classify its presence or absence in the sample.
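
    A minimal sketch of the rescoring idea follows, using scikit-learn's SVC on placeholder search metrics; the feature names, values, and kernel choice are illustrative, not the actual method's configuration.

    ```python
    # Minimal sketch of SVM-based rescoring of peptide identifications.
    # Feature vectors below are placeholders, not real database-search metrics.
    import numpy as np
    from sklearn.svm import SVC

    # Each row: hypothetical [search_score, delta_score, mass_error] for one match.
    X_train = np.array([
        [3.1, 0.40, 0.01],
        [1.2, 0.05, 0.30],
        [2.8, 0.35, 0.02],
        [0.9, 0.02, 0.45],
    ])
    y_train = np.array([1, 0, 1, 0])   # 1 = correct identification, 0 = false

    clf = SVC(kernel="rbf").fit(X_train, y_train)

    X_new = np.array([[2.5, 0.30, 0.05]])
    print("SVM score:", clf.decision_function(X_new)[0])   # single statistical score
    print("classified as correct:", bool(clf.predict(X_new)[0]))
    ```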

  17. Geologic Surface Effects of Underground Nuclear Testing, Buckboard Mesa, Climax Stock, Dome Mountain, Frenchman Flat, Rainier/Aqueduct Mesa, and Shoshone Mountain, Nevada Test Site, Nevada

    USGS Publications Warehouse

    Grasso, Dennis N.

    2003-01-01

    Surface effects maps were produced for 72 of 89 underground detonations conducted at the Frenchman Flat, Rainier Mesa and Aqueduct Mesa, Climax Stock, Shoshone Mountain, Buckboard Mesa, and Dome Mountain testing areas of the Nevada Test Site between August 10, 1957 (Saturn detonation, Area 12) and September 18, 1992 (Hunters Trophy detonation, Area 12). The "Other Areas" Surface Effects Map Database, which was used to construct the maps shown in this report, contains digital reproductions of these original maps. The database is provided in both ArcGIS (v. 8.2) geodatabase format and ArcView (v. 3.2) shapefile format. This database contains sinks, cracks, faults, and other surface effects having a combined (cumulative) length of 136.38 km (84.74 mi). In GIS digital format, the user can view all surface effects maps simultaneously, select and view the surface effects of one or more sites of interest, or view specific surface effects by area or site. Three map layers comprise the database. They are: (1) the surface effects maps layer (oase_n27f), (2) the bar symbols layer (oase_bar_n27f), and (3) the ball symbols layer (oase_ball_n27f). Additionally, an annotation layer, named 'Ball_and_Bar_Labels,' and a polygon features layer, named 'Area12_features_poly_n27f,' are contained in the geodatabase version of the database. The annotation layer automatically labels all 295 ball-and-bar symbols shown on these maps. The polygon features layer displays areas of ground disturbances, such as rock spall and disturbed ground caused by the detonations. Shapefile versions of the polygon features layer in Nevada State Plane and Universal Transverse Mercator projections, named 'area12_features_poly_n27f.shp' and 'area12_features_poly_u83m.shp,' are also provided in the archive.

  18. Geologic Map of the Mount Trumbull 30' X 60' Quadrangle, Mohave and Coconino Counties, Northwestern Arizona

    USGS Publications Warehouse

    Billingsley, George H.; Wellmeyer, Jessica L.

    2003-01-01

    The geologic map of the Mount Trumbull 30' x 60' quadrangle is a cooperative product of the U.S. Geological Survey, the National Park Service, and the Bureau of Land Management that provides geologic map coverage and regional geologic information for visitor services and resource management of Grand Canyon National Park, Lake Mead Recreational Area, and Grand Canyon Parashant National Monument, Arizona. This map is a compilation of previous and new geologic mapping that encompasses the Mount Trumbull 30' x 60' quadrangle of Arizona. This digital database, a compilation of previous and new geologic mapping, contains geologic data used to produce the 1:100,000-scale Geologic Map of the Mount Trumbull 30' x 60' Quadrangle, Mohave and Coconino Counties, Northwestern Arizona. The geologic features that were mapped as part of this project include: geologic contacts and faults, bedrock and surficial geologic units, structural data, fold axes, karst features, mines, and volcanic features. This map was produced using 1:24,000-scale 1976 infrared aerial photographs followed by extensive field checking. Volcanic rocks were mapped as separate units when identified on aerial photographs as mappable and distinctly separate units associated with one or more pyroclastic cones and flows. Many of the Quaternary alluvial deposits that have similar lithology but different geomorphic characteristics were mapped almost entirely by photogeologic methods. Stratigraphic position and amount of erosional degradation were used to determine relative ages of alluvial deposits having similar lithologies. Each map unit and structure was investigated in detail in the field to ensure accuracy of description. Punch-registered mylar sheets were scanned at the Flagstaff Field Center using an Optronics 5040 raster scanner at a resolution of 50 microns (508 dpi). The scans were output in .rle format, converted to .rlc, and then converted to ARC/INFO grids. A tic file was created in geographic coordinates and projected into the base map projection (Polyconic) using a central meridian of -113.500. The tic file was used to transform the grid into Universal Transverse Mercator projection. The linework was vectorized using gridline. Scanned lines were edited interactively in ArcEdit. Polygons were attributed in ArcEdit and all artifacts and scanning errors visible at 1:100,000 were removed. Point data were digitized onscreen. Due to the discovery of digital and geologic errors on the original files, the ARC/INFO coverages were converted to a personal geodatabase and corrected in ArcMap. The feature classes which define the geologic units, lines and polygons, are topologically related and maintained in the geodatabase by a set of validation rules. The internal database structure and feature attributes were then modified to match other geologic map databases being created for the Grand Canyon region. Faults were edited with the downthrown block, if known, on the 'right side' of the line. The 'right' and 'left' sides of a line are determined from 'starting' at the line's 'from node' and moving to the line's end or 'to node'.

  19. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    PubMed

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.
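
    The kind of join this enables can be sketched as follows; the sketch uses Python's built-in sqlite3 purely to stay self-contained (TabSQL itself is MySQL-based), and the table and column names are hypothetical.

    ```python
    # Illustrative sketch: query user data together with an imported annotation table.
    # sqlite3 is used only to keep the example self-contained; TabSQL uses MySQL.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE user_results (gene_id TEXT, fold_change REAL);
        CREATE TABLE go_annotation (gene_id TEXT, go_term TEXT);
        INSERT INTO user_results VALUES ('ENSG000001', 2.4), ('ENSG000002', -1.7);
        INSERT INTO go_annotation VALUES ('ENSG000001', 'GO:0006915'),
                                         ('ENSG000002', 'GO:0008150');
    """)

    rows = con.execute("""
        SELECT u.gene_id, u.fold_change, a.go_term
        FROM user_results AS u
        JOIN go_annotation AS a ON a.gene_id = u.gene_id
        WHERE u.fold_change > 2.0
    """).fetchall()
    print(rows)   # [('ENSG000001', 2.4, 'GO:0006915')]
    ```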

  20. TabSQL: a MySQL tool to facilitate mapping user data to public databases

    PubMed Central

    2010-01-01

    Background With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. Results We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. Conclusions TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data. PMID:20573251

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bower, J.C.; Burford, M.J.; Downing, T.R.

    The Integrated Baseline System (IBS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Nuclear and Chemical Agency (USANCA). The IBS Data Management Guide provides the background, as well as the operations and procedures needed to generate and maintain a site-specific map database. Data and system managers use this guide to manage the data files and database that support the administrative, user-environment, database management, and operational capabilities of the IBS. This document provides a description of the data files and structures necessary for running the IBS software and using the site map database.

  2. Map showing geologic terranes of the Hailey 1 degree x 2 degrees quadrangle and the western part of the Idaho Falls 1 degree x 2 degrees quadrangle, south-central Idaho

    USGS Publications Warehouse

    Worl, R.G.; Johnson, K.M.

    1995-01-01

    The paper version of Map Showing Geologic Terranes of the Hailey 1x2 Quadrangle and the western part of the Idaho Falls 1x2 Quadrangle, south-central Idaho was compiled by Ron Worl and Kate Johnson in 1995. The plate was compiled on a 1:250,000-scale topographic base map. TechniGraphic System, Inc., of Fort Collins, Colorado, digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a geographic information system database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.

  3. Preliminary investigation of submerged aquatic vegetation mapping using hyperspectral remote sensing.

    PubMed

    William, David J; Rybicki, Nancy B; Lombana, Alfonso V; O'Brien, Tim M; Gomez, Richard B

    2003-01-01

    The use of airborne hyperspectral remote sensing imagery for automated mapping of submerged aquatic vegetation (SAV) in the tidal Potomac River was investigated for near to real-time resource assessment and monitoring. Airborne hyperspectral imagery and field spectrometer measurements were obtained in October of 2000. A spectral library database containing selected ground-based and airborne sensor spectra was developed for use in image processing. The spectral library is used to automate the processing of hyperspectral imagery for potential real-time material identification and mapping. Field based spectra were compared to the airborne imagery using the database to identify and map two species of SAV (Myriophyllum spicatum and Vallisneria americana). Overall accuracy of the vegetation maps derived from hyperspectral imagery was determined by comparison to a product that combined aerial photography and field based sampling at the end of the SAV growing season. The algorithms and databases developed in this study will be useful with the current and forthcoming space-based hyperspectral remote sensing systems.

  4. Bedrock geologic map of the Grafton quadrangle, Worcester County, Massachusetts

    USGS Publications Warehouse

    Walsh, Gregory J.; Aleinikoff, John N.; Dorais, Michael J.

    2011-01-01

    The bedrock geology of the 7.5-minute Grafton, Massachusetts, quadrangle consists of deformed Neoproterozoic to early Paleozoic crystalline metamorphic and intrusive igneous rocks. Neoproterozoic intrusive, metasedimentary, and metavolcanic rocks crop out in the Avalon zone, and Cambrian to Silurian intrusive, metasedimentary, and metavolcanic rocks crop out in the Nashoba zone. Rocks of the Avalon and Nashoba zones, or terranes, are separated by the Bloody Bluff fault. The bedrock geology was mapped to study the tectonic history of the area and to provide a framework for ongoing hydrogeologic characterization of the fractured bedrock of Massachusetts. This report presents mapping by G.J. Walsh, geochronology by J.N. Aleinikoff, geochemistry by M.J. Dorais, and consists of a map, text pamphlet, and GIS database. The map and text pamphlet are available in paper format or as downloadable files. The GIS database is available for download. The database includes contacts of bedrock geologic units, faults, outcrops, structural geologic information, and photographs.

  5. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    PubMed

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we have implemented this approach in an R package called MareyMap, which includes many functionalities useful to get reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler user-friendly web service version of MareyMap, called MareyMap Online, which allows a user to get recombination rates from her/his own data or from a publicly available database that we offer in a few clicks. When the analysis is done, the user is asked whether her/his curated data can be placed in the database and shared with other users, which we hope will make meta-analysis on recombination rates including many species easy in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
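
    The core of the Marey map approach is treating the local recombination rate (cM/Mb) as the local slope of genetic position against physical position. The sketch below illustrates that idea with made-up marker coordinates; MareyMap itself is an R package and fits smoother interpolation methods rather than raw finite differences.

    ```python
    # Minimal sketch of the Marey map idea: local recombination rate (cM/Mb) as the
    # slope of genetic position versus physical position. Coordinates are made up.
    import numpy as np

    physical_mb = np.array([0.5, 2.0, 5.0, 9.0, 15.0, 22.0])   # physical map (Mb)
    genetic_cm  = np.array([0.0, 2.5, 6.0, 8.0, 15.0, 19.0])   # genetic map (cM)

    # The slope between consecutive markers approximates the local rate.
    rates = np.diff(genetic_cm) / np.diff(physical_mb)
    midpoints = (physical_mb[:-1] + physical_mb[1:]) / 2
    for pos, rate in zip(midpoints, rates):
        print(f"{pos:5.1f} Mb : {rate:4.2f} cM/Mb")
    ```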

  6. Preliminary geologic map of the eastern Willapa Hills, Cowlitz, Lewis, and Wahkiakum Counties, Washington

    USGS Publications Warehouse

    Wells, Ray E.; Sawlan, Michael G.

    2014-01-01

    This digital map database and the PDF derived from the database were created from the analog geologic map: Wells, R.E. (1981), “Geologic map of the eastern Willapa Hills, Cowlitz, Lewis, and Wahkiakum Counties, Washington.” The geodatabase replicates the geologic mapping of the 1981 report with minor exceptions along water boundaries and also along the north and south map boundaries. Slight adjustments to contacts along water boundaries were made to correct differences between the topographic base map used in the 1981 compilation (analog USGS 15-minute series quadrangle maps at 1:62,500 scale) and the base map used for this digital compilation (scanned USGS 7.5-minute series quadrangle maps at 1:24,000 scale). These minor adjustments, however, did not materially alter the geologic map. No new field mapping was performed to create this digital map database, and no attempt was made to fit geologic contacts to the new 1:24,000 topographic base, except as noted above. We corrected typographical errors, formatting errors, and attribution errors (for example, the name change of Goble Volcanics to Grays River Volcanics following current State of Washington usage; Walsh and others, 1987). We also updated selected references, substituted published papers for abstracts, and cited published radiometric ages for the volcanic and plutonic rocks. The reader is referred to Magill and others (1982), Wells and Coe (1985), Walsh and others (1987), Moothart (1993), Payne (1998), Kleibacker (2001), McCutcheon (2003), Wells and others (2009), Chan and others (2012), and Wells and others (in press) for subsequent interpretations of the Willapa Hills geology.

  7. Comprehensive annotated STR physical map of the human Y chromosome: Forensic implications.

    PubMed

    Hanson, Erin K; Ballantyne, Jack

    2006-03-01

    A plethora of Y-STR markers from diverse sources have been deposited in public databases and represent potential candidates for incorporation into the next generation of Y-STR multiplexes for forensic use. Here, based upon all of the Y-STR loci that have been deposited in the human genome database (>400), we have sequentially positioned each one along the Y chromosome using the most current human genome sequencing data (NCBI Build 35). The information derived from this work defines the number and relative position of all potentially forensically relevant Y-STR loci, their location within the physical linkage map of the Y chromosome and their relationship to structural genes. We conclude that there exist at present at least 417 separate Y-STR markers available for potential forensic use, although many of these will be found to be unsuitable for other reasons. However, from this data, we were able to identify 28 pairs of duplicated loci that were given separate DYS designations and four pairs of loci with overlapping flanking regions. Removing one locus from each set of duplicates reduced the number of potentially useful loci from 417 to 389. The derived information should be useful for workers who are designing novel Y-STR multiplexes to ensure the presence of non-synonymous loci and, if so desired, to avoid loci that lie within structural genes. It may also be useful for forensic casework practitioners (or molecular anthropologists) to aid in distinguishing between chromosomal rearrangements (such as duplications and deletions) and bona fide DNA admixtures or null alleles caused by primer binding site mutations. We illustrate the practical usefulness of the chromosomal positioning data in the design of eight multiplex systems using 94 Y-STR loci.
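
    One step described above, flagging pairs of loci whose flanking regions overlap, amounts to an interval-overlap check on the chromosomal coordinates. A minimal sketch follows; the locus names and coordinates are hypothetical, not actual DYS positions.

    ```python
    # Minimal sketch: flag pairs of loci whose chromosomal intervals overlap.
    # Locus names and coordinates are hypothetical, not actual DYS positions.
    loci = {
        "DYS_A": (1_000_000, 1_000_400),
        "DYS_B": (1_000_350, 1_000_700),   # overlaps DYS_A
        "DYS_C": (5_200_000, 5_200_300),
    }

    def overlaps(a, b):
        """True if half-open intervals a and b share at least one base."""
        return a[0] < b[1] and b[0] < a[1]

    names = sorted(loci)
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            if overlaps(loci[x], loci[y]):
                print(f"{x} and {y} have overlapping intervals")
    ```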

  8. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically three Perl scripts that query that database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.

  9. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    PubMed

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring, and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and SDBMSs can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach, which is one of the main obstacles to using photogrammetric workstation map products in GIS. These integrated systems also make it possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between feature classes, at the time of feature digitizing. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation, and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) are presented.

  10. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  11. BiKEGG: a COBRA toolbox extension for bridging the BiGG and KEGG databases.

    PubMed

    Jamialahmadi, Oveis; Motamedian, Ehsan; Hashemi-Najafabadi, Sameereh

    2016-10-18

    Development of an interface tool between the Biochemical, Genetic and Genomic (BiGG) and KEGG databases is necessary for simultaneous access to the features of both databases. For this purpose, we present the BiKEGG toolbox, an open source COBRA toolbox extension providing a set of functions to infer the reaction correspondences between the KEGG reaction identifiers and those in the BiGG knowledgebase using a combination of manual verification and computational methods. Inferred reaction correspondences using this approach are supported by evidence from the literature, which provides a higher number of reconciled reactions between these two databases compared to the MetaNetX and MetRxn databases. This set of equivalent reactions is then used to automatically superimpose the predicted fluxes using COBRA methods on classical KEGG pathway maps or to create a customized metabolic map based on the KEGG global metabolic pathway, and to find the corresponding reactions in BiGG based on the genome annotation of an organism in the KEGG database. Customized metabolic maps can be created for a set of pathways of interest, for the whole KEGG global map or exclusively for all pathways for which there exists at least one flux carrying reaction. This flexibility in visualization enables BiKEGG to indicate reaction directionality as well as to visualize the reaction fluxes for different static or dynamic conditions in an animated manner. BiKEGG allows the user to export (1) the output visualized metabolic maps to various standard image formats or save them as a video or animated GIF file, and (2) the equivalent reactions for an organism as an Excel spreadsheet.
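
    BiKEGG itself is a MATLAB extension of the COBRA toolbox; the sketch below only illustrates, in Python, the underlying idea of carrying flux values across a BiGG-to-KEGG reaction correspondence table before painting them onto a pathway map. The identifiers and flux values are illustrative placeholders.

    ```python
    # Illustrative sketch (not the MATLAB/COBRA implementation): map BiGG reaction
    # fluxes onto KEGG reaction identifiers via a correspondence table.
    # All identifiers and flux values below are placeholders.
    bigg_to_kegg = {
        "PGI": "R00771",
        "PFK": "R00756",
        "FBA": "R01068",
    }

    bigg_fluxes = {"PGI": 7.5, "PFK": 7.5, "FBA": 6.9}   # e.g. from an FBA solution

    kegg_fluxes = {
        bigg_to_kegg[rxn]: flux
        for rxn, flux in bigg_fluxes.items()
        if rxn in bigg_to_kegg
    }
    print(kegg_fluxes)   # values that could be overlaid on a KEGG pathway map
    ```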

  12. BAID: The Barrow Area Information Database - an Interactive Web Mapping Portal and Cyberinfrastructure for Science and Land Management in the Vicinity of Barrow on the North Slope of Alaska.

    NASA Astrophysics Data System (ADS)

    Escarzaga, S. M.; Cody, R. P.; Gaylord, A. G.; Kassin, A.; Barba, M.; Aiken, Q.; Nelson, L.; Mazza Ramsay, F. D.; Tweedie, C. E.

    2016-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience for BAID is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 16,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Recent advances include provision of differential global positioning (dGPS) system and high resolution aerial imagery support to visiting scientists; analysis and multitemporal mapping of over 120 km of coastline for erosion monitoring; maintenance of a wireless micrometeorological sensor network; links to Barrow area datasets housed at national data archives; and substantial upgrades to the BAID website. Web mapping applications that have launched to the public include: an Imagery Time Viewer that allows users to compare imagery of the Barrow area between 1949 and the present; a Coastal Erosion Viewer that allows users to view long-term (1955-2015) and recent (2013-2015) rates of erosion for the Barrow area; and a Community Planning Tool that allows users to view and print dynamic reports based on an array of basemaps including a new 0.5m resolution wetlands map designed to enhance decision making for development and land management.

  13. A search map for organic additives and solvents applicable in high-voltage rechargeable batteries.

    PubMed

    Park, Min Sik; Park, Insun; Kang, Yoon-Sok; Im, Dongmin; Doo, Seok-Gwang

    2016-09-29

    Chemical databases store information such as molecular formulas, chemical structures, and the physical and chemical properties of compounds. Although massive databases of organic compounds exist, the search for target materials is constrained by a lack of physical and chemical properties necessary for specific applications. With increasing interest in the development of energy storage systems such as high-voltage rechargeable batteries, it is critical to find new electrolytes efficiently. Here we build a search map to screen organic additives and solvents with novel core and functional groups, and thus establish a database of electrolytes to identify the most promising electrolyte for high-voltage rechargeable batteries. This search map is generated from MAssive Molecular Map BUilder (MAMMBU) by combining a high-throughput quantum chemical simulation with an artificial neural network algorithm. MAMMBU is designed for predicting the oxidation and reduction potentials of organic compounds existing in the massive organic compound database, PubChem. We develop a search map composed of ∼1 000 000 redox potentials and elucidate the quantitative relationship between the redox potentials and functional groups. Finally, we screen a quinoxaline compound as an anode additive, apply it to electrolytes, and improve the capacity retention from 64.3% to 80.8% near 200 cycles for a lithium-ion battery in experiments.
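
    The regression step, predicting a redox potential from molecular descriptors with a neural network, can be sketched as below on synthetic data; the descriptors, target values, and network size are placeholders and not the actual MAMMBU setup.

    ```python
    # Minimal sketch: regress a redox potential on molecular descriptors with a
    # small neural network. Descriptors and potentials below are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 4))   # 4 hypothetical descriptors per molecule
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.standard_normal(200)   # synthetic potentials (V)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:150], y[:150])
    print("held-out R^2:", round(model.score(X[150:], y[150:]), 2))
    ```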

  14. Latent fingerprint matching.

    PubMed

    Jain, Anil K; Feng, Jianjiang

    2011-01-01

    Latent fingerprint identification is of critical importance to law enforcement agencies in identifying suspects: Latent fingerprints are inadvertent impressions left by fingers on surfaces of objects. While tremendous progress has been made in plain and rolled fingerprint matching, latent fingerprint matching continues to be a difficult problem. Poor quality of ridge impressions, small finger area, and large nonlinear distortion are the main difficulties in latent fingerprint matching compared to plain or rolled fingerprint matching. We propose a system for matching latent fingerprints found at crime scenes to rolled fingerprints enrolled in law enforcement databases. In addition to minutiae, we also use extended features, including singularity, ridge quality map, ridge flow map, ridge wavelength map, and skeleton. We tested our system by matching 258 latents in the NIST SD27 database against a background database of 29,257 rolled fingerprints obtained by combining the NIST SD4, SD14, and SD27 databases. The minutiae-based baseline rank-1 identification rate of 34.9 percent was improved to 74 percent when extended features were used. In order to evaluate the relative importance of each extended feature, these features were incrementally used in the order of their cost in marking by latent experts. The experimental results indicate that singularity, ridge quality map, and ridge flow map are the most effective features in improving the matching accuracy.
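
    The abstract does not spell out how the per-feature match scores are combined; a generic weighted score-level fusion, with hypothetical weights and scores, is sketched below purely as an illustration of the idea of adding extended features to a minutiae score.

    ```python
    # Generic weighted score-level fusion of per-feature fingerprint match scores.
    # Weights and scores are hypothetical; this is not the paper's actual scheme.
    def fuse(scores, weights):
        """Combine per-feature match scores (each in 0..1) into one fused score."""
        total_w = sum(weights[k] for k in scores)
        return sum(weights[k] * scores[k] for k in scores) / total_w

    weights = {"minutiae": 0.5, "ridge_quality": 0.2, "ridge_flow": 0.2, "skeleton": 0.1}
    candidate = {"minutiae": 0.42, "ridge_quality": 0.71, "ridge_flow": 0.66, "skeleton": 0.35}
    print("fused score:", round(fuse(candidate, weights), 3))
    ```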

  15. Vegetation database for land-cover mapping, Clark and Lincoln Counties, Nevada

    USGS Publications Warehouse

    Charlet, David A.; Damar, Nancy A.; Leary, Patrick J.

    2014-01-01

    Floristic and other vegetation data were collected at 3,175 sample sites to support land-cover mapping projects in Clark and Lincoln Counties, Nevada, from 2007 to 2013. Data were collected at sample sites that were selected to fulfill mapping priorities by one of two different plot sampling approaches. Samples were described at the stand level and classified into the National Vegetation Classification hierarchy at the alliance level and above. The vegetation database is presented in geospatial and tabular formats.

  16. An integrated genetic and physical map of the autosomal recessive polycystic kidney disease region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lens, X.M.; Onuchic, L.F.; Daoust, M.

    1997-05-01

    Autosomal recessive polycystic kidney disease is one of the most common hereditary renal cystic diseases in children. Genetic studies have recently assigned the only known locus for this disorder, PKHD1, to chromosome 6p21-p12. We have generated a YAC contig that spans approximately 5 cM of this region, defined by the markers D6S1253-D6S295, and have mapped 43 sequence-tagged sites (STS) within this interval. This set includes 20 novel STSs, which define 12 unique positions in the region, and three ESTs. A minimal set of two YACs spans the segment D6S465-D6S466, which contains PKHD1, and estimates of their sizes based on information in public databases suggest that the size of the critical region is <3.1 Mb. Twenty-eight STSs map to this interval, giving an average STS density of <1/150 kb. These resources will be useful for establishing a complete transcription map of the PKHD1 region. 10 refs., 1 fig., 1 tab.

  17. Singapore Genome Variation Project: a haplotype map of three Southeast Asian populations.

    PubMed

    Teo, Yik-Ying; Sim, Xueling; Ong, Rick T H; Tan, Adrian K S; Chen, Jieming; Tantoso, Erwin; Small, Kerrin S; Ku, Chee-Seng; Lee, Edmund J D; Seielstad, Mark; Chia, Kee-Seng

    2009-11-01

    The Singapore Genome Variation Project (SGVP) provides a publicly available resource of 1.6 million single nucleotide polymorphisms (SNPs) genotyped in 268 individuals from the Chinese, Malay, and Indian population groups in Southeast Asia. This online database catalogs information and summaries on genotype and phased haplotype data, including allele frequencies, assessment of linkage disequilibrium (LD), and recombination rates in a format similar to the International HapMap Project. Here, we introduce this resource and describe the analysis of human genomic variation upon agglomerating data from the HapMap and the Human Genome Diversity Project, providing useful insights into the population structure of the three major population groups in Asia. In addition, this resource also surveyed across the genome for variation in regional patterns of LD between the HapMap and SGVP populations, and for signatures of positive natural selection using two well-established metrics: iHS and XP-EHH. The raw and processed genetic data, together with all population genetic summaries, are publicly available for download and browsing through a web browser modeled with the Generic Genome Browser.

  18. Singapore Genome Variation Project: A haplotype map of three Southeast Asian populations

    PubMed Central

    Teo, Yik-Ying; Sim, Xueling; Ong, Rick T.H.; Tan, Adrian K.S.; Chen, Jieming; Tantoso, Erwin; Small, Kerrin S.; Ku, Chee-Seng; Lee, Edmund J.D.; Seielstad, Mark; Chia, Kee-Seng

    2009-01-01

    The Singapore Genome Variation Project (SGVP) provides a publicly available resource of 1.6 million single nucleotide polymorphisms (SNPs) genotyped in 268 individuals from the Chinese, Malay, and Indian population groups in Southeast Asia. This online database catalogs information and summaries on genotype and phased haplotype data, including allele frequencies, assessment of linkage disequilibrium (LD), and recombination rates in a format similar to the International HapMap Project. Here, we introduce this resource and describe the analysis of human genomic variation upon agglomerating data from the HapMap and the Human Genome Diversity Project, providing useful insights into the population structure of the three major population groups in Asia. In addition, this resource also surveyed across the genome for variation in regional patterns of LD between the HapMap and SGVP populations, and for signatures of positive natural selection using two well-established metrics: iHS and XP-EHH. The raw and processed genetic data, together with all population genetic summaries, are publicly available for download and browsing through a web browser modeled with the Generic Genome Browser. PMID:19700652

  19. Geologic map and digital database of the Porcupine Wash 7.5 minute Quadrangle, Riverside County, southern California

    USGS Publications Warehouse

    Powell, Robert E.

    2001-01-01

    This data set maps and describes the geology of the Porcupine Wash 7.5 minute quadrangle, Riverside County, southern California. The quadrangle, situated in Joshua Tree National Park in the eastern Transverse Ranges physiographic and structural province, encompasses parts of the Hexie Mountains, Cottonwood Mountains, northern Eagle Mountains, and south flank of Pinto Basin. It is underlain by a basement terrane comprising Proterozoic metamorphic rocks, Mesozoic plutonic rocks, and Mesozoic and Mesozoic or Cenozoic hypabyssal dikes. The basement terrane is capped by a widespread Tertiary erosion surface preserved in remnants in the Eagle and Cottonwood Mountains and buried beneath Cenozoic deposits in Pinto Basin. Locally, Miocene basalt overlies the erosion surface. A sequence of at least three Quaternary pediments is planed into the north piedmont of the Eagle and Hexie Mountains, each in turn overlain by successively younger residual and alluvial deposits. The Tertiary erosion surface is deformed and broken by north-northwest-trending, high-angle, dip-slip faults and an east-west trending system of high-angle dip- and left-slip faults. East-west trending faults are younger than and perhaps in part coeval with faults of the northwest-trending set. The Porcupine Wash database was created using ARCVIEW and ARC/INFO, which are geographical information system (GIS) software products of Environmental Systems Research Institute (ESRI). The database consists of the following items: (1) a map coverage showing faults and geologic contacts and units, (2) a separate coverage showing dikes, (3) a coverage showing structural data, (4) a scanned topographic base at a scale of 1:24,000, and (5) attribute tables for geologic units (polygons and regions), contacts (arcs), and site-specific data (points). The database, accompanied by a pamphlet file and this metadata file, also includes the following graphic and text products: (1) A portable document file (.pdf) containing a navigable graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map and Database Units (DMU), a Correlation of Map and Database Units (CMU), and a key to point- and line-symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic-file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that describes the database and how to access it. Within the database, geologic contacts, faults, and dikes are represented as lines (arcs), geologic units as polygons and regions, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.

  20. Enrichment of OpenStreetMap Data Completeness with Sidewalk Geometries Using Data Mining Techniques.

    PubMed

    Mobasheri, Amin; Huang, Haosheng; Degrossi, Lívia Castro; Zipf, Alexander

    2018-02-08

    Tailored routing and navigation services utilized by wheelchair users require certain information about sidewalk geometries and their attributes to execute efficiently. Except for some minor regions/cities, such detailed information is not present in current versions of crowdsourced mapping databases, including OpenStreetMap. The CAP4Access European project aimed to use (and enrich) OpenStreetMap to make it fit for the purpose of wheelchair routing. In this respect, this study presents a modified methodology based on data mining techniques for constructing sidewalk geometries using multiple GPS traces collected by wheelchair users during an urban travel experiment. The derived sidewalk geometries can be used to enrich OpenStreetMap to support wheelchair routing. The proposed method was applied to a case study in Heidelberg, Germany. The constructed sidewalk geometries were compared to an official reference dataset ("ground truth dataset"). The case study shows that the constructed sidewalk network overlaps with 96% of the official reference dataset. Furthermore, in terms of positional accuracy, a low Root Mean Square Error (RMSE) value (0.93 m) is achieved. The article presents our discussion on the results as well as the conclusion and future research directions.
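
    The positional-accuracy figure quoted above is an RMSE against the reference dataset. The following sketch shows one generic way to compute such a value with shapely, measuring each constructed vertex against the nearest point on a reference line; the coordinates are hypothetical and the matching procedure is an assumption, not necessarily the paper's.

      from math import sqrt
      from shapely.geometry import LineString, Point

      constructed = LineString([(0.0, 0.1), (10.0, 0.4), (20.0, -0.2)])  # sidewalk derived from GPS traces
      reference = LineString([(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)])     # official reference sidewalk

      def rmse_to_reference(line, ref):
          # Distance from every vertex of the constructed line to the nearest point on the reference
          errors = [Point(xy).distance(ref) for xy in line.coords]
          return sqrt(sum(e * e for e in errors) / len(errors))

      print(f"RMSE: {rmse_to_reference(constructed, reference):.2f} m")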

  1. New generation of integrated geological-geomorphological reconstruction maps in the Rhine-Meuse delta, The Netherlands

    NASA Astrophysics Data System (ADS)

    Pierik, Harm Jan; Cohen, Kim; Stouthamer, Esther

    2016-04-01

    Geological-geomorphological reconstructions are important for integrating diverse types of data and improving understanding of landscape formation processes. This works especially well in densely populated Holocene landscapes, where large quantities of raw data are produced by the geotechnical, archaeological, soil science, and hydrological communities as well as in academic research. The Rhine-Meuse delta, The Netherlands, has a long tradition of integrated digital reconstruction maps and databases. This has contributed to an improved understanding of delta evolution, especially regarding the evolution of the channel belt network. In this contribution, we present a new generation of digital map products for the Holocene Rhine-Meuse delta. Our reconstructions expand existing channel belt network maps with new map layers containing natural levee extent and relative elevation. The maps we present are based on hundreds of thousands of lithological borehole descriptions, >1000 radiocarbon dates, and further integrate LIDAR data, soil maps, and archaeological information. For selected time slices through the Late Holocene, the map products describe the patterns of levee distribution. Additionally, we mapped the palaeo-topography of the levees through the delta, aiming to resolve which parts of the overbank river landscape were relatively low- or high-lying in the past landscape. The resulting palaeogeographical maps are integrative products created for a very data-rich research area. They will allow for delta-wide analysis of changes in the Late Holocene landscape and of the interaction with past habitation.

  2. Possible costs associated with investigating and mitigating geologic hazards in rural areas of western San Mateo County, California with a section on using the USGS website to determine the cost of developing property for residences in rural parts of San Mateo County, California

    USGS Publications Warehouse

    Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.

    2000-01-01

    This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.

  3. Review and critical appraisal of studies mapping from quality of life or clinical measures to EQ-5D: an online database and application of the MAPS statement.

    PubMed

    Dakin, Helen; Abel, Lucy; Burns, Richéal; Yang, Yaling

    2018-02-12

    The Health Economics Research Centre (HERC) Database of Mapping Studies was established in 2013, based on a systematic review of studies developing mapping algorithms predicting EQ-5D. The Mapping onto Preference-based measures reporting Standards (MAPS) statement was published in 2015 to improve reporting of mapping studies. We aimed to update the systematic review and assess the extent to which recently-published studies mapping condition-specific quality of life or clinical measures to the EQ-5D follow the guidelines published in the MAPS Reporting Statement. A published systematic review was updated using the original inclusion criteria to include studies published by December 2016. We included studies reporting novel algorithms mapping from any clinical measure or patient-reported quality of life measure to either the EQ-5D-3L or EQ-5D-5L. Titles and abstracts of all identified studies and the full text of papers published in 2016 were assessed against the MAPS checklist. The systematic review identified 144 mapping studies reporting 190 algorithms mapping from 110 different source instruments to EQ-5D. Of the 17 studies published in 2016, nine (53%) had titles that followed the MAPS statement guidance, although only two (12%) had abstracts that fully addressed all MAPS items. When the full text of these papers was assessed against the complete MAPS checklist, only two studies (12%) were found to fulfil or partly fulfil all criteria. Of the 141 papers (across all years) that included abstracts, the items on the MAPS statement checklist that were fulfilled by the largest number of studies comprised having a structured abstract (95%) and describing target instruments (91%) and source instruments (88%). The number of published mapping studies continues to increase. Our updated database provides a convenient way to identify mapping studies for use in cost-utility analysis. Most recent studies do not fully address all items on the MAPS checklist.

  4. Engineering geological mapping in Wallonia (Belgium) : present state and recent computerized approach

    NASA Astrophysics Data System (ADS)

    Delvoie, S.; Radu, J.-P.; Ruthy, I.; Charlier, R.

    2012-04-01

    An engineering geological map can be defined as a geological map with a generalized representation of all the components of a geological environment that are strongly required for spatial planning, design, construction, and maintenance of civil engineering structures. In Wallonia (Belgium), 24 engineering geological maps were developed between the 1970s and the 1990s at 1/5,000 or 1/10,000 scale, covering some areas of the most industrialized and urbanized cities (Liège, Charleroi and Mons). They were based on soil and subsoil data points (borings, drillings, penetration tests, geophysical tests, outcrops…). Some of the displayed data present the depth (with isoheights) or the thickness (with isopachs) of the different subsoil layers up to about 50 m depth. Information about the geomechanical properties of each subsoil layer, useful for engineers and urban planners, is also synthesized. However, these maps existed only on paper and progressively needed to be updated with new soil and subsoil data. The Public Service of Wallonia and the University of Liège have recently initiated a study to evaluate the feasibility of developing engineering geological mapping with a computerized approach. Numerous and varied data (about soil and subsoil) are stored in a georelational database (the geotechnical database, using Access, Microsoft®). All the data are geographically referenced. The database is linked to a GIS project (using ArcGIS, ESRI®). Together, the database and the GIS project constitute a powerful tool for spatial data management and analysis. This approach involves a methodology using interpolation methods to update the previous maps and to extend the coverage to new areas. The location (x, y, z) of each subsoil layer is then computed from the data points. The geomechanical data of these layers are synthesized in an explanatory booklet joined to the maps.
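
    The computerized approach described above hinges on interpolating layer geometry from scattered borehole points. The sketch below shows one generic way to grid a layer-top elevation with linear interpolation; the borehole coordinates, elevations, and the choice of method are illustrative assumptions rather than the project's actual workflow.

      import numpy as np
      from scipy.interpolate import griddata

      # Hypothetical (x, y) borehole locations and observed layer-top elevations (m)
      points = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 60]], dtype=float)
      elev = np.array([12.0, 10.5, 11.2, 9.8, 10.9])

      # Regular grid covering the mapped area
      gx, gy = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
      grid_elev = griddata(points, elev, (gx, gy), method="linear")
      print(grid_elev.shape)  # a (51, 51) raster ready to load into the GIS project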

  5. Accurate atom-mapping computation for biochemical reactions.

    PubMed

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
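
    The MWED metric itself is formulated as a MILP, which is beyond a short example, but the underlying idea of an atom mapping as a minimum-cost assignment can be sketched with a toy element-constrained matching. The atoms, bond counts, and cost function below are hypothetical simplifications, not the paper's model.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Reactant and product atoms given as (element, number of bonds) -- hypothetical toy reaction
      reactant = [("C", 4), ("C", 3), ("O", 2), ("O", 1)]
      product = [("O", 2), ("C", 4), ("O", 1), ("C", 3)]

      BIG = 1e6  # forbid mapping atoms of different elements
      cost = np.array([[abs(rb - pb) if re == pe else BIG for (pe, pb) in product]
                       for (re, rb) in reactant])

      rows, cols = linear_sum_assignment(cost)  # minimum-cost bijection of reactant to product atoms
      for r, p in zip(rows, cols):
          print(f"reactant atom {r} ({reactant[r][0]}) -> product atom {p} ({product[p][0]})")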

  6. Brassica ASTRA: an integrated database for Brassica genomic research.

    PubMed

    Love, Christopher G; Robinson, Andrew J; Lim, Geraldine A C; Hopkins, Clare J; Batley, Jacqueline; Barker, Gary; Spangenberg, German C; Edwards, David

    2005-01-01

    Brassica ASTRA is a public database for genomic information on Brassica species. The database incorporates expressed sequences with Swiss-Prot and GenBank comparative sequence annotation as well as secondary Gene Ontology (GO) annotation derived from the comparison with Arabidopsis TAIR GO annotations. Simple sequence repeat molecular markers are identified within resident sequences and mapped onto the closely related Arabidopsis genome sequence. Bacterial artificial chromosome (BAC) end sequences derived from the Multinational Brassica Genome Project are also mapped onto the Arabidopsis genome sequence enabling users to identify candidate Brassica BACs corresponding to syntenic regions of Arabidopsis. This information is maintained in a MySQL database with a web interface providing the primary means of interrogation. The database is accessible at http://hornbill.cspp.latrobe.edu.au.

  7. Genome databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  8. ReprDB and panDB: minimalist databases with maximal microbial representation.

    PubMed

    Zhou, Wei; Gay, Nicole; Oh, Julia

    2018-01-18

    Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible with various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small sizes: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.
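
    The iterative alignment algorithm used for panDB is considerably more involved than can be shown here, but the notion of keeping only non-redundant genomic information can be illustrated with a toy exact k-mer filter over a few strain sequences; the sequences, k value, and threshold are assumptions for illustration only.

      def kmers(seq, k=8):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      # Hypothetical strain sequences
      strains = [
          "ACGTACGTGGTTAACCGGTT",
          "ACGTACGTGGTTAACCGGTTAAAT",  # mostly redundant with the first
          "TTTTCCCCGGGGAAAACGCG",      # novel content
      ]

      pan_kmers, pan_segments = set(), []
      for seq in strains:
          novel = kmers(seq) - pan_kmers
          if len(novel) / max(len(kmers(seq)), 1) > 0.5:  # keep sequences contributing mostly new k-mers
              pan_segments.append(seq)
          pan_kmers |= novel
      print(pan_segments)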

  9. KIDFamMap: a database of kinase-inhibitor-disease family maps for kinase inhibitor selectivity and binding mechanisms

    PubMed Central

    Chiu, Yi-Yuan; Lin, Chih-Ta; Huang, Jhang-Wei; Hsu, Kai-Cheng; Tseng, Jen-Hu; You, Syuan-Ren; Yang, Jinn-Moon

    2013-01-01

    Kinases play central roles in signaling pathways and are promising therapeutic targets for many diseases. Designing selective kinase inhibitors is an emergent and challenging task, because kinases share an evolutionarily conserved ATP-binding site. KIDFamMap (http://gemdock.life.nctu.edu.tw/KIDFamMap/) is the first database to explore kinase-inhibitor families (KIFs) and kinase-inhibitor-disease (KID) relationships for kinase inhibitor selectivity and mechanisms. This database includes 1208 KIFs, 962 KIDs, 55 603 kinase-inhibitor interactions (KIIs), 35 788 kinase inhibitors, 399 human protein kinases, 339 diseases and 638 disease allelic variants. Here, a KIF can be defined as follows: (i) the kinases in the KIF have significant sequence similarity, (ii) the inhibitors in the KIF have significant topology similarity and (iii) the KIIs in the KIF have significant interaction similarity. The KIIs within a KIF are often conserved on some consensus KIDFamMap anchors, which represent conserved interactions between the kinase subsites and consensus moieties of their inhibitors. Our experimental results reveal that the members of a KIF often possess similar inhibition profiles. The KIDFamMap anchors can reflect kinase conformation types, kinase functions and kinase inhibitor selectivity. We believe that KIDFamMap provides biological insights into kinase inhibitor selectivity and binding mechanisms. PMID:23193279

  10. A new edition of the Mars 1:5,000,000 map series

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Mcewen, Alfred S.; Wu, Sherman S. C.

    1991-01-01

    A new edition of the Mars 1:5,000,000 scale map series is in preparation. Two sheets will be made for each quadrangle. Sheet one will show shaded relief, contours, and nomenclature. Sheet 2 will be a full-color photomosaic prepared on the Mars digital image model (MDIM) base co-registered with the Mars low-resolution color database. The latter will have an abbreviated graticule (latitude/longitude ticks only) and no other line overprint. The four major databases used to assemble this series are now virtually complete. These are: (1) Viking-revised shaded relief maps at 1:5,000,000 scale; (2) contour maps at 1:2,000,000 scale; (3) the Mars digital image model; and (4) a color image mosaic of Mars. Together, these databases form the most complete planetwide cartographic definition of Mars that can be compiled with existing data. The new edition will supersede the published Mars 1:5,000,000 scale maps, including the original shaded relief and topographic maps made primarily with Mariner 9 data and the Viking-revised shaded relief and controlled photomosaic series. Publication of the new series will begin in late 1991 or early 1992, and it should be completed in two years.

  11. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data

    PubMed Central

    2014-01-01

    Background New technologies for analyzing biological samples, like next generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools (e.g., for mapping sequence reads, calculating transcription factor binding probabilities, estimating regions enriched in epigenetic modifications, or determining single nucleotide polymorphisms) increase this amount of position-specific DNA-related data even further. Hence, requesting data becomes challenging and expensive and is often implemented using specialised hardware. In addition, picking specific data as fast as possible becomes increasingly important in many fields of science. The general problem of handling big data sets was addressed by developing specialized databases like HBase, HyperTable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource-saving requests, and (iii) running on a single standard computer. Results Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it focuses on optimizing related single lookups issued as range requests, which are constantly needed for computations in bioinformatics. To validate the power of DRUMS, we compare it to the widely used MySQL database. The test setting considers two biological data sets. We use standard desktop hardware as the test environment. Conclusions DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10000. Furthermore, it can work with significantly larger data sets. Our work focuses on mid-sized data sets of up to several billion records without requiring cluster technology. Storing position-specific data is a general problem, and the concept we present here is a generalized approach. Hence, it can easily be applied to other fields of bioinformatics. PMID:24495746
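
    DRUMS itself is an on-disk key-value store, but the kind of position-keyed range request it optimizes can be sketched in memory with a sorted key list and binary search; the (chromosome, position) records below are hypothetical and the code is not DRUMS's actual design.

      import bisect

      # Toy records keyed by (chromosome, position), e.g., a per-base score
      records = sorted([
          (("chr1", 100), 0.7),
          (("chr1", 250), 0.2),
          (("chr1", 900), 0.9),
          (("chr2", 50), 0.4),
      ])
      keys = [k for k, _ in records]

      def range_query(chrom, start, end):
          # Return all records with start <= position <= end on the given chromosome
          lo = bisect.bisect_left(keys, (chrom, start))
          hi = bisect.bisect_right(keys, (chrom, end))
          return records[lo:hi]

      print(range_query("chr1", 200, 1000))  # -> the records at positions 250 and 900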

  12. Database for the geologic map of Upper Geyser Basin, Yellowstone National Park, Wyoming

    USGS Publications Warehouse

    Abendini, Atosa A.; Robinson, Joel E.; Muffler, L. J. Patrick; White, D. E.; Beeson, Melvin H.; Truesdell, A. H.

    2015-01-01

    This dataset contains contacts, geologic units, and map boundaries from Miscellaneous Investigations Series Map I-1371, "The Geologic Map of Upper Geyser Basin, Yellowstone National Park, Wyoming". This dataset was constructed to produce a digital geologic map as a basis for ongoing studies of hydrothermal processes.

  13. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    NASA Astrophysics Data System (ADS)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and from journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.
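
    The kind of refinement by taxon, depth, and time described for the mapping portal amounts to simple attribute filtering once a subset of the database has been downloaded. The sketch below applies such filters to a hypothetical extract; the column names and example records are assumptions, not the database schema.

      import pandas as pd

      # Hypothetical extract of occurrence records
      db = pd.DataFrame({
          "taxon": ["Lophelia pertusa", "Paragorgia arborea", "Lophelia pertusa"],
          "region": ["Gulf of Mexico", "Alaska", "Southeast US"],
          "depth_m": [450, 800, 520],
          "year": [2009, 2013, 1998],
      })

      subset = db[(db["taxon"] == "Lophelia pertusa")
                  & db["depth_m"].between(400, 600)
                  & (db["year"] >= 2000)]
      print(subset)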

  14. Kazusa Marker DataBase: a database for genomics, genetics, and molecular breeding in plants.

    PubMed

    Shirasawa, Kenta; Isobe, Sachiko; Tabata, Satoshi; Hirakawa, Hideki

    2014-09-01

    In order to provide useful genomic information for agronomical plants, we have established a database, the Kazusa Marker DataBase (http://marker.kazusa.or.jp). This database includes information on DNA markers, e.g., SSR and SNP markers, genetic linkage maps, and physical maps, that were developed at the Kazusa DNA Research Institute. Keyword searches for the markers, sequence data used for marker development, and experimental conditions are also available through this database. Currently, 10 plant species have been targeted: tomato (Solanum lycopersicum), pepper (Capsicum annuum), strawberry (Fragaria × ananassa), radish (Raphanus sativus), Lotus japonicus, soybean (Glycine max), peanut (Arachis hypogaea), red clover (Trifolium pratense), white clover (Trifolium repens), and eucalyptus (Eucalyptus camaldulensis). In addition, the number of plant species registered in this database will be increased as our research progresses. The Kazusa Marker DataBase will be a useful tool for both basic and applied sciences, such as genomics, genetics, and molecular breeding in crops.

  15. Maps of Quaternary Deposits and Liquefaction Susceptibility in the Central San Francisco Bay Region, California

    USGS Publications Warehouse

    Witter, Robert C.; Knudsen, Keith L.; Sowers, Janet M.; Wentworth, Carl M.; Koehler, Richard D.; Randolph, Carolyn E.; Brooks, Suzanna K.; Gans, Kathleen D.

    2006-01-01

    This report presents a map and database of Quaternary deposits and liquefaction susceptibility for the urban core of the San Francisco Bay region. It supersedes the equivalent area of U.S. Geological Survey Open-File Report 00-444 (Knudsen and others, 2000), which covers the larger 9-county San Francisco Bay region. The report consists of (1) a spatial database, (2) two small-scale colored maps (Quaternary deposits and liquefaction susceptibility), (3) a text describing the Quaternary map and liquefaction interpretation (part 3), and (4) a text introducing the report and describing the database (part 1). All parts of the report are digital; part 1 describes the database and digital files and how to obtain them by downloading across the internet. The nine counties surrounding San Francisco Bay straddle the San Andreas fault system, which exposes the region to serious earthquake hazard (Working Group on California Earthquake Probabilities, 1999). Much of the land adjacent to the Bay and the major rivers and streams is underlain by unconsolidated deposits that are particularly vulnerable to earthquake shaking and liquefaction of water-saturated granular sediment. This new map provides a consistent detailed treatment of the central part of the 9-county region in which much of the mapping of Open-File Report 00-444 was either at smaller (less detailed) scale or represented only preliminary revision of earlier work. Like Open-File Report 00-444, the current mapping uses geomorphic expression, pedogenic soils, inferred depositional environments, and geologic age to define and distinguish the map units. Further scrutiny of the factors controlling liquefaction susceptibility has led to some changes relative to Open-File Report 00-444: particularly the reclassification of San Francisco Bay mud (Qhbm) to have only MODERATE susceptibility and the rating of artificial fills according to the Quaternary map units inferred to underlie them (other than dams - adf). The two colored maps provide a regional summary of the new mapping at a scale of 1:200,000, a scale that is sufficient to show the general distribution and relationships of the map units but not to distinguish the more detailed elements that are present in the database. The report is the product of cooperative work by the National Earthquake Hazards Reduction Program (NEHRP) and National Cooperative Geologic Mapping Program of the U.S. Geological Survey, William Lettis & Associates, Inc. (WLA), and the California Geological Survey. An earlier version was submitted to the U.S. Geological Survey by WLA as a final report for a NEHRP grant (Witter and others, 2005). The mapping has been carried out by WLA geologists under contract to the NEHRP Earthquake Program (Grant 99-HQ-GR-0095) and by the California Geological Survey.

  16. Map-Based Querying for Multimedia Database

    DTIC Science & Technology

    2014-09-01

    existing assets in a custom multimedia database based on an area of interest. It also describes the augmentation of an Android Tactical Assault Kit (ATAK) ...

  17. A Free Database of Auto-detected Full-sun Coronal Hole Maps

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Downs, C.; Linker, J.

    2016-12-01

    We present a 4-yr (06/10/2010 to 08/18/2014 at 6-hr cadence) database of full-sun synchronic EUV and coronal hole (CH) maps made available on a dedicated web site (http://www.predsci.com/chd). The maps are generated using STEREO/EUVI A&B 195Å and SDO/AIA 193Å images through an automated pipeline (Caplan et al. 2016, ApJ, 823, 53). Specifically, the original data are preprocessed with PSF-deconvolution, a nonlinear limb-brightening correction, and a nonlinear inter-instrument intensity normalization. Coronal holes are then detected in the preprocessed images using a GPU-accelerated region growing segmentation algorithm. The final results from all three instruments are then merged and projected to form full-sun sine-latitude maps. All the software used in processing the maps is provided and can easily be adapted for use with other instruments and channels. We describe the data pipeline and show examples from the database. We also detail recent CH-detection validation experiments using synthetic EUV emission images produced from global thermodynamic MHD simulations.
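
    The detection step described above is a region-growing segmentation; a plain CPU version of that idea, grown from a single seed over 4-connected pixels darker than a threshold, is sketched below. The random stand-in image, seed, and threshold are assumptions, and the paper's GPU-accelerated implementation differs in detail.

      import numpy as np
      from collections import deque

      def region_grow(img, seed, threshold):
          # Grow a region from `seed`, adding 4-connected pixels whose intensity is below `threshold`
          mask = np.zeros(img.shape, dtype=bool)
          queue = deque([seed])
          while queue:
              y, x = queue.popleft()
              if mask[y, x] or img[y, x] > threshold:
                  continue
              mask[y, x] = True
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ny, nx = y + dy, x + dx
                  if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                      queue.append((ny, nx))
          return mask

      euv = np.random.default_rng(0).uniform(0, 100, size=(64, 64))  # stand-in for a preprocessed EUV image
      hole = region_grow(euv, seed=(32, 32), threshold=30.0)
      print(hole.sum(), "pixels flagged as coronal hole")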

  18. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified model results from the LHC. It allows the user to decompose models of new physics obeying a Z2 symmetry into simplified model components and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow contributions to the same signal region from different simplified models to be combined. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results, as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.

  19. Predictive landslide susceptibility mapping using spatial information in the Pechabun area of Thailand

    NASA Astrophysics Data System (ADS)

    Oh, Hyun-Joo; Lee, Saro; Chotikasathien, Wisut; Kim, Chang Hwan; Kwon, Ju Hyoung

    2009-04-01

    For predictive landslide susceptibility mapping, this study applied and verified a probability model (frequency ratio) and a statistical model (logistic regression) at Pechabun, Thailand, using a geographic information system (GIS) and remote sensing. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys, and maps of the topography, geology, and land cover were compiled into a spatial database. The factors that influence landslide occurrence, such as slope gradient, slope aspect, curvature of topography, and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were extracted and calculated from the geology database. Land cover was classified from a Landsat TM satellite image. The frequency ratios and logistic regression coefficients were used as each factor's ratings and overlaid to produce the landslide susceptibility map. The landslide susceptibility map was then verified and compared using the existing landslide locations. In the verification, the frequency ratio model showed 76.39% prediction accuracy and the logistic regression model showed 70.42%. The method can be used to reduce hazards associated with landslides and to plan land cover.
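
    The frequency ratio rating mentioned above is, for each class of a factor, the share of landslide cells in that class divided by the class's share of the study area. A minimal sketch on hypothetical raster cells is shown below; the slope classes and occurrence values are invented for illustration.

      import pandas as pd

      # Hypothetical raster cells: slope class and whether a landslide was mapped there
      cells = pd.DataFrame({
          "slope_class": ["0-10", "0-10", "10-20", "10-20", "20-30", "20-30", "20-30"],
          "landslide": [0, 0, 1, 0, 1, 1, 0],
      })

      # Frequency ratio = (landslide share of a class) / (area share of that class)
      landslide_share = cells.groupby("slope_class")["landslide"].sum() / cells["landslide"].sum()
      area_share = cells.groupby("slope_class").size() / len(cells)
      print(landslide_share / area_share)  # values > 1 mark classes more landslide-prone than average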

  20. EMAP and EMAGE: a framework for understanding spatially organized data.

    PubMed

    Baldock, Richard A; Bard, Jonathan B L; Burger, Albert; Burton, Nicolas; Christiansen, Jeff; Feng, Guanjie; Hill, Bill; Houghton, Derek; Kaufman, Matthew; Rao, Jianguo; Sharpe, James; Ross, Allyson; Stevenson, Peter; Venkataraman, Shanmugasundaram; Waterhouse, Andrew; Yang, Yiya; Davidson, Duncan R

    2003-01-01

    The Edinburgh Mouse Atlas Project (EMAP) is a time-series of mouse-embryo volumetric models. The models provide a context-free spatial framework onto which structural interpretations and experimental data can be mapped. This enables collation, comparison, and query of complex spatial patterns with respect to each other and with respect to known or hypothesized structure. The atlas also includes a time-dependent anatomical ontology and mapping between the ontology and the spatial models in the form of delineated anatomical regions or tissues. The models provide a natural, graphical context for browsing and visualizing complex data. The Edinburgh Mouse Atlas Gene-Expression Database (EMAGE) is one of the first applications of the EMAP framework and provides a spatially mapped gene-expression database with associated tools for data mapping, submission, and query. In this article, we describe the underlying principles of the Atlas and the gene-expression database, and provide a practical introduction to the use of the EMAP and EMAGE tools, including use of new techniques for whole body gene-expression data capture and mapping.

  1. Digital images in the map revision process

    NASA Astrophysics Data System (ADS)

    Newby, P. R. T.

    Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.

  2. Geologic map of outcrop areas of sedimentary units in the eastern part of the Hailey 1 degree x 2 degrees quadrangle and part of the southern part of the Challis 1 degree x 2 degrees quadrangle, south-central Idaho

    USGS Publications Warehouse

    Link, P.K.; Mahoney, J.B.; Bruner, D.J.; Batatian, L.D.; Wilson, Eric; Williams, F.J.C.

    1995-01-01

    The paper version of the Geologic map of outcrop areas of sedimentary units in the eastern part of the Hailey 1x2 Quadrangle and part of the southern part of the Challis 1x2 Quadrangle, south-central Idaho, was compiled by Paul Link and others in 1995. The plate was compiled on a 1:100,000 scale topographic base map. TechniGraphic System, Inc., of Fort Collins, Colorado, digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a GIS database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.

  3. Mapping Research in the Field of Special Education on the Island of Ireland since 2000

    ERIC Educational Resources Information Center

    Travers, Joseph; Savage, Rosie; Butler, Cathal; O'Donnell, Margaret

    2018-01-01

    This paper describes the process of building a database mapping research and policy in the field of special education on the island of Ireland from 2000 to 2013. The field of study includes special educational needs, disability and inclusion. The database contains 3188 references organised thematically and forms a source for researchers to access…

  4. Mapping PDB chains to UniProtKB entries.

    PubMed

    Martin, Andrew C R

    2005-12-01

    UniProtKB/SwissProt is the main resource for detailed annotations of protein sequences. This database provides a jumping-off point to many other resources through the links it provides. Among others, these include other primary databases, secondary databases, the Gene Ontology and OMIM. While a large number of links are provided to Protein Data Bank (PDB) files, obtaining a regularly updated mapping between UniProtKB entries and PDB entries at the chain or residue level is not straightforward. In particular, there is no regularly updated resource which allows a UniProtKB/SwissProt entry to be identified for a given residue of a PDB file. We have created a completely automatically maintained database which maps PDB residues to residues in UniProtKB/SwissProt and UniProtKB/trEMBL entries. The protocol uses links from PDB to UniProtKB, from UniProtKB to PDB and a brute-force sequence scan to resolve PDB chains for which no annotated link is available. Finally the sequences from PDB and UniProtKB are aligned to obtain a residue-level mapping. The resource may be queried interactively or downloaded from http://www.bioinf.org.uk/pdbsws/.
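
    The final residue-level step described above, aligning a PDB chain sequence against a UniProtKB sequence and reading the mapping off the alignment, can be sketched with Biopython's pairwise aligner. The two sequences and the scoring parameters below are hypothetical; the resource's actual protocol and numbering conventions may differ.

      from Bio import Align

      pdb_seq = "MKTAYIAKQR"            # hypothetical PDB chain sequence
      uniprot_seq = "GSMKTAYIAKQRQISF"  # hypothetical UniProtKB sequence with extra residues

      aligner = Align.PairwiseAligner()
      aligner.mode = "local"
      aligner.match_score = 2
      aligner.mismatch_score = -1
      aligner.open_gap_score = -2
      aligner.extend_gap_score = -0.5
      alignment = aligner.align(pdb_seq, uniprot_seq)[0]

      # alignment.aligned holds matching blocks as (start, end) index pairs for both sequences
      mapping = {}
      for (p_start, p_end), (u_start, u_end) in zip(*alignment.aligned):
          for offset in range(p_end - p_start):
              mapping[p_start + offset + 1] = u_start + offset + 1  # 1-based residue numbers
      print(mapping)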

  5. Bibliometric mapping and clustering analysis of Iranian papers on reproductive medicine in Scopus database (2010-2014).

    PubMed

    Bazm, Soheila; Kalantar, Seyyed Mehdi; Mirzaei, Masoud

    2016-06-01

    To meet the future challenges in the field of reproductive medicine in Iran, a better understanding of published studies is needed. Bibliometric methods and social network analysis have been used to measure the scope and illustrate the scientific output of researchers in this field. This study provides insight into the structure of the network of Iranian papers published in the field of reproductive medicine during 2010-2014. In this cross-sectional study, all relevant scientific publications were retrieved from the Scopus database and were analyzed according to document type, journal of publication, hot topics, authors, and institutions. The results were mapped and clustered using VOSviewer software. In total, 3141 papers from Iranian researchers were identified in the Scopus database between 2010 and 2014. The number of publications per year increased from 461 in 2010 to 749 in 2014. Tehran University of Medical Sciences and "Soleimani M" occupied the top positions based on the productivity indicator. Likewise, "Soleimani M" obtained the first rank among authors according to degree centrality, betweenness centrality, and collaboration criteria. In addition, among institutions, the Iranian Academic Center for Education, Culture and Research (ACECR) was the leader based on degree centrality, betweenness centrality, and collaboration indicators. Publications of Iranian researchers in the field of reproductive medicine showed steady growth during 2010-2014. It seems that, in addition to quantity, Iranian authors need to improve the quality of their articles and their collaboration, which will help them advance their efforts.

  6. Bibliometric mapping and clustering analysis of Iranian papers on reproductive medicine in Scopus database (2010-2014)

    PubMed Central

    Bazm, Soheila; Kalantar, Seyyed Mehdi; Mirzaei, Masoud

    2016-01-01

    Background: To meet the future challenges in the field of reproductive medicine in Iran, a better understanding of published studies is needed. Bibliometric methods and social network analysis have been used to measure the scope and illustrate the scientific output of researchers in this field. Objective: This study provides insight into the structure of the network of Iranian papers published in the field of reproductive medicine during 2010-2014. Materials and Methods: In this cross-sectional study, all relevant scientific publications were retrieved from the Scopus database and were analyzed according to document type, journal of publication, hot topics, authors, and institutions. The results were mapped and clustered using VOSviewer software. Results: In total, 3141 papers from Iranian researchers were identified in the Scopus database between 2010 and 2014. The number of publications per year increased from 461 in 2010 to 749 in 2014. Tehran University of Medical Sciences and "Soleimani M" occupied the top positions based on the productivity indicator. Likewise, "Soleimani M" obtained the first rank among authors according to degree centrality, betweenness centrality, and collaboration criteria. In addition, among institutions, the Iranian Academic Center for Education, Culture and Research (ACECR) was the leader based on degree centrality, betweenness centrality, and collaboration indicators. Conclusion: Publications of Iranian researchers in the field of reproductive medicine showed steady growth during 2010-2014. It seems that, in addition to quantity, Iranian authors need to improve the quality of their articles and their collaboration, which will help them advance their efforts. PMID:27525320

  7. Digital geologic map of the Thirsty Canyon NW quadrangle, Nye County, Nevada

    USGS Publications Warehouse

    Minor, S.A.; Orkild, P.P.; Sargent, K.A.; Warren, R.G.; Sawyer, D.A.; Workman, J.B.

    1998-01-01

    This digital geologic map compilation presents new polygon (i.e., geologic map unit contacts), line (i.e., fault, fold axis, dike, and caldera wall), and point (i.e., structural attitude) vector data for the Thirsty Canyon NW 7 1/2' quadrangle in southern Nevada. The map database, which is at 1:24,000-scale resolution, provides geologic coverage of an area of current hydrogeologic and tectonic interest. The Thirsty Canyon NW quadrangle is located in southern Nye County about 20 km west of the Nevada Test Site (NTS) and 30 km north of the town of Beatty. The map area is underlain by extensive layers of Neogene (about 14 to 4.5 million years old [Ma]) mafic and silicic volcanic rocks that are temporally and spatially associated with transtensional tectonic deformation. Mapped volcanic features include part of a late Miocene (about 9.2 Ma) collapse caldera, a Pliocene (about 4.5 Ma) shield volcano, and two Pleistocene (about 0.3 Ma) cinder cones. Also documented are numerous normal, oblique-slip, and strike-slip faults that reflect regional transtensional deformation along the southern part of the Walker Lane belt. The Thirsty Canyon NW map provides new geologic information for modeling groundwater flow paths that may enter the map area from underground nuclear testing areas located in the NTS about 25 km to the east. The geologic map database comprises six component ArcINFO map coverages that can be accessed after decompressing and unbundling the data archive file (tcnw.tar.gz). These six coverages (tcnwpoly, tcnwflt, tcnwfold, tcnwdike, tcnwcald, and tcnwatt) are formatted here in ArcINFO EXPORT format. Bundled with this database are two PDF files for readily viewing and printing the map, accessory graphics, and a description of map units and compilation methods.

  8. Comparison of glaucoma diagnostic accuracy of macular ganglion cell complex thickness based on nonhighly myopic and highly myopic normative database

    PubMed Central

    Chen, Henry Shen-Lih; Liu, Chun-Hsiu; Lu, Da-Wen

    2016-01-01

    Background/Purpose: To evaluate and compare the diagnostic discriminative ability for detecting glaucoma in highly myopic eyes from a normative database of macular ganglion cell complex (mGCC) thickness based on nonhighly myopic and highly myopic normal eyes. Methods: Forty-nine eyes of 49 participants with high myopia (axial length ≥ 26.0 mm) were enrolled. Spectral-domain optical coherence tomography scans were done using RS-3000, and the mGCC thickness/significance maps within a 9-mm diameter circle were generated using built-in software. We compared the difference of sensitivity, specificity, and diagnostic accuracy between the nonhighly myopic database and the highly myopic database for differentiating the early glaucomatous eyes from the nonglaucomatous eyes. Results: This study enrolled 15 normal eyes and 34 eyes with glaucoma. The mean mGCC thickness of the glaucoma group was significantly less than that of the normal group (p < 0.001). Sensitivity was 96.3%, and the specificity was 50.0% when using the nonhighly myopic normative database. When the highly myopic normative database was used, the sensitivity was 88.9%, and the specificity was 90.0%. The false positive rate was significantly lower when using the highly myopic normative database (p < 0.05). Conclusion: The evaluations of glaucoma in eyes with high myopia using a nonhighly myopic normative database may lead to a frequent misdiagnosis. When evaluating glaucoma in high myopic eyes, the mGCC thickness determined by the long axial length high myopic normative database should be applied. PMID:29018704

  9. Geologic Map and Map Database of the Oakland Metropolitan Area, Alameda, Contra Costa, and San Francisco Counties, California

    USGS Publications Warehouse

    Graymer, R.W.

    2000-01-01

    Introduction This report contains a new geologic map at 1:50,000 scale, derived from a set of geologic map databases containing information at a resolution associated with 1:24,000 scale, and a new description of geologic map units and structural relationships in the mapped area. The map database represents the integration of previously published reports and new geologic mapping and field checking by the author (see Sources of Data index map on the map sheet or the Arc-Info coverage pi-so and the textfile pi-so.txt). The descriptive text (below) contains new ideas about the Hayward fault and other faults in the East Bay fault system, as well as new ideas about the geologic units and their relations. These new data are released in digital form in conjunction with the Federal Emergency Management Agency Project Impact in Oakland. The goal of Project Impact is to use geologic information in land-use and emergency services planning to reduce the losses occurring during earthquakes, landslides, and other hazardous geologic events. The USGS, California Division of Mines and Geology, FEMA, California Office of Emergency Services, and City of Oakland participated in the cooperative project. The geologic data in this report were provided in pre-release form to other Project Impact scientists, and served as one of the basic data layers for the analysis of hazard related to earthquake shaking, liquefaction, earthquake-induced landsliding, and rainfall-induced landsliding. The publication of these data provides an opportunity for regional planners, local, state, and federal agencies, teachers, consultants, and others outside Project Impact who are interested in geologic data to have the new data long before a traditional paper map could be published. Because the database contains information about both the bedrock and surficial deposits, it has practical applications in the study of groundwater and engineering of hillside materials, as well as the study of geologic hazards and the academic research on the geologic history and development of the region.

  10. pseudoMap: an innovative and comprehensive resource for identification of siRNA-mediated mechanisms in human transcribed pseudogenes.

    PubMed

    Chan, Wen-Ling; Yang, Wen-Kuang; Huang, Hsien-Da; Chang, Jan-Gowth

    2013-01-01

    RNA interference (RNAi) is a gene silencing process within living cells, which is controlled by the RNA-induced silencing complex in a sequence-specific manner. In flies and mice, pseudogene transcripts can be processed into short interfering RNAs (siRNAs) that regulate protein-coding genes through the RNAi pathway. Following these findings, we constructed an innovative and comprehensive database to elucidate siRNA-mediated mechanisms in human transcribed pseudogenes (TPGs). To investigate TPGs producing siRNAs that regulate protein-coding genes, we mapped the TPGs to small RNAs (sRNAs) that were supported by publicly available deep sequencing data from various sRNA libraries and constructed the TPG-derived siRNA-target interactions. In addition, we also show that TPGs can act as targets for miRNAs that actually regulate the parental gene. To enable the systematic compilation and updating of these results and additional information, we have developed a database, pseudoMap, capturing various types of information, including sequence data, TPG and cognate annotation, deep sequencing data, RNA-folding structure, gene expression profiles, miRNA annotation and target prediction. To our knowledge, pseudoMap is the first database to demonstrate two mechanisms of human TPGs: encoding siRNAs and decoying miRNAs that target the parental gene. pseudoMap is freely accessible at http://pseudomap.mbc.nctu.edu.tw/. Database URL: http://pseudomap.mbc.nctu.edu.tw/

  11. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to >10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbits/sec (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
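
    The spatial partitioning idea behind LSD's "cells" can be illustrated with a toy in-memory version: rows are bucketed by (lon, lat) cell and a per-cell kernel is applied independently, MapReduce-style. The cell size, catalog rows, and kernel below are hypothetical, and this is not LSD's actual API.

      from collections import defaultdict
      from math import floor

      CELL_DEG = 10.0  # hypothetical cell size in degrees

      def cell_id(lon, lat):
          return (floor(lon / CELL_DEG), floor(lat / CELL_DEG))

      # Hypothetical catalog rows: (lon, lat, magnitude)
      rows = [(12.3, -5.1, 18.2), (12.9, -5.5, 19.0), (250.4, 33.2, 17.1)]

      cells = defaultdict(list)
      for lon, lat, mag in rows:
          cells[cell_id(lon, lat)].append((lon, lat, mag))

      # A "map" kernel run independently on each cell, as in a MapReduce-style sweep
      def brightest(cell_rows):
          return min(cell_rows, key=lambda r: r[2])  # smaller magnitude = brighter

      for cid, cell_rows in cells.items():
          print(cid, brightest(cell_rows))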

  12. Low-altitude photographic transects of the Arctic Network of National Park Units and Selawik National Wildlife Refuge, Alaska, July 2013

    USGS Publications Warehouse

    Marcot, Bruce G.; Jorgenson, M. Torre; DeGange, Anthony R.

    2014-01-01

    5. A Canon® Rebel 3Ti with a Sigma zoom lens (18–200 mm focal length). The Drift® HD-170 and GoPro® Hero3 cameras were secured to the struts and underwing for nadir (direct downward) imaging. The Panasonic® and Canon® cameras were each hand-held for oblique-angle landscape images, shooting through the airplanes’ windows, targeting both general landscape conditions as well as landscape features of special interest, such as tundra fire scars and landslips. The Drift® and GoPro® cameras each were set for time-lapse photography at 5-second intervals for overlapping coverage. Photographs from all cameras (100 percent .jpg format) were date- and time-synchronized to geographic positioning system waypoints taken during the flights, also at 5-second intervals, providing precise geotagging (latitude-longitude) of all files. All photographs were adjusted for color saturation and gamma, and nadir photographs were corrected for lens distortion for the Drift® and GoPro® cameras’ 170° wide-angle distortion. EXIF (exchangeable image file format) data on camera settings and geotagging were extracted into spreadsheet databases. An additional 1 hour, 20 minutes, and 43 seconds of high-resolution videos were recorded at 60 frames per second with the GoPro® camera along selected transect segments, and also were image-adjusted and corrected for lens distortion. Geotagged locations of 12,395 nadir photographs from the Drift® and GoPro® cameras were overlayed in a geographic information system (ArcMap 10.0) onto a map of 44 ecotypes (land- and water-cover types) of the Arctic Network study area. Presence and area of each ecotype occurring within a geographic information system window centered on the location of each photograph were recorded and included in the spreadsheet databases. All original and adjusted photographs, videos, geographic positioning system flight tracks, and photograph databases are available by contacting ascweb@usgs.gov.

  13. Infiltration processes in karstic chalk investigated through a spatial analysis of the geochemical properties of the groundwater: The effect of the superficial layer of clay-with-flints

    NASA Astrophysics Data System (ADS)

    Valdes, Danièle; Dupont, Jean-Paul; Laignel, Benoît; Slimani, Smaïl; Delbart, Célestine

    2014-11-01

    In the Paris Basin in Upper Normandy (France), the chalk plateaus are covered with thick deposits of loess and clay-with-flints, from a few meters to approximately 40 m thick locally. Perched groundwater is sometimes observed in the superficial layers, in which evapotranspiration processes seem to occur. This study's objective was to understand the effects of the thick clay-with-flints layers on the infiltration processes. To achieve this, we adopted a spatial approach comparing the maps of the geochemical properties of the Chalk groundwater and the maps of the thickness of clay-with-flints. The French national groundwater database, ADES (Accès aux Données des Eaux, BRGM), provided the mean geochemical properties in the Chalk aquifer of Upper Normandy. This database was used to prepare maps of the environmental tracers: Ca2+, HCO3-, Mg2+, Cl-, Na+, NO3-, and SO42-. The data are spatially well organized. Using principal component analysis (PCA), these maps were compared with the maps of the thickness of clay-with-flints. A focus on the coastal basins (northern Upper Normandy) shows a very strong spatial correlation between the maps of clay-with-flints thickness and all of the maps of the major ions. The thickness of clay-with-flints is negatively correlated with the autochthonous ions (HCO3- and Ca2+) and is positively correlated with the allochthonous ions (Cl-, Na+, SO42-, and NO3-). These results highlight that the thickness of clay-with-flints controls recharge. Two types of infiltration processes are proposed: (1) Thicker clay-with-flints allows storage in the perched groundwater, which allows evapotranspiration, resulting in high concentrations of allochthonous ions, a decrease in the dissolution potential of the water, and low concentrations of autochthonous ions. The infiltration of the perched groundwater is thus delayed and concentrated. (2) Thinner clay-with-flints causes the infiltration to be more diffuse, with low evapotranspiration and thus low concentrations of allochthonous ions in the Chalk groundwater; moreover, there is more dissolution and higher concentrations of autochthonous ions in the Chalk groundwater.
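
    The spatial comparison described above rests on a PCA of standardized map variables. The toy sketch below runs such a PCA on a hypothetical table of clay-with-flints thickness and ion concentrations; the values, units, and interpretation are illustrative assumptions, not the study's data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # Hypothetical rows, one per monitoring point:
      # clay-with-flints thickness (m), HCO3- (mg/L), Ca2+ (mg/L), Cl- (mg/L), Na+ (mg/L)
      data = np.array([
          [2.0, 320, 110, 15, 9],
          [10.0, 280, 95, 25, 14],
          [25.0, 240, 80, 40, 22],
          [38.0, 210, 70, 55, 30],
      ])

      scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(data))
      print(scores)  # with these toy values, the first component tracks the cover-thickness gradient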

  14. Publications of the Western Geologic Mapping Team 1997-1998

    USGS Publications Warehouse

    Stone, Paul; Powell, C.L.

    1999-01-01

    The Western Geologic Mapping Team (WGMT) of the U.S. Geological Survey, Geologic Division (USGS, GD), conducts geologic mapping and related topical earth-science studies in the western United States. This work is focused on areas where modern geologic maps and associated earth-science data are needed to address key societal and environmental issues such as ground-water quality, potential geologic hazards, and land-use decisions. Areas of primary emphasis currently include southern California, the San Francisco Bay region, the Pacific Northwest, the Las Vegas urban corridor, and selected National Park lands. The team has its headquarters in Menlo Park, California, and maintains smaller field offices at several other locations in the western United States. The results of research conducted by the WGMT are released to the public as a variety of databases, maps, text reports, and abstracts, both through the internal publication system of the USGS and in diverse external publications such as scientific journals and books. This report lists publications of the WGMT released in calendar years 1997 and 1998. Most of the publications listed were authored or coauthored by WGMT staff. However, the list also includes some publications authored by formal non-USGS cooperators with the WGMT, as well as some authored by USGS staff outside the WGMT in cooperation with WGMT projects. Several of the publications listed are available on the World Wide Web; for these, URL addresses are provided. Most of these Web publications are USGS open-file reports that contain large digital databases of geologic map and related information. For these, the bibliographic citation refers specifically to an explanatory pamphlet containing information about the content and accessibility of the database, not to the actual map or related information comprising the database itself.

  15. Reconstruction of metabolic pathways by combining probabilistic graphical model-based and knowledge-based methods

    PubMed Central

    2014-01-01

    Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways are mapped onto organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method might produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge with a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
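
    A heavily simplified sketch of the idea of seeding network construction with known reactions and then adding ab initio predictions from expression data is shown below. It is not the paper's Bayesian graphical model; the genes, the known edge, and the correlation threshold are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
genes = ["g1", "g2", "g3", "g4"]
expr = {g: rng.normal(size=50) for g in genes}
expr["g2"] = expr["g1"] + rng.normal(0, 0.1, 50)  # strongly co-expressed pair

known_edges = {("g3", "g4")}      # "knowledge database": edge taken from reference pathways
network = set(known_edges)        # known edges are kept as hard constraints

for i, a in enumerate(genes):
    for b in genes[i + 1:]:
        r = np.corrcoef(expr[a], expr[b])[0, 1]
        if abs(r) > 0.8:          # crude ab initio criterion
            network.add((a, b))

print(sorted(network))            # contains the known edge and the predicted (g1, g2) edge
```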

  16. Geologic map of the Reyes Peak quadrangle, Ventura County, California

    USGS Publications Warehouse

    Minor, Scott A.

    2004-01-01

    New 1:24,000-scale geologic mapping in the Cuyama 30' x 60' quadrangle, in support of the USGS Southern California Areal Mapping Project (SCAMP), is contributing to a more complete understanding of the stratigraphy, structure, and tectonic evolution of the complex junction area between the NW-trending Coast Ranges and EW-trending western Transverse Ranges. The 1:24,000-scale geologic map of the Reyes Peak quadrangle, located in the eastern part of the Cuyama map area, is the final of six contiguous 7.5' quadrangle geologic maps compiled for a more detailed portrayal and reevaluation of geologic structures and rock units shown on previous maps of the region (Carman, 1964; Dibblee, 1972; Vedder and others, 1973). SCAMP digital geologic maps of the five other contiguous quadrangles have recently been published (Minor, 1999; Kellogg, 1999, 2003; Stone and Cossette, 2000; Kellogg and Miggins, 2002). This digital compilation presents a new geologic map database for the Reyes Peak 7.5' quadrangle, which is located in southern California about 75 km northwest of Los Angeles. The map database is at 1:24,000-scale resolution.

  17. Preliminary geologic map of the Fontana 7.5' quadrangle, Riverside and San Bernardino Counties, California

    USGS Publications Warehouse

    Morton, Douglas M.; Digital preparation by Bovard, Kelly R.

    2003-01-01

    Open-File Report 03-418 is a digital geologic data set that maps and describes the geology of the Fontana 7.5’ quadrangle, Riverside and San Bernardino Counties, California. The Fontana quadrangle database is one of several 7.5’ quadrangle databases that are being produced by the Southern California Areal Mapping Project (SCAMP). These maps and databases are, in turn, part of the nation-wide digital geologic map coverage being developed by the National Cooperative Geologic Map Program of the U.S. Geological Survey (USGS). Open-File Report 03-418 contains a digital geologic map database of the Fontana 7.5’ quadrangle, Riverside and San Bernardino Counties, California that includes: 1. ARC/INFO (Environmental Systems Research Institute, http://www.esri.com) version 7.2.1 coverages of the various elements of the geologic map. 2. A PostScript file (fon_map.ps) to plot the geologic map on a topographic base, containing a Correlation of Map Units diagram (CMU), a Description of Map Units (DMU), and an index map. 3. An Encapsulated PostScript (EPS) file (fon_grey.eps), created in Adobe Illustrator 10.0, to plot the geologic map on a grey topographic base, containing a Correlation of Map Units (CMU), a Description of Map Units (DMU), and an index map. 4. Portable Document Format (.pdf) files of: a. the Readme file, which includes in Appendix I the data contained in fon_met.txt; b. the same graphics as plotted in 2 and 3 above. Test plots have not produced precise 1:24,000-scale map sheets; the Adobe Acrobat page size setting influences map scale. The Correlation of Map Units and Description of Map Units are in the editorial format of USGS Geologic Investigations Series (I-series) maps but have not been edited to comply with I-map standards. Within the geologic map data package, map units are identified by standard geologic map criteria such as formation name, age, and lithology. Where known, grain size is indicated on the map by a subscripted letter or letters following the unit symbols as follows: lg, large boulders; b, boulder; g, gravel; a, arenaceous; s, silt; c, clay; e.g., Qyfa is a predominantly young alluvial fan deposit that is arenaceous. Multiple letters are used for more specific identification or for mixed units, e.g., Qfysa is a silty sand. In some cases, mixed units are indicated by a compound symbol, e.g., Qyf2sc. Even though this is an Open-File Report and includes the standard USGS Open-File disclaimer, the report closely adheres to the stratigraphic nomenclature of the U.S. Geological Survey. Descriptions of units can be obtained by viewing or plotting the .pdf file (4b above) or plotting the PostScript files (2 or 3 above).

  18. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska.

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Kofoed, K. B.; Copenhaver, W.; Laney, C. M.; Gaylord, A. G.; Collins, J. A.; Tweedie, C. E.

    2014-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience are diverse and include research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 12,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. Recent advances include the addition of more than 2000 new research sites, provision of differential global positioning system (dGPS) and Unmanned Aerial Vehicle (UAV) support to visiting scientists, surveying of over 80 miles of coastline to document rates of erosion, training of local GIS personnel to better make use of science in local decision making, deployment of and near-real-time connectivity to a wireless micrometeorological sensor network, links to Barrow area datasets housed at national data archives, and substantial upgrades to the BAID website and web mapping applications.

  19. Development and application of a ray-tracing code integrating with 3D equilibrium mapping in LHD ECH experiments

    NASA Astrophysics Data System (ADS)

    Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.

    2015-11-01

    The central electron temperature has successfully reached up to 7.5 keV in large helical device (LHD) plasmas with a central high-ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly installed 154 GHz gyrotron and by optimising the injection geometry in electron cyclotron heating (ECH). The optimisation was carried out by using the ray-tracing code ‘LHDGauss’, which was upgraded to include the rapid post-processing three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data along with the 3D equilibrium mapping data. The equilibrium-mapped electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. Mode purity, or the ratio between the ordinary mode and the extraordinary mode, is obtained by solving the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear at the peripheral region with a given polarisation are taken into account. Power deposition profiles calculated for each Thomson scattering measurement timing are registered in the LHD database. The adjustment of the injection settings for the desired deposition profile, based on the feedback provided on a shot-by-shot basis, resulted in an effective experimental procedure.

  20. Spatial disaggregation of complex soil map units at regional scale based on soil-landscape relationships

    NASA Astrophysics Data System (ADS)

    Vincent, Sébastien; Lemercier, Blandine; Berthier, Lionel; Walter, Christian

    2015-04-01

    Accurate soil information over large extents is essential to manage agronomical and environmental issues. Where it exists, information on soil is often sparse or available at coarser resolution than required. Typically, the spatial distribution of soil at regional scale is represented as a set of polygons defining soil map units (SMU), each one describing several soil types not spatially delineated, and a semantic database describing these objects. Delineation of soil types within SMU, i.e., spatial disaggregation of SMU, improves the accuracy of soil information derived from legacy data. The aim of this study was to predict soil types by spatial disaggregation of SMU through a decision tree approach, considering expert knowledge on soil-landscape relationships embedded in soil databases. The DSMART (Disaggregation and Harmonization of Soil Map Units Through resampled Classification Trees) algorithm developed by Odgers et al. (2014) was used. It requires soil information, environmental covariates, and calibration samples to build and then extrapolate decision trees. To assign a soil type to a particular spatial position, a weighted random allocation approach is applied: each soil type in the SMU is weighted according to its assumed proportion of occurrence in the SMU. Thus soil-landscape relationships are not considered in the current version of DSMART. Expert rules on soil distribution considering the relief, parent material and wetlands location were proposed to drive the procedure of allocating soil types to sampled positions, in order to integrate the soil-landscape relationships. Semantic information about the spatial organization of soil types within SMU and exhaustive landscape descriptors were used. In the eastern part of Brittany (NW France), 171 soil types were described; their relative areas in the SMU were estimated, and geomorphological and geological contexts were recorded. The model predicted 144 soil types. An external validation was performed by comparing predicted with effectively observed soil types derived from available soil maps at scales of 1:25,000 or 1:50,000. Overall accuracies were 63.1% and 36.2%, with and without considering adjacent pixels, respectively. The introduction of expert rules based on soil-landscape relationships to allocate soil types to calibration samples dramatically enhanced the results in comparison with a simple weighted random allocation procedure. It also enabled the production of a comprehensive soil map, retrieving the expected spatial organization of soils. Estimation of soil properties for various depths is planned using the disaggregated soil types, according to the GlobalSoilmap.net specifications. Odgers, N.P., Sun, W., McBratney, A.B., Minasny, B., Clifford, D., 2014. Disaggregating and harmonising soil map units through resampled classification trees. Geoderma 214, 91-100.
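
    The weighted random allocation step that DSMART uses (before the expert-rule modification described above) can be sketched as follows; the soil map unit composition and proportions are invented for illustration only.

```python
import random

# Each sampled position inside a soil map unit (SMU) draws a soil type with
# probability equal to the soil type's assumed proportion in that SMU.
smu_composition = {
    "SMU_12": {"Luvisol": 0.5, "Cambisol": 0.3, "Gleysol": 0.2},  # invented proportions
}

def allocate(smu_id, n_samples, seed=42):
    random.seed(seed)
    soil_types = list(smu_composition[smu_id])
    weights = [smu_composition[smu_id][t] for t in soil_types]
    return random.choices(soil_types, weights=weights, k=n_samples)

print(allocate("SMU_12", 10))
```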

  1. Open Land-Use Map: A Regional Land-Use Mapping Strategy for Incorporating OpenStreetMap with Earth Observations

    NASA Astrophysics Data System (ADS)

    Yang, D.; Fu, C. S.; Binford, M. W.

    2017-12-01

    The southeastern United States has high landscape heterogeneity, with heavily managed forestlands, highly developed agricultural lands, and multiple metropolitan areas. Human activities are transforming and altering land patterns and structures in both negative and positive manners. Producing a land-use map at the regional scale is a heavy computational task, but such a map is critical to most landowners, researchers, and decision makers, enabling them to make informed decisions for varying objectives. There are two major difficulties in generating classification maps at the regional scale: the necessity of large training point sets and the expensive computational cost, in terms of both money and time, of classifier modeling. Volunteered Geographic Information (VGI) opens a new era in mapping and visualizing our world, where the platform is open for collecting valuable georeferenced information from volunteer citizens, and the data are freely available to the public. As one of the most well-known VGI initiatives, OpenStreetMap (OSM) contributes not only road network distribution, but also the potential for using these data to justify land cover and land use classifications. Google Earth Engine (GEE) is a platform designed for cloud-based mapping with robust and fast computing power. Most large-scale and national mapping approaches confuse "land cover" and "land-use", or build up the land-use database based on modeled land cover datasets. Unlike most other large-scale approaches, we distinguish and differentiate land-use from land cover. By focusing on our prime objective of mapping land-use and management practices, a robust regional land-use mapping approach is developed by incorporating the OpenStreetMap dataset into Earth observation remote sensing imagery instead of the often-used land cover base maps.

  2. Brickworx builds recurrent RNA and DNA structural motifs into medium- and low-resolution electron-density maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chojnowski, Grzegorz, E-mail: gchojnowski@genesilico.pl; Waleń, Tomasz; University of Warsaw, Banacha 2, 02-097 Warsaw

    2015-03-01

    Brickworx is a computer program that builds crystal structure models of nucleic acid molecules using recurrent motifs, including double-stranded helices. In a first step, the program searches for electron-density peaks that may correspond to phosphate groups; it may also take into account phosphate-group positions provided by the user. Subsequently, comparing the three-dimensional patterns of the P atoms with a database of nucleic acid fragments, it finds the matching positions of the double-stranded helical motifs (A-RNA or B-DNA) in the unit cell. If the target structure is RNA, the helical fragments are further extended with recurrent RNA motifs from a fragment library that contains single-stranded segments. Finally, the matched motifs are merged and refined in real space to find the most likely conformations, including a fit of the sequence to the electron-density map. The Brickworx program is available for download and as a web server at http://iimcb.genesilico.pl/brickworx.

  3. An Android based location service using GSMCellID and GPS to obtain a graphical guide to the nearest cash machine

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jurma; Edlich, Stefan

    2009-02-01

    There is a broad range of potentially useful mobile location-based applications. One crucial point seems to be making them available to the public at large. This case illuminates the ability of Android, the operating system for mobile devices, to fulfill this demand in a mashup fashion by using special geocoding web services and one integrated web service for retrieving data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open source Android OS is assumed to spread widely. 2. "Everyone" also means that the handset does not have to be an expensive GPS device. This is realized by re-utilization of the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-available CellID databases. Some of these databases are still undocumented and not yet published. Furthermore, the Google Maps API for Mobile (GMM) and the open source counterpart OpenCellID are used. Localizing the user's current position via a lookup of the closest cell to which the handset is currently connected (COO) is not as precise as GPS, but appears to be sufficient for many applications. For this reason the GPS user is the most pleased one: for this user the system is fully automated. In contrast, a user without a GPS-capable handset should refine his or her location with one click on the map inside the determined circular region. Users are then shown a path, drawn as a Google Maps overlay, guiding them to the nearest cash machine. Additionally, the GPS user can keep track of him- or herself through a frequently updated view based on constantly requested precise GPS data for his or her position.
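
    Once a position estimate is available (from GPS or from the centroid of the serving cell), finding the nearest cash machine reduces to a great-circle distance comparison. The toy sketch below illustrates only that step; the coordinates and machine names are invented, and the actual application additionally drew the walking path as a Google Maps overlay.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

cash_machines = [("Alexanderplatz", 52.5219, 13.4132),   # invented example data
                 ("Hauptbahnhof", 52.5250, 13.3694)]
user = (52.5200, 13.4050)  # e.g. centroid of the serving GSM cell

nearest = min(cash_machines, key=lambda m: haversine_km(user[0], user[1], m[1], m[2]))
print("nearest cash machine:", nearest[0])
```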

  4. Database on unstable rock slopes in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Nordahl, Bo; Bunkholt, Halvor; Nicolaisen, Magnus; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    Several large rockslides have occurred in historic times in Norway, causing many casualties. Most of these casualties are due to displacement waves triggered by a rock avalanche and affecting the coastlines of entire lakes and fjords. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has so far detected more than 230 unstable slopes with significant postglacial deformation. This systematic mapping aims to detect future rock avalanches before they occur. The registered unstable rock slopes are stored in a database on unstable rock slopes developed and maintained by the Geological Survey of Norway. The main aims of this database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to serve for data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slopes, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, hazard and risk classification and so on. Feature classes and tables linked to the main feature class include the run-out area, the area affected by secondary effects, the hazard and risk classification, subareas and scenarios of an unstable rock slope, field observation points, displacement measurement stations, URL links for further documentation and references. The database on unstable rock slopes in Norway will be publicly consultable through the online map service on www.skrednett.no in 2014. Only publicly relevant parts of the database will be shown in the online map service (e.g. processed results of displacement measurements), while more detailed data will not (e.g. raw data of displacement measurements). Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site, a municipality, a county or the entire country. Selected data will also be downloadable free of charge. The present database on unstable rock slopes in Norway will further evolve in the coming years as the systematic mapping conducted by the Geological Survey of Norway progresses and as available techniques and tools evolve.

  5. Interactive Tools to Access the HELCATS Catalogues

    NASA Astrophysics Data System (ADS)

    Rouillard, Alexis; Plotnikov, Illya; Pinto, Rui; Génot, Vincent; Bouchemit, Myriam; Davies, Jackie

    2017-04-01

    The propagation tool is a web-based interface written in Java that allows users to propagate Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and Solar Energetic Particles (SEPs) in the inner heliosphere. The tool displays unique datasets and catalogues through a 2-D visualisation of the trajectories of these heliospheric structures in relation to the orbital position of probes/planets and the pointing direction and extent of different imaging instruments. Summary plots of in-situ data or images of the solar corona and planetary aurorae stored at the CDPP, MEDOC and APIS databases, respectively, can be used to verify the presence of heliospheric structures at the estimated launch or impact times. A great novelty of the tool is the immediate visualisation of J-maps and the possibility to superpose on these maps the HELCATS CME and CIR catalogues.

  6. Interactive Tools to Access the HELCATS Catalogues

    NASA Astrophysics Data System (ADS)

    Rouillard, A.; Génot, V.; Bouchemit, M.; Pinto, R.

    2017-09-01

    The propagation tool is a web-based interface written in Java that allows users to propagate Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and Solar Energetic Particles (SEPs) in the inner heliosphere. The tool displays unique datasets and catalogues through a 2-D visualisation of the trajectories of these heliospheric structures in relation to the orbital position of probes/planets and the pointing direction and extent of different imaging instruments. Summary plots of in-situ data or images of the solar corona and planetary aurorae stored at the CDPP, MEDOC and APIS databases, respectively, can be used to verify the presence of heliospheric structures at the estimated launch or impact times. A great novelty of the tool is the immediate visualisation of J-maps and the possibility to superpose on these maps the HELCATS CME and CIR catalogues.

  7. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization.

    PubMed

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-09-23

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone's acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.
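
    The step-detection component of the PDR described above relies on the periodicity of the accelerometer signal. The snippet below is a rough, self-contained sketch of autocorrelation-based step counting on a synthetic walking signal; the sampling rate, step frequency, and cadence band are assumptions and do not reproduce the paper's exact detector.

```python
import numpy as np

fs = 50                                   # accelerometer sample rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
step_freq = 1.8                           # walking cadence (steps/s), synthetic
acc = 1.0 + 0.3 * np.sin(2 * np.pi * step_freq * t) \
      + np.random.default_rng(0).normal(0, 0.05, t.size)

x = acc - acc.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]      # autocorrelation (positive lags)
lag_min, lag_max = int(fs / 3.0), int(fs / 0.5)        # restrict to 0.5-3 Hz cadence band
lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))    # dominant step period in samples
steps = t[-1] * fs / lag                               # walk duration / step period
print(f"estimated steps over 10 s: {steps:.0f}")       # roughly 18 for this signal
```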

  8. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization

    PubMed Central

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-01-01

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals. PMID:26404314

  9. Vegetation classification and distribution mapping report Mesa Verde National Park

    USGS Publications Warehouse

    Thomas, Kathryn A.; McTeague, Monica L.; Ogden, Lindsay; Floyd, M. Lisa; Schulz, Keith; Friesen, Beverly A.; Fancher, Tammy; Waltermire, Robert G.; Cully, Anne

    2009-01-01

    The classification and distribution mapping of the vegetation of Mesa Verde National Park (MEVE) and surrounding environment was achieved through a multi-agency effort between 2004 and 2007. The National Park Service’s Southern Colorado Plateau Network facilitated the team that conducted the work, which comprised the U.S. Geological Survey’s Southwest Biological Science Center, Fort Collins Research Center, and Rocky Mountain Geographic Science Center; Northern Arizona University; Prescott College; and NatureServe. The project team described 47 plant communities for MEVE, 34 of which were described from quantitative classification based on field-relevé data collected in 1993 and 2004. The team derived 13 additional plant communities from field observations during the photointerpretation phase of the project. The National Vegetation Classification Standard served as a framework for classifying these plant communities to the alliance and association level. Eleven of the 47 plant communities were classified as “park specials;” that is, plant communities with insufficient data to describe them as new alliances or associations. The project team also developed a spatial vegetation map database representing MEVE, with three different map-class schemas: base, group, and management map classes. The base map classes represent the finest level of spatial detail. Initial polygons were developed using Definiens Professional (at the time of our use, this software was called eCognition), assisted by interpretation of 1:12,000 true-color digital orthophoto quarter quadrangles (DOQQs). These polygons (base map classes) were labeled using manual photo interpretation of the DOQQs and 1:12,000 true-color aerial photography. Field visits verified interpretation concepts. The vegetation map database includes 46 base map classes, which consist of associations, alliances, and park specials classified with quantitative analysis, additional associations and park specials noted during photointerpretation, and non-vegetated land cover, such as infrastructure, land use, and geological land cover. The base map classes consist of 5,007 polygons in the project area. A field-based accuracy assessment of the base map classes showed overall accuracy to be 43.5%. Seven map classes comprise 89.1% of the park vegetated land cover. The group map classes represent aggregations of the base map classes, approximating the group level of the National Vegetation Classification Standard, version 2 (Federal Geographic Data Committee 2007), and reflecting physiognomy and floristics. Terrestrial ecological systems, as described by NatureServe (Comer et al. 2003), were used as the first approximation of the group level. The project team identified 14 group map classes for this project. The overall accuracy of the group map classes was determined using the same accuracy assessment data as for the base map classes. The overall accuracy of the group representation of vegetation was 80.3%. In consultation with park staff, the team developed management map classes, consisting of park-defined groupings of base map classes intended to represent a balance between maintaining required accuracy and providing a focus on vegetation of particular interest or import to park managers. The 23 management map classes had an overall accuracy of 73.3%.
While the main products of this project are the vegetation classification and the vegetation map database, a number of ancillary digital geographic information system and database products were also produced that can be used independently or to augment the main products. These products include shapefiles of the locations of field-collected data and relational databases of field-collected data.

  10. Enhanced digital mapping project : final report

    DOT National Transportation Integrated Search

    2004-11-19

    The Enhanced Digital Map Project (EDMap) was a three-year effort launched in April 2001 to develop a range of digital map database enhancements that enable or improve the performance of driver assistance systems currently under development or conside...

  11. Recently active traces of the Bartlett Springs Fault, California: a digital database

    USGS Publications Warehouse

    Lienkaemper, James J.

    2010-01-01

    The purpose of this map is to show the location of and evidence for recent movement on active fault traces within the Bartlett Springs Fault Zone, California. The location and recency of the mapped traces are primarily based on geomorphic expression of the fault as interpreted from large-scale aerial photography. In a few places, evidence of fault creep and offset Holocene strata in trenches and natural exposures has confirmed the activity of some of these traces. This publication is formatted both as a digital database for use within a geographic information system (GIS) and, for broader public access, as map images that may be browsed on-line or downloaded as a summary map. The report text describes the types of scientific observations used to make the map, gives references pertaining to the fault and the evidence of faulting, and provides guidance on the use and limitations of the map.

  12. Land cover mapping of Greater Mesoamerica using MODIS data

    USGS Publications Warehouse

    Giri, Chandra; Jenkins, Clinton N.

    2005-01-01

    A new land cover database of Greater Mesoamerica has been prepared using moderate resolution imaging spectroradiometer (MODIS, 500 m resolution) satellite data. Daily surface reflectance MODIS data and a suite of ancillary data were used in preparing the database by employing a decision tree classification approach. The new land cover data are an improvement over traditional advanced very high resolution radiometer (AVHRR) based land cover data in terms of both spatial and thematic details. The dominant land cover type in Greater Mesoamerica is forest (39%), followed by shrubland (30%) and cropland (22%). Country analysis shows forest as the dominant land cover type in Belize (62%), Costa Rica (52%), Guatemala (53%), Honduras (56%), Nicaragua (53%), and Panama (48%), cropland as the dominant land cover type in El Salvador (60.5%), and shrubland as the dominant land cover type in Mexico (37%). A three-step approach was used to assess the quality of the classified land cover data: (i) qualitative assessment provided good insight into identifying and correcting gross errors; (ii) correlation analysis of MODIS- and Landsat-derived land cover data revealed strong positive association for forest (r2 = 0.88), shrubland (r2 = 0.75), and cropland (r2 = 0.97) but weak positive association for grassland (r2 = 0.26); and (iii) an error matrix generated using unseen training data provided an overall accuracy of 77.3% with a Kappa coefficient of 0.73608. Overall, MODIS 500 m data and the methodology used were found to be quite useful for broad-scale land cover mapping of Greater Mesoamerica.
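
    The overall accuracy and Kappa coefficient quoted above are both derived from an error (confusion) matrix. The short sketch below shows the standard computation on an invented 3-class matrix; it is not the study's actual matrix.

```python
import numpy as np

cm = np.array([[50, 4, 2],    # rows: reference class, columns: mapped class (invented counts)
               [6, 40, 5],
               [3, 2, 38]])

n = cm.sum()
overall = np.trace(cm) / n                                   # overall accuracy
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
kappa = (overall - expected) / (1 - expected)
print(f"overall accuracy = {overall:.1%}, kappa = {kappa:.3f}")
```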

  13. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~ 550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or 2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as a series of ArcReader projects which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in ArcMap projects. The ArcMap projects allow fuller use of the data, but require ESRI ArcMap software. Multiple projects were required to accommodate the large number of images needed. A fuller description of the projects can be found in the Dunes_ReadMe file and the ReadMe_GIS file in the Documentation folder. For users who prefer to create their own projects, the data is available in ESRI shapefile and geodatabase formats, as well as the open Geographic Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. ReadMe files are available in PDF and ASCII (.txt) files.

  14. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products.

    PubMed

    Miake-Lye, Isomi M; Hempel, Susanne; Shanman, Roberta; Shekelle, Paul G

    2016-02-10

    The need for systematic methods for reviewing evidence is continuously increasing. Evidence mapping is one emerging method. There are no authoritative recommendations for what constitutes an evidence map or what methods should be used, and anecdotal evidence suggests heterogeneity in both. Our objectives are to identify published evidence maps and to compare and contrast the presented definitions of evidence mapping, the domains used to classify data in evidence maps, and the form the evidence map takes. We conducted a systematic review of publications that presented results with a process termed "evidence mapping" or included a figure called an "evidence map." We identified publications from searches of ten databases through 8/21/2015, reference mining, and consulting topic experts. We abstracted the research question, the unit of analysis, the search methods and search period covered, and the country of origin. Data were narratively synthesized. Thirty-nine publications met inclusion criteria. Published evidence maps varied in their definition and the form of the evidence map. Of the 31 definitions provided, 67 % described the purpose as identification of gaps and 58 % referenced a stakeholder engagement process or user-friendly product. All evidence maps explicitly used a systematic approach to evidence synthesis. Twenty-six publications referred to a figure or table explicitly called an "evidence map," eight referred to an online database as the evidence map, and five stated they used a mapping methodology but did not present a visual depiction of the evidence. The principal conclusion of our evaluation of studies that call themselves "evidence maps" is that the implied definition of what constitutes an evidence map is a systematic search of a broad field to identify gaps in knowledge and/or future research needs that presents results in a user-friendly format, often a visual figure or graph, or a searchable database. Foundational work is needed to better standardize the methods and products of an evidence map so that researchers and policymakers will know what to expect of this new type of evidence review. Although an a priori protocol was developed, no registration was completed; this review did not fit the PROSPERO format.

  15. A physical map of a BAC clone contig covering the entire autosome insertion between ovine MHC Class IIa and IIb

    PubMed Central

    2012-01-01

    Background The ovine Major Histocompatibility Complex (MHC) harbors genes involved in overall resistance/susceptibility of the host to infectious diseases. Compared to its human and mouse counterparts, the ovine MHC is interrupted by a large autosomal insertion, introduced via a hypothetical chromosome inversion, that constitutes ~25% of ovine chromosome 20. The evolutionary consequence of such an inversion and insertion (inversion/insertion) in relation to MHC function remains unknown. We previously constructed a BAC clone physical map for the ovine MHC exclusive of the insertion region. Here we report the construction of a high-density physical map covering the autosome insertion in order to address the question of what role the inversion/insertion played in ruminants during MHC evolution. Results A total of 119 pairs of comparative bovine oligo primers were utilized to screen an ovine BAC library for positive clones, and the order and overlapping relationships of the identified clones were determined by DNA fingerprinting, BAC-end sequencing, and sequence-specific PCR. A total of 368 positive BAC clones were identified and 108 of the effective clones were ordered into an overlapping BAC contig to cover the consensus region between ovine MHC class IIa and IIb. Therefore, a continuous physical map covering the entire ovine autosome inversion/insertion region was successfully constructed. The map confirmed the bovine sequence assembly for the same homologous region. The DNA sequences of 185 BAC-ends have been deposited into the NCBI database with the accession numbers HR309252 through HR309068, corresponding to dbGSS ID 30164010 through 30163826. Conclusions We have constructed a high-density BAC clone physical map for the ovine autosome inversion/insertion between the MHC class IIa and IIb. The entire ovine MHC region is now fully covered by a continuous BAC clone contig. The physical map we generated will facilitate MHC functional studies in the ovine, as well as comparative MHC evolution studies in ruminants. PMID:22897909

  16. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    NASA Astrophysics Data System (ADS)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
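
    The semi-automatic mapping recommender mentioned above can be imagined as a simple label-similarity ranking presented to the user for confirmation. The sketch below is a hypothetical illustration only; the concept labels, column names, and the use of plain string similarity are assumptions, not the system's actual matching logic.

```python
from difflib import SequenceMatcher

ontology_concepts = ["server_operating_cost", "number_of_incidents", "storage_capacity"]
db_columns = ["srv_op_cost_eur", "incident_count", "capacity_tb", "vendor_name"]

def recommend(concept, columns, top_n=2):
    """Rank candidate columns for an ontology concept by string similarity."""
    scored = [(col, SequenceMatcher(None, concept, col).ratio()) for col in columns]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

for concept in ontology_concepts:
    print(concept, "->", recommend(concept, db_columns))
```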

  17. Geologic and geophysical maps of the eastern three-fourths of the Cambria 30' x 60' quadrangle, central California Coast Ranges

    USGS Publications Warehouse

    Graymer, R.W.; Langenheim, V.E.; Roberts, M.A.; McDougall, Kristin

    2014-01-01

    The Cambria 30´ x 60´ quadrangle comprises southwestern Monterey County and northwestern San Luis Obispo County. The land area includes rugged mountains of the Santa Lucia Range extending from the northwest to the southeast part of the map; the southern part of the Big Sur coast in the northwest; broad marine terraces along the southwest coast; and broad valleys, rolling hills, and modest mountains in the northeast. This report contains geologic, gravity anomaly, and aeromagnetic anomaly maps of the eastern three-fourths of the 1:100,000-scale Cambria quadrangle and the associated geologic and geophysical databases (ArcMap databases), as well as complete descriptions of the geologic map units and the structural relations in the mapped area. A cross section is based on both the geologic map and potential-field geophysical data. The maps are presented as an interactive, multilayer PDF, rather than more traditional pre-formatted map-sheet PDFs. Various geologic, geophysical, paleontological, and base map elements are placed on separate layers, which allows the user to combine elements interactively to create map views beyond the traditional map sheets. Four traditional map sheets (geologic map, gravity map, aeromagnetic map, paleontological locality map) are easily compiled by choosing the associated data layers or by choosing the desired map under Bookmarks.

  18. ReactionMap: an efficient atom-mapping algorithm for chemical reactions.

    PubMed

    Fooshee, David; Andronico, Alessio; Baldi, Pierre

    2013-11-25

    Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu .
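
    ReactionMap combines a maximum common chemical subgraph search with an empirically trained assignment cost, which is not reproduced here. The sketch below only illustrates the common-subgraph step using the open-source RDKit toolkit (an assumption of this example, not a tool named in the abstract): atoms covered by the MCS of a reactant and a product receive the same atom-map number.

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

reactant = Chem.MolFromSmiles("CCO")   # ethanol
product = Chem.MolFromSmiles("CC=O")   # acetaldehyde

# Find the maximum common substructure and use it to transfer map numbers.
mcs = rdFMCS.FindMCS([reactant, product])
core = Chem.MolFromSmarts(mcs.smartsString)

for map_num, (ra, pa) in enumerate(zip(reactant.GetSubstructMatch(core),
                                       product.GetSubstructMatch(core)), start=1):
    reactant.GetAtomWithIdx(ra).SetAtomMapNum(map_num)
    product.GetAtomWithIdx(pa).SetAtomMapNum(map_num)

print(Chem.MolToSmiles(reactant), ">>", Chem.MolToSmiles(product))
```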

  19. A Framework for Mapping User-Designed Forms to Relational Databases

    ERIC Educational Resources Information Center

    Khare, Ritu

    2011-01-01

    In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…

  20. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes

    PubMed Central

    Corcoran, Callan C.; Grady, Cameron R.; Pisitkun, Trairak; Parulekar, Jaya

    2017-01-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database (https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database (Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. PMID:27974320

  1. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries, which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.

  2. SITEX 2.0: Projections of protein functional sites on eukaryotic genes. Extension with orthologous genes.

    PubMed

    Medvedeva, Irina V; Demenkov, Pavel S; Ivanisenko, Vladimir A

    2017-04-01

    Functional sites define the diversity of protein functions and are the central object of research on the structural and functional organization of proteins. The mechanisms underlying the emergence of protein functional sites and their variability during evolution include duplication, shuffling, insertion, and deletion of exons in genes. The study of the correlation between site structure and exon structure serves as the basis for an in-depth understanding of site organization. In this regard, the development of programming resources that allow the mutual projection of the exon structure of genes and the primary and tertiary structures of the encoded proteins remains a pressing problem. Previously, we developed the SitEx system, which provides information about protein and gene sequences with mapped exon borders and the amino acid positions of protein functional sites. The database included information on proteins with known 3D structure. However, data with respect to orthologs were not available. Therefore, we added the projection of site positions onto the exon structures of orthologs in SitEx 2.0. We implemented database searches by site conservation variability and by site discontinuity across exon structure. Inclusion of the information on orthologs expanded the possibilities of using SitEx to solve problems regarding the analysis of the structural and functional organization of proteins. Database URL: http://www-bionet.sscc.ru/sitex/ .

  3. Assessment and mapping of water pollution indices in zone-III of municipal corporation of hyderabad using remote sensing and geographic information system.

    PubMed

    Asadi, S S; Vuppala, Padmaja; Reddy, M Anji

    2005-01-01

    A preliminary survey of the area under Zone-III of MCH was undertaken to assess the ground water quality, demonstrate its spatial distribution, and correlate it with land use patterns using advanced techniques of remote sensing and geographic information systems (GIS). Twenty-seven ground water samples were collected and chemically analyzed to form the attribute database. A water quality index was calculated from the measured parameters, based on which the study area was classified into five groups with respect to the suitability of water for drinking purposes. Thematic maps, viz., base map, road network, drainage, and land use/land cover, were prepared from IRS ID PAN + LISS III merged satellite imagery, forming the spatial database. The attribute database was integrated with the spatial sampling-location map in Arc/Info, and maps showing the spatial distribution of water quality parameters were prepared in ArcView. Results indicated that high concentrations of total dissolved solids (TDS), nitrates, fluorides, and total hardness were observed in a few industrial and densely populated areas, indicating deteriorated water quality, while the other areas exhibited moderate to good water quality.
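
    Water quality indices of this kind are commonly computed as a weighted sum of sub-indices relative to permissible limits. The sketch below shows one common weighted arithmetic formulation under assumed standards and sample values; it does not reproduce the parameters or weights used in the study.

```python
# Permissible limits (mg/L) and one sample's measured values -- all invented.
standards = {"TDS": 500.0, "nitrate": 45.0, "fluoride": 1.0, "hardness": 300.0}
sample = {"TDS": 820.0, "nitrate": 60.0, "fluoride": 1.4, "hardness": 410.0}

# Weight each parameter inversely to its standard, then normalise to sum to 1.
raw_weights = {p: 1.0 / s for p, s in standards.items()}
k = 1.0 / sum(raw_weights.values())
weights = {p: k * w for p, w in raw_weights.items()}

# Sub-index = observed / standard * 100; WQI = weighted sum of sub-indices.
wqi = sum(weights[p] * (sample[p] / standards[p]) * 100 for p in standards)
print(f"WQI = {wqi:.0f}")  # values above 100 flag water unfit for drinking in this scheme
```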

  4. Map and data for Quaternary faults and folds in New Mexico

    USGS Publications Warehouse

    Machette, M.N.; Personius, S.F.; Kelson, K.I.; Haller, K.M.; Dart, R.L.

    1998-01-01

    The "World Map of Major Active Faults" Task Group is compiling a series of digital maps for the United States and other countries in the Western Hemisphere that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds; the companion database includes published information on these seismogenic features. The Western Hemisphere effort is sponsored by International Lithosphere Program (ILP) Task Group H-2, whereas the effort to compile a new map and database for the United States is funded by the Earthquake Reduction Program (ERP) through the U.S. Geological Survey. The maps and accompanying databases represent a key contribution to the new Global Seismic Hazards Assessment Program (ILP Task Group II-O) for the International Decade for Natural Disaster Reduction. This compilation, which describes evidence for surface faulting and folding in New Mexico, is the third of many similar State and regional compilations that are planned for the U.S. The compilation for West Texas is available as U.S. Geological Survey Open-File Report 96-002 (Collins and others, 1996 #993) and the compilation for Montana will be released as a Montana Bureau of Mines product (Haller and others, in press #1750).

  5. Version VI of the ESTree db: an improved tool for peach transcriptome analysis

    PubMed Central

    Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Merelli, Ivan; Barale, Francesca; Milanesi, Luciano; Stella, Alessandra; Pozzi, Carlo

    2008-01-01

    Background The ESTree database (db) is a collection of Prunus persica and Prunus dulcis EST sequences that in its current version encompasses 75,404 sequences from 3 almond and 19 peach libraries. Nine peach genotypes and four peach tissues are represented, from four fruit developmental stages. The aim of this work was to extend the already existing ESTree db by adding new sequences and analysis programs. Particular care was given to the implementation of the web interface, which allows querying each of the database features. Results A Perl modular pipeline is the backbone of sequence analysis in the ESTree db project. Outputs obtained during the pipeline steps are automatically arrayed into the fields of a MySQL database. Apart from standard clustering and annotation analyses, version VI of the ESTree db encompasses new tools for tandem repeat identification, annotation against genomic Rosaceae sequences, and positioning on the database of oligomer sequences that were used in a peach microarray study. Furthermore, known protein patterns and motifs were identified by comparison to PROSITE. Based on data retrieved from sequence annotation against the UniProtKB database, a script was prepared to track positions of homologous hits on the GO tree and build statistics on the distribution of ontologies across GO functional categories. EST mapping data were also integrated in the database. The PHP-based web interface was upgraded and extended. The aim of the authors was to enable querying the database according to all the biological aspects that can be investigated from the analysis of data available in the ESTree db. This is achieved by allowing multiple searches on logical subsets of sequences that represent different biological situations or features. Conclusions Version VI of the ESTree db offers a broad overview of peach gene expression. Sequence analysis results contained in the database, extensively linked to external related resources, represent a large amount of information that can be queried via the tools offered in the web interface. The flexibility and modularity of the ESTree analysis pipeline and of the web interface allowed the authors to set up similar structures for different datasets, with limited manual intervention. PMID:18387211

  6. Improvements in the Protein Identifier Cross-Reference service.

    PubMed

    Wein, Samuel P; Côté, Richard G; Dumousseau, Marine; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan A

    2012-07-01

    The Protein Identifier Cross-Reference (PICR) service is a tool that allows users to map protein identifiers, protein sequences and gene identifiers across over 100 different source databases. PICR takes input through an interactive website as well as Representational State Transfer (REST) and Simple Object Access Protocol (SOAP) services. It returns the results as HTML pages, XLS and CSV files. It has been in production since 2007 and has recently been enhanced to add new functionality and increase the number of databases it covers. Protein subsequences can be searched using the Basic Local Alignment Search Tool (BLAST) against the UniProt Knowledgebase (UniProtKB) to provide an entry point to the standard PICR mapping algorithm. In addition, gene identifiers from UniProtKB and Ensembl can now be submitted as input or mapped to as output from PICR. We have also implemented a 'best-guess' mapping algorithm for UniProt. In this article, we describe the usefulness of PICR, how these changes have been implemented, and the corresponding additions to the web services. Finally, we explain that the number of source databases covered by PICR has increased from the initial 73 to the current 102. New resources include several new species-specific Ensembl databases as well as the Ensembl Genomes ones. PICR can be accessed at http://www.ebi.ac.uk/Tools/picr/.
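
    A hedged sketch of calling an identifier-mapping REST service such as PICR is shown below. The endpoint path and parameter names are assumptions made for illustration, not the documented PICR API; consult the URL above for the real interface.

```python
# Hedged sketch of a REST identifier-mapping request in the style of PICR.
# The endpoint and parameter names are assumed for illustration only; they
# show the request/response pattern, not the service's documented contract.
import urllib.parse
import urllib.request

BASE_URL = "http://www.ebi.ac.uk/Tools/picr/rest/getUPIForAccession"  # assumed endpoint

def map_accession(accession, target_databases):
    """Ask the service to map one accession to identifiers in the target databases."""
    params = [("accession", accession)] + [("database", db) for db in target_databases]
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8")   # raw XML/HTML payload; parse as needed

if __name__ == "__main__":
    print(map_accession("P29375", ["ENSEMBL", "SWISSPROT"])[:200])
```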

  7. The zoonotic potential of Mycobacterium avium spp. paratuberculosis: a systematic review.

    PubMed

    Waddell, Lisa A; Rajić, Andrijana; Sargeant, Jan; Harris, Janet; Amezcua, Rocio; Downey, Lindsay; Read, Susan; McEwen, Scott A

    2008-01-01

    The zoonotic potential of Mycobacterium avium ssp. paratuberculosis (MAP) has been debated for almost a century because of similarities between Johne's Disease (JD) in cattle and Crohn's disease (CD) in humans. Our objective was to evaluate scientific literature investigating the potential association between these two diseases (MAP and CD) and the presence of MAP in retail milk or dairy products using a qualitative systematic review. The search strategy included 19 bibliographic databases, 8 conference proceedings, reference lists of 15 articles and contacting 28 topic-related scientists. Two independent reviewers performed relevance screening, quality assessment and data extraction stages of the review. Seventy-five articles were included. Among 60 case-control studies that investigated the association between MAP and CD, 37 were of acceptable quality. Twenty-three studies reported significant positive associations, 23 reported non-significant associations, and 14 did not detect MAP in any sample. Different laboratory tests, test protocols, types of samples and source populations were used in these studies, resulting in large variability among studies. Seven studies investigated the association between CD and JD: two challenge trials reported contradictory results, one cross-sectional study did not support the association, and four descriptive studies suggested that isolated MAP is often closely related to cattle isolates. MAP detection in raw and pasteurized milk was reported in several studies. Evidence for the zoonotic potential of MAP is not strong, but should not be ignored. Interdisciplinary collaboration among medical, veterinary and other public health officials may contribute to a better understanding of the potential routes of human exposure to MAP.

  8. Failure mode and effects analysis outputs: are they valid?

    PubMed

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number (RPN). Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues.
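
    The RPN criticized here is conventionally the product of three ordinal scores. The worked sketch below (with invented scores, not the study's data) shows one of the mathematical objections: very different failure profiles can receive the same priority.

```python
# Worked illustration of the conventional Risk Priority Number (RPN):
# RPN = severity x probability (occurrence) x detectability, each scored on an
# ordinal 1-10 scale. Two quite different failure profiles can yield the same
# RPN, one of the objections raised against multiplying ordinal scales.
def rpn(severity, probability, detectability):
    return severity * probability * detectability

failure_a = rpn(severity=9, probability=2, detectability=5)   # rare but severe
failure_b = rpn(severity=2, probability=9, detectability=5)   # frequent but minor
print(failure_a, failure_b)   # 90 90 -> identical priority despite different risk profiles
```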

  9. The National Landslide Database of Great Britain: Acquisition, communication and the role of social media

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Freeborough, Katy; Dashwood, Claire; Dijkstra, Tom; Lawrie, Kenneth

    2015-11-01

    The British Geological Survey (BGS) is the national geological agency for Great Britain that provides geoscientific information to government, other institutions and the public. The National Landslide Database has been developed by the BGS and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 17,000 records of landslide events to date, each documented as fully as possible for inland, coastal and artificial slopes. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and using citizen science through social media and other online resources. This information is invaluable for directing the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domains map currently under development, as well as regional mapping campaigns, rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures, an understanding of causative factors, their spatial distribution and likely impacts, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. Communication of landslide data through the Natural Hazard Partnership (NHP) and Hazard Impact Model contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP partnership and data collected for the National Landslide Database are used widely for the creation of these assessments. The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  10. A digital version of the 1970 U.S. Geological Survey topographic map of the San Francisco Bay region, three sheets, 1:125,000

    USGS Publications Warehouse

    Aitken, Douglas S.

    1997-01-01

    This Open-File report is a digital topographic map database. It contains a digital version of the 1970 U.S. Geological Survey topographic map of the San Francisco Bay Region (3 sheets), at a scale of 1:125,000. These ARC/INFO coverages are in vector format. The vectorization process has distorted characters representing letters and numbers, as well as some road and other symbols, making them difficult to read in some instances. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The content and character of the database and methods of obtaining it are described herein.

  11. Understanding the productive author who published papers in medicine using National Health Insurance Database: A systematic review and meta-analysis.

    PubMed

    Chien, Tsair-Wei; Chang, Yu; Wang, Hsien-Yi

    2018-02-01

    Many researchers have used the National Health Insurance database to publish medical papers, which are often retrospective, population-based cohort studies. However, such authors' research domains and academic characteristics remain unclear. By searching the PubMed database (Pubmed.com) with the keywords [Taiwan] and [National Health Insurance Research Database], we downloaded 2913 articles published from 1995 to 2017. Social network analysis (SNA), the Gini coefficient, and Google Maps were applied to these data to visualize: the most productive author; the pattern of coauthor collaboration teams; and the author's research domain, denoted by abstract keywords and PubMed MeSH (medical subject heading) terms. Utilizing the 2913 papers from Taiwan's National Health Insurance database, we chose the top 10 research teams shown on Google Maps and analyzed one author (Dr. Kao), who published 149 papers in the database in 2015. Over the past 15 years, Dr. Kao had 2987 connections with other coauthors from 13 research teams. The co-occurring abstract keywords with the highest frequency were cohort study and National Health Insurance Research Database. The most frequently co-occurring MeSH terms were tomography, X-ray computed, and positron-emission tomography. The concentration of the author's research domain was very low (Gini < 0.40). SNA incorporated with Google Maps and the Gini coefficient provides insight into the relationships between entities. The results obtained in this study can be applied to gain a comprehensive understanding of other productive authors in academia.
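
    The abstract does not state exactly how the Gini coefficient was computed, but one plausible reading is a Gini coefficient over keyword or MeSH-term frequencies as a measure of domain concentration. A hedged sketch of that computation, on hypothetical counts, follows.

```python
# Hedged sketch: Gini coefficient over MeSH-term/keyword frequencies as a
# measure of how concentrated an author's research domain is. A value near 0
# means the terms are spread evenly (a diffuse domain); near 1 means a few
# terms dominate. The study's exact computation is not specified here.
def gini(frequencies):
    xs = sorted(frequencies)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the ordered values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

term_counts = [42, 40, 39, 37, 35, 34, 30, 28]   # hypothetical keyword frequencies
print(round(gini(term_counts), 3))                # low value -> weakly concentrated domain
```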

  12. The National Landslide Database and GIS for Great Britain: construction, development, data acquisition, application and communication

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Dashwood, Claire; Freeborough, Katy

    2014-05-01

    The National Landslide Database has been developed by the British Geological Survey (BGS) and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 16,500 records of landslide events, each documented as fully as possible. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and crowd-sourcing information through social media and other online resources. This information is invaluable for the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domain map currently under development rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures and an understanding of causative factors and their spatial distribution, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. Communication of landslide data through the Natural Hazard Partnership (NHP) contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP and data collected for the National Landslide Database is used widely for the creation of these assessments. The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  13. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, and brain mappers, as well as for other clinical, pedagogical, and research endeavors.
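
    The BrainMapper functionality described above links stereotaxic coordinates to structure names. A minimal sketch of that lookup is shown below, assuming a labeled atlas volume stored as a 3-D array plus an affine from millimetre coordinates to voxel indices; the atlas, affine, and label table are hypothetical stand-ins, not the LONI implementation.

```python
# Minimal sketch of coordinate-to-structure lookup of the kind a "BrainMapper"
# component performs: convert a stereotaxic (mm) coordinate to a voxel index
# via an affine, then read the label from an atlas volume. The atlas, affine
# and label table below are tiny hypothetical stand-ins.
import numpy as np

atlas = np.zeros((4, 4, 4), dtype=int)
atlas[2, 2, 2] = 17                                   # pretend label 17 fills one voxel
labels = {0: "background", 17: "hippocampus (illustrative)"}
affine = np.eye(4)                                    # identity: mm coords == voxel indices

def structure_at(xyz_mm):
    """Return the structure name at a stereotaxic coordinate given in millimetres."""
    voxel = np.linalg.inv(affine) @ np.append(xyz_mm, 1.0)
    i, j, k = np.round(voxel[:3]).astype(int)
    return labels.get(int(atlas[i, j, k]), "unknown")

print(structure_at([2.0, 2.0, 2.0]))   # -> hippocampus (illustrative)
```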

  14. An approach for access differentiation design in medical distributed applications built on databases.

    PubMed

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application posits new essential problems for software. Particularly protection tools, which are sufficient separately, become deficient during the integration due to specific additional links and relationships not considered formerly. E.g., it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools, if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DMBS are suggested. Remote users connected via global networks are considered too.

  15. LINKING GPS DATA TO GIS DATABASES IN NATURALISTIC STUDIES: EXAMPLES FROM DRIVERS WITH OBSTRUCTIVE SLEEP APNEA

    PubMed Central

    Dawson, Jeffrey D.; Yu, Lixi; Sewell, Kelly; Skibbe, Adam; Aksan, Nazan S.; Tippin, Jon; Rizzo, Matthew

    2015-01-01

    Summary: In naturalistic studies, it is vital to give appropriate context when analyzing driving behaviors. Such contextualization can help address the hypotheses that explore a) how drivers perform within specific types of environment (e.g., road types, speed limits, etc.), and b) how often drivers are exposed to such specific environments. In order to perform this contextualization in an automated fashion, we are using Global Positioning System (GPS) data obtained at 1 Hz and merging this with Geographic Information Systems (GIS) databases maintained by the Iowa Department of Transportation (DOT). In this paper, we demonstrate our methods of doing this based on data from 43 drivers with obstructive sleep apnea (OSA). We also use maps from GIS software to illustrate how information can be displayed at the individual drive or day level, and we provide examples of some of the challenges that still need to be addressed. PMID:26665183
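
    The core linking step described here is map matching: snapping 1 Hz GPS fixes to the nearest road segment in a GIS layer so that each fix inherits roadway attributes. A hedged sketch follows; the segments and attributes are hypothetical, and production work would use projected coordinates, a spatial index, and the actual DOT roadway layers rather than this brute-force loop.

```python
# Hedged sketch of linking GPS fixes to GIS road attributes: snap each point to
# the nearest road segment and inherit its attributes (road type, speed limit).
# Segments and attributes are hypothetical stand-ins for a DOT roadway layer.
import math

ROADS = [  # (segment id, (x1, y1), (x2, y2), attributes), in projected metres
    ("seg-1", (0.0, 0.0), (100.0, 0.0), {"type": "arterial", "speed_mph": 45}),
    ("seg-2", (0.0, 50.0), (0.0, 150.0), {"type": "residential", "speed_mph": 25}),
]

def point_to_segment_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def match_fix(gps_xy):
    """Return (segment id, attributes) of the nearest road segment to a GPS fix."""
    seg = min(ROADS, key=lambda r: point_to_segment_distance(gps_xy, r[1], r[2]))
    return seg[0], seg[3]

print(match_fix((10.0, 3.0)))   # -> ('seg-1', {'type': 'arterial', 'speed_mph': 45})
```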

  16. Make Your Own Mashup Maps

    ERIC Educational Resources Information Center

    Lucking, Robert A.; Christmann, Edwin P.; Whiting, Mervyn J.

    2008-01-01

    "Mashup" is a new technology term used to describe a web application that combines data or technology from several different sources. You can apply this concept in your classroom by having students create their own mashup maps. Google Maps provides you with the simple tools, map databases, and online help you'll need to quickly master this…

  17. Geologic map and digital database of the Conejo Well 7.5 minute quadrangle, Riverside County, Southern California

    USGS Publications Warehouse

    Powell, Robert E.

    2001-01-01

    This data set maps and describes the geology of the Conejo Well 7.5 minute quadrangle, Riverside County, southern California. The quadrangle, situated in Joshua Tree National Park in the eastern Transverse Ranges physiographic and structural province, encompasses part of the northern Eagle Mountains and part of the south flank of Pinto Basin. It is underlain by a basement terrane comprising Proterozoic metamorphic rocks, Mesozoic plutonic rocks, and Mesozoic and Mesozoic or Cenozoic hypabyssal dikes. The basement terrane is capped by a widespread Tertiary erosion surface preserved in remnants in the Eagle Mountains and buried beneath Cenozoic deposits in Pinto Basin. Locally, Miocene basalt overlies the erosion surface. A sequence of at least three Quaternary pediments is planed into the north piedmont of the Eagle Mountains, each in turn overlain by successively younger residual and alluvial deposits. The Tertiary erosion surface is deformed and broken by north-northwest-trending, high-angle, dip-slip faults in the Eagle Mountains and an east-west trending system of high-angle dip- and left-slip faults. In and adjacent to the Conejo Well quadrangle, faults of the northwest-trending set displace Miocene sedimentary rocks and basalt deposited on the Tertiary erosion surface and Pliocene and (or) Pleistocene deposits that accumulated on the oldest pediment. Faults of this system appear to be overlain by Pleistocene deposits that accumulated on younger pediments. East-west trending faults are younger than and perhaps in part coeval with faults of the northwest-trending set. The Conejo Well database was created using ARCVIEW and ARC/INFO, which are geographical information system (GIS) software products of Environmental Systems Research Institute (ESRI). The database consists of the following items: (1) a map coverage showing faults and geologic contacts and units, (2) a separate coverage showing dikes, (3) a coverage showing structural data, (4) a point coverage containing line ornamentation, and (5) a scanned topographic base at a scale of 1:24,000. The coverages include attribute tables for geologic units (polygons and regions), contacts (arcs), and site-specific data (points). The database, accompanied by a pamphlet file and this metadata file, also includes the following graphic and text products: (1) A portable document file (.pdf) containing a navigable graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map and Database Units (DMU), a Correlation of Map and Database Units (CMU), and a key to point- and line-symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that describes the database and how to access it. Within the database, geologic contacts, faults, and dikes are represented as lines (arcs), geologic units as polygons and regions, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.

  18. Database resources of the National Center for Biotechnology Information: 2002 update

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.

    2002-01-01

    In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, Human–Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:11752242

  19. Traffic Sign Inventory from Google Street View Images

    NASA Astrophysics Data System (ADS)

    Tsai, Victor J. D.; Chen, Jyun-Han; Huang, Hsun-Sheng

    2016-06-01

    Traffic sign detection and recognition (TSDR) has drawn considerable attention in the development of intelligent transportation systems (ITS) and autonomous vehicle driving systems (AVDS) since the 1980s. Unlike general TSDR systems that deal with real-time images captured by in-vehicle cameras, this research aims to develop techniques for detecting, extracting, and positioning traffic signs from Google Street View (GSV) images along user-selected routes, for low-cost, high-volume, and rapid establishment of a traffic sign infrastructure database that may be associated with Google Maps. The framework and techniques employed in the proposed system are described.
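
    One common way to position a detected sign from street-level imagery is to intersect bearing rays from two panorama locations whose coordinates are known. The sketch below is a purely geometric illustration on a local planar east/north approximation; the camera positions and bearings are hypothetical, and the paper's actual GSV processing chain is more involved.

```python
# Hedged sketch: estimate a traffic sign's position by intersecting bearing
# rays from two panorama locations with known coordinates (local planar
# east/north approximation). Positions and bearings here are hypothetical.
import math

def intersect_bearings(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two rays given as (east, north) origins and compass bearings."""
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 using a 2x2 determinant.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        return None   # rays are (nearly) parallel
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

camera_a = (0.0, 0.0)      # metres east/north of a local origin (hypothetical)
camera_b = (10.0, 0.0)
print(intersect_bearings(camera_a, 45.0, camera_b, 315.0))   # -> approximately (5.0, 5.0)
```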

  20. Monitoring and analysis of the change process in curriculum mapping compared to the National Competency-based Learning Objective Catalogue for Undergraduate Medical Education (NKLM) at four medical faculties. Part I: Conducive resources and structures

    PubMed Central

    Lammerding-Koeppel, Maria; Giesler, Marianne; Gornostayeva, Maryna; Narciss, Elisabeth; Wosnik, Annette; Zipfel, Stephan; Griewatz, Jan; Fritze, Olaf

    2017-01-01

    Objective: After passing of the National Competency-based Learning Objectives Catalogue in Medicine (Nationaler Kompetenzbasierter Lernzielkatalog Medizin, [NKLM, retrieved on 22.03.2016]), the German medical faculties must take inventory and develop their curricula. NKLM contents are expected to be present, but not linked well or sensibly enough in locally grown curricula. Learning and examination formats must be reviewed for appropriateness and coverage of the competences. The necessary curricular transparency is best achieved by systematic curriculum mapping, combined with effective change management. Mapping a complex existing curriculum and convincing a faculty that this will have benefits is not easy. Headed by Tübingen, the faculties of Freiburg, Heidelberg, Mannheim and Tübingen take inventory by mapping their curricula in comparison to the NKLM, using the dedicated web-based MERLIN-database. This two-part article analyses and summarises how NKLM curriculum mapping could be successful in spite of resistance at the faculties. The target is conveying the widest possible overview of beneficial framework conditions, strategies and results. Part I of the article shows the beneficial resources and structures required for implementation of curriculum mapping at the faculties. Part II describes key factors relevant for motivating faculties and teachers during the mapping process. Method: The network project was systematically planned in advance according to steps of project and change management, regularly reflected on and adjusted together in workshops and semi-annual project meetings. From the beginning of the project, a grounded-theory approach was used to systematically collect detailed information on structures, measures and developments at the faculties using various sources and methods, to continually analyse them and to draw a final conclusion (sources: surveys among the project participants with questionnaires, semi-structured group interviews and discussions, guideline-supported individual interviews, informal surveys, evaluation of target agreements and protocols, openly discernible local, regional or over-regional structure-relevant events). Results: The following resources and structures support implementation of curriculum mapping at a faculty: Setting up a coordination agency (≥50% of a full position; support by student assistants), systematic project management, and development of organisation and communication structures with integration of the dean of study and teaching and pilot departments, as well as development of a user-friendly web-based mapping instrument. Acceptance of the mapping was increased particularly by visualisation of the results and early insight into indicative results relevant for the department. Conclusion: Successful NKLM curriculum mapping requires trained staff for coordination, resilient communication structures and a user-oriented mapping database. In alignment with literature, recommendations can be derived to support other faculties that want to map their curriculum. PMID:28293674

  1. Monitoring and analysis of the change process in curriculum mapping compared to the National Competency-based Learning Objective Catalogue for Undergraduate Medical Education (NKLM) at four medical faculties. Part I: Conducive resources and structures.

    PubMed

    Lammerding-Koeppel, Maria; Giesler, Marianne; Gornostayeva, Maryna; Narciss, Elisabeth; Wosnik, Annette; Zipfel, Stephan; Griewatz, Jan; Fritze, Olaf

    2017-01-01

    Objective: After passing of the National Competency-based Learning Objectives Catalogue in Medicine (Nationaler Kompetenzbasierter Lernzielkatalog Medizin, [NKLM, retrieved on 22.03.2016]), the German medical faculties must take inventory and develop their curricula. NKLM contents are expected to be present, but not linked well or sensibly enough in locally grown curricula. Learning and examination formats must be reviewed for appropriateness and coverage of the competences. The necessary curricular transparency is best achieved by systematic curriculum mapping, combined with effective change management. Mapping a complex existing curriculum and convincing a faculty that this will have benefits is not easy. Headed by Tübingen, the faculties of Freiburg, Heidelberg, Mannheim and Tübingen take inventory by mapping their curricula in comparison to the NKLM, using the dedicated web-based MERLIN-database. This two-part article analyses and summarises how NKLM curriculum mapping could be successful in spite of resistance at the faculties. The target is conveying the widest possible overview of beneficial framework conditions, strategies and results. Part I of the article shows the beneficial resources and structures required for implementation of curriculum mapping at the faculties. Part II describes key factors relevant for motivating faculties and teachers during the mapping process. Method: The network project was systematically planned in advance according to steps of project and change management, regularly reflected on and adjusted together in workshops and semi-annual project meetings. From the beginning of the project, a grounded-theory approach was used to systematically collect detailed information on structures, measures and developments at the faculties using various sources and methods, to continually analyse them and to draw a final conclusion (sources: surveys among the project participants with questionnaires, semi-structured group interviews and discussions, guideline-supported individual interviews, informal surveys, evaluation of target agreements and protocols, openly discernible local, regional or over-regional structure-relevant events). Results: The following resources and structures support implementation of curriculum mapping at a faculty: Setting up a coordination agency (≥50% of a full position; support by student assistants), systematic project management, and development of organisation and communication structures with integration of the dean of study and teaching and pilot departments, as well as development of a user-friendly web-based mapping instrument. Acceptance of the mapping was increased particularly by visualisation of the results and early insight into indicative results relevant for the department. Conclusion: Successful NKLM curriculum mapping requires trained staff for coordination, resilient communication structures and a user-oriented mapping database. In alignment with literature, recommendations can be derived to support other faculties that want to map their curriculum.

  2. Geologic Map of the State of Hawai`i

    USGS Publications Warehouse

    Sherrod, David R.; Sinton, John M.; Watkins, Sarah E.; Brunt, Kelly M.

    2007-01-01

    About This Map The State's geology is presented on eight full-color map sheets, one for each of the major islands. These map sheets, the illustrative meat of the publication, can be downloaded in pdf format, ready to print. Map scale is 1:100,000 for most of the islands, so that each map is about 27 inches by 36 inches. The Island of Hawai`i, largest of the islands, is depicted at a smaller scale, 1:250,000, so that it, too, can be shown on 36-inch-wide paper. The new publication isn't limited strictly to its map depictions. Twenty years have passed since David Clague and Brent Dalrymple published a comprehensive report that summarized the geology of all the islands, and it has been even longer since the last edition of Gordon Macdonald's book, Islands in the Sea, was revised. Therefore the new statewide geologic map includes an 83-page explanatory pamphlet that revisits many of the concepts that have evolved in our geologic understanding of the eight main islands. The pamphlet includes simplified page-size geologic maps for each island, summaries of all the radiometric ages that have been gathered since about 1960, generalized depictions of geochemical analyses for each volcano's eruptive stages, and discussion of some outstanding topics that remain controversial or deserving of additional research. The pamphlet also contains a complete description of map units, which enumerates the characteristics for each of the state's many stratigraphic formations shown on the map sheets. Since the late 1980s, the audience for geologic maps has grown as desktop computers and map-based software have become increasingly powerful. Those who prefer the convenience and access offered by Geographic Information Systems (GIS) can also feast on this publication. An electronic database, suitable for most GIS software applications, is available for downloading. The GIS database is in an Earth projection widely employed throughout the State of Hawai`i, using the North American datum of 1983 and the Universal Transverse Mercator system projection to zone 4. 'This digital statewide map allows engineers, consultants, and scientists from many different fields to take advantage of the geologic database,' said John Sinton, a geology professor at the University of Hawai`i, whose new mapping of the Wai`anae Range (West O`ahu) appears on the map. Indeed, when a testing version was first made available, most requests came from biologists, archaeologists, and soil scientists interested in applying the map's GIS database to their ongoing investigations. Another area newly depicted on the map, in addition to the Wai`anae Range, is Haleakala volcano, East Maui. So too for the active lava flows of Kilauea volcano, Island of Hawai`i, where the landscape has continued to evolve in the ten years since publication of the Big Island's revised geologic map. For the other islands, much of the map is compiled from mapping published in the 1930-1960s. This reliance stems partly from shortage of funding to undertake entirely new mapping but is warranted by the exemplary mapping of those early experts. The boundaries of all map units are digitized to show correctly on modern topographic maps.

  3. Lunar and Planetary Science XXXV: Moon and Mercury

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The session" Moon and Mercury" included the following reports:Helium Production of Prompt Neutrinos on the Moon; Vapor Deposition and Solar Wind Implantation on Lunar Soil-Grain Surfaces as Comparable Processes; A New Lunar Geologic Mapping Program; Physical Backgrounds to Measure Instantaneous Spin Components of Terrestrial Planets from Earth with Arcsecond Accuracy; Preliminary Findings of a Study of the Lunar Global Megaregolith; Maps Characterizing the Lunar Regolith Maturity; Probable Model of Anomalies in the Polar Regions of Mercury; Parameters of the Maximum of Positive Polarization of the Moon; Database Structure Development for Space Surveying Results by Moon -Zond Program; CM2-type Micrometeoritic Lunar Winds During the Late Heavy Bombardment; A Comparison of Textural and Chemical Features of Spinel Within Lunar Mare Basalts; The Reiner Gamma Formation as Characterized by Earth-based Photometry at Large Phase Angles; The Significance of the Geometries of Linear Graben for the Widths of Shallow Dike Intrusions on the Moon; Lunar Prospector Data, Surface Roughness and IR Thermal Emission of the Moon; The Influence of a Magma Ocean on the Lunar Global Stress Field Due to Tidal Interaction Between the Earth and Moon; Variations of the Mercurian Photometric Relief; A Model of Positive Polarization of Regolith; Ground Truth and Lunar Global Thorium Map Calibration: Are We There Yet?;and Space Weathering of Apollo 16 Sample 62255: Lunar Rocks as Witness Plates for Deciphering Regolith Formation Processes.

  4. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for science and land management in the vicinity of Utqiaġvik (Barrow) on the North Slope of Alaska.

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Escarzaga, S. M.; Gaylord, A. G.; Kassin, A.; Barba, M.; Tweedie, C. E.

    2017-12-01

    The Utqiaġvik (Barrow) area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience are diverse and include research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 18,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Recent advances include provision of differential global positioning system (dGPS) and high-resolution aerial imagery support to visiting scientists; analysis and multitemporal mapping of over 120 km of coastline for erosion monitoring; maintenance of a wireless micrometeorological sensor network; links to Barrow area datasets housed at national data archives; a NOAA-funded community outreach program for citizen science and public outreach on coastal erosion; and substantial upgrades to the BAID website. Web mapping applications that have launched to the public include: an Imagery Time Viewer that allows users to compare imagery of the Barrow area between 1948 and the present; a Coastal Erosion Viewer that allows users to view long-term (1955-2015) and recent (2013-2015) rates of erosion for the Barrow area; and a Community Planning tool that allows users to view and print dynamic reports based on an array of basemaps, including a new 0.5 m resolution wetlands map designed to enhance decision making for development and land management.

  5. PineElm_SSRdb: a microsatellite marker database identified from genomic, chloroplast, mitochondrial and EST sequences of pineapple (Ananas comosus (L.) Merrill).

    PubMed

    Chaudhary, Sakshi; Mishra, Bharat Kumar; Vivek, Thiruvettai; Magadum, Santoshkumar; Yasin, Jeshima Khan

    2016-01-01

    Simple Sequence Repeats, or microsatellites, are versatile molecular genetic markers. There are only a few reports of SSR identification and development in pineapple. The complete genome sequence of pineapple available in the public domain can be used to develop numerous novel SSRs. Therefore, an attempt was made to identify SSRs from genomic, chloroplast, mitochondrial and EST sequences of pineapple, which will help in deciphering the genetic makeup of its germplasm resources. A total of 359511 SSRs were identified in pineapple (356385 from the genome sequence, 45 from the chloroplast sequence, 249 from the mitochondrial sequence and 2832 from EST sequences). The list of EST-SSR markers and their details are available in the database. PineElm_SSRdb is an open source database available for non-commercial academic purposes at http://app.bioelm.com/, with a mapping tool that can generate circular maps of selected marker sets. This database will be of immense use to breeders, researchers and graduates working on Ananas spp. and to others working on cross-species transferability of markers, investigating diversity, mapping and DNA fingerprinting.
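
    The core SSR-mining step can be illustrated with a regular expression that finds tandem repeats of 1-6 bp motifs above a minimum total length. The motif length, repeat count, and length thresholds below are illustrative; the abstract does not name the actual tool or parameters used.

```python
# Illustrative microsatellite (SSR) scan: find tandem repeats of 1-6 bp motifs
# repeated at least five times in a row and long enough to pass a minimum
# total length. Thresholds are illustrative, not the database's actual settings.
import re

SSR_PATTERN = re.compile(r"([ACGT]{1,6}?)\1{4,}")   # motif followed by >= 4 more copies

def find_ssrs(sequence, min_total_len=10):
    hits = []
    for m in SSR_PATTERN.finditer(sequence.upper()):
        if len(m.group(0)) >= min_total_len:
            hits.append((m.start() + 1, m.group(1), len(m.group(0)) // len(m.group(1))))
    return hits   # list of (1-based start, motif, repeat count)

demo = "TTTGAGAGAGAGAGAGACCATCATCATCATCATCGGG"
print(find_ssrs(demo))   # -> [(4, 'GA', 7), (19, 'CAT', 5)] for this toy sequence
```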

  6. Geology of the Cape Mendocino, Eureka, Garberville, and Southwestern Part of the Hayfork 30 x 60 Minute Quadrangles and Adjacent Offshore Area, Northern California

    USGS Publications Warehouse

    McLaughlin, Robert J.; Ellen, S.D.; Blake, M.C.; Jayko, Angela S.; Irwin, W.P.; Aalto, K.R.; Carver, G.A.; Clarke, S.H.; Barnes, J.B.; Cecil, J.D.; Cyr, K.A.

    2000-01-01

    Introduction These geologic maps and accompanying structure sections depict the geology and structure of much of northwestern California and the adjacent continental margin. The map area includes the Mendocino triple junction, which is the juncture of the North American continental plate with two plates of the Pacific ocean basin. The map area also encompasses major geographic and geologic provinces of northwestern California. The maps incorporate much previously unpublished geologic mapping done between 1980 and 1995, as well as published mapping done between about 1950 and 1978. To construct structure sections to mid-crustal depths, we integrate the surface geology with interpretations of crustal structure based on seismicity, gravity and aeromagnetic data, offshore structure, and seismic reflection and refraction data. In addition to describing major geologic and structural features of northwestern California, the geologic maps have the potential to address a number of societally relevant issues, including hazards from earthquakes, landslides, and floods and problems related to timber harvest, wildlife habitat, and changing land use. All of these topics will continue to be of interest in the region, as changing land uses and population density interact with natural conditions. In these interactions, it is critical that the policies and practices affecting man and the environment integrate an adequate understanding of the geology. This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (ceghmf.ps, ceghmf.pdf, ceghmf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller.

  7. Active, capable, and potentially active faults - a paleoseismic perspective

    USGS Publications Warehouse

    Machette, M.N.

    2000-01-01

    Maps of faults (geologically defined source zones) may portray seismic hazards in a wide range of completeness depending on which types of faults are shown. Three fault terms - active, capable, and potential - are used in a variety of ways for different reasons or applications. Nevertheless, to be useful for seismic-hazards analysis, fault maps should encompass a time interval that includes several earthquake cycles. For example, if the common recurrence in an area is 20,000-50,000 years, then maps should include faults that are 50,000-100,000 years old (two to five typical earthquake cycles), thus allowing for temporal variability in slip rate and recurrence intervals. Conversely, in more active areas such as plate boundaries, maps showing faults that are <10,000 years old should include those with at least 2 to as many as 20 paleoearthquakes. For the International Lithosphere Program's Task Group II-2 Project on Major Active Faults of the World, our maps and database will show five age categories and four slip rate categories that allow one to select differing time spans and activity rates for seismic-hazard analysis depending on tectonic regime. The maps are accompanied by a database that describes evidence for Quaternary faulting, geomorphic expression, and paleoseismic parameters (slip rate, recurrence interval and time of most recent surface faulting). These maps and databases provide an inventory of faults that would be defined as active, capable, and potentially active for seismic-hazard assessments.

  8. 1986 Year End Report for Road Following at Carnegie-Mellon

    DTIC Science & Technology

    1987-05-01

    how to make them work efficiently. We designed a hierarchical structure and a monitor module which manages all parts of the hierarchy (see figure 1...database, called the Local Map, is managed by a program known as the Local Map Builder (LMB). Each module stores and retrieves information in the...knowledge-intensive modules, and a database manager that synchronizes the modules-is characteristic of a traditional blackboard system. Such a system is

  9. Geologic map of Alaska

    USGS Publications Warehouse

    Wilson, Frederic H.; Hults, Chad P.; Mull, Charles G.; Karl, Susan M.

    2015-12-31

    This Alaska compilation is unique in that it is integrated with a rich database of information provided in the spatial datasets and standalone attribute databases. Within the spatial files every line and polygon is attributed to its original source; the references to these sources are contained in related tables, as well as in stand-alone tables. Additional attributes include typical lithology, geologic setting, and age range for the map units. Also included are tables of radiometric ages.

  10. Computer-Aided Clinical Trial Recruitment Based on Domain-Specific Language Translation: A Case Study of Retinopathy of Prematurity

    PubMed Central

    2017-01-01

    Reusing the data from healthcare information systems can effectively facilitate clinical trials (CTs). How to select candidate patients eligible for CT recruitment criteria is a central task. Related work either depends on DBA (database administrator) to convert the recruitment criteria to native SQL queries or involves the data mapping between a standard ontology/information model and individual data source schema. This paper proposes an alternative computer-aided CT recruitment paradigm, based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules. The referenced rule variables are all from the underlying database schema. Then the production rule is translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is directly mapped to native database queries (e.g., SQL) automated by ORM (object-relational mapping). PMID:29065644
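
    The paradigm above translates recruitment criteria expressed as production rules into an intermediate query DSL (the paper uses LINQ) and then into native SQL via ORM. The sketch below substitutes a tiny Python rule-to-SQL translator as an analogous illustration of the rule -> intermediate query -> native SQL chain; the table and column names are hypothetical.

```python
# Analogous illustration of translating a production rule into a native query.
# The paper's stack uses production rules, LINQ and an ORM; this Python sketch
# is a stand-in whose table and column names are hypothetical.
OPERATORS = {"eq": "=", "lt": "<", "gt": ">", "le": "<=", "ge": ">="}

def rule_to_sql(table, conditions):
    """conditions: list of (column, operator, value) triples taken from a production rule."""
    clauses, params = [], []
    for column, op, value in conditions:
        clauses.append(f"{column} {OPERATORS[op]} ?")
        params.append(value)
    sql = f"SELECT patient_id FROM {table} WHERE " + " AND ".join(clauses)
    return sql, params

# Recruitment criterion (illustrative): gestational age < 32 weeks AND birth weight < 1500 g.
rule = [("gestational_age_weeks", "lt", 32), ("birth_weight_g", "lt", 1500)]
sql, params = rule_to_sql("neonatal_records", rule)
print(sql)     # SELECT patient_id FROM neonatal_records WHERE gestational_age_weeks < ? AND birth_weight_g < ?
print(params)  # [32, 1500]
```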

  11. e23D: database and visualization of A-to-I RNA editing sites mapped to 3D protein structures.

    PubMed

    Solomon, Oz; Eyal, Eran; Amariglio, Ninette; Unger, Ron; Rechavi, Gidi

    2016-07-15

    e23D, a database of A-to-I RNA editing sites from human, mouse and fly mapped to evolutionarily related protein 3D structures, is presented. Genomic coordinates of A-to-I RNA editing sites are converted to protein coordinates and mapped onto 3D structures from PDB or theoretical models from ModBase. e23D allows visualization of the protein structure, modeling of recoding events and orientation of the editing with respect to nearby genomic functional sites from databases of disease-causing mutations and genomic polymorphism. Availability: http://www.sheba-cancer.org.il/e23D. Contact: oz.solomon@live.biu.ac.il or Eran.Eyal@sheba.health.gov.il.
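
    Once an editing site has been projected onto the coding sequence, the conversion to protein coordinates reduces to simple arithmetic. A minimal sketch follows, under the simplifying assumption that the position is already expressed relative to the CDS start; real pipelines must first project genomic coordinates through exon structure and strand, which is omitted here.

```python
# Minimal sketch of the coordinate arithmetic behind mapping an editing site to
# a protein residue: a CDS-relative nucleotide position maps to a residue
# number and a position within the codon. Exon/strand handling is omitted.
def cds_position_to_residue(cds_pos_1based):
    residue = (cds_pos_1based - 1) // 3 + 1          # which amino acid is affected
    codon_position = (cds_pos_1based - 1) % 3 + 1    # 1, 2 or 3 within that codon
    return residue, codon_position

print(cds_position_to_residue(100))   # -> (34, 1): nucleotide 100 edits codon 34, first base
```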

  12. Computer-Aided Clinical Trial Recruitment Based on Domain-Specific Language Translation: A Case Study of Retinopathy of Prematurity.

    PubMed

    Zhang, Yinsheng; Zhang, Guoming; Shang, Qian

    2017-01-01

    Reusing the data from healthcare information systems can effectively facilitate clinical trials (CTs). How to select candidate patients eligible for CT recruitment criteria is a central task. Related work either depends on DBA (database administrator) to convert the recruitment criteria to native SQL queries or involves the data mapping between a standard ontology/information model and individual data source schema. This paper proposes an alternative computer-aided CT recruitment paradigm, based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules. The referenced rule variables are all from the underlying database schema. Then the production rule is translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is directly mapped to native database queries (e.g., SQL) automated by ORM (object-relational mapping).

  13. GEOGRAPHIC INFORMATION SYSTEM APPROACH FOR PLAY PORTFOLIOS TO IMPROVE OIL PRODUCTION IN THE ILLINOIS BASIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beverly Seyler; John Grube

    2004-12-10

    Oil and gas have been commercially produced in Illinois for over 100 years. Existing commercial production is from more than fifty-two named pay horizons in Paleozoic rocks ranging in age from Middle Ordovician to Pennsylvanian. Over 3.2 billion barrels of oil have been produced. Recent calculations indicate that remaining mobile resources in the Illinois Basin may be on the order of several billion barrels. Thus, large quantities of oil, potentially recoverable using current technology, remain in Illinois oil fields despite a century of development. Many opportunities for increased production may have been missed due to complex development histories, multiple stacked pays, and commingled production which makes thorough exploitation of pays and the application of secondary or improved/enhanced recovery strategies difficult. Access to data, and the techniques required to evaluate and manage large amounts of diverse data, are major barriers to increased production of critical reserves in the Illinois Basin. These constraints are being alleviated by the development of a database access system using a Geographic Information System (GIS) approach for evaluation and identification of underdeveloped pays. The Illinois State Geological Survey has developed a methodology that is being used by industry to identify underdeveloped areas (UDAs) in and around petroleum reservoirs in Illinois using a GIS approach. This project utilizes a statewide oil and gas Oracle® database to develop a series of Oil and Gas Base Maps with well location symbols that are color-coded by producing horizon. Producing horizons are displayed as layers and can be selected as separate or combined layers that can be turned on and off. Map views can be customized to serve individual needs and page size maps can be printed. A core analysis database with over 168,000 entries has been compiled and assimilated into the ISGS Enterprise Oracle database. Maps of wells with core data have been generated. Data from over 1,700 Illinois waterflood units and waterflood areas have been entered into an Access® database. The waterflood area data has also been assimilated into the ISGS Oracle database for mapping and dissemination on the ArcIMS website. Formation depths for the Beech Creek Limestone, Ste. Genevieve Limestone and New Albany Shale in all of the oil producing region of Illinois have been calculated and entered into a digital database. Digital contoured structure maps have been constructed, edited and added to the ILoil website as map layers. This technology/methodology addresses the long-standing constraints related to information access and data management in Illinois by significantly simplifying the laborious process that industry presently must use to identify underdeveloped pay zones in Illinois.

  14. Map and database of Quaternary faults in Venezuela and its offshore regions

    USGS Publications Warehouse

    Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.

    2000-01-01

    As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Hazard Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS’s National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARC/INFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.

  15. NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information

    USGS Publications Warehouse

    ,

    2004-01-01

    Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.
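
    The model's top-level geologic classes can be pictured as a small, technology-neutral schema. A hedged dataclass sketch is given below; the field names are illustrative only and are not the normative NADM definitions, which are richer and encoding-neutral (XML, RDF, or OWL are all possible realizations).

```python
# Hedged sketch of a few NADM-style top-level classes as plain data structures.
# Field names are illustrative only, not the normative NADM conceptual model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeologicAge:
    name: str                      # e.g. a controlled-vocabulary era or epoch term
    older_bound_ma: Optional[float] = None
    younger_bound_ma: Optional[float] = None

@dataclass
class EarthMaterial:
    name: str                      # a substance, e.g. "basalt"

@dataclass
class GeologicUnit:
    name: str                      # a part of the Earth, e.g. a formation
    materials: List[EarthMaterial] = field(default_factory=list)
    age: Optional[GeologicAge] = None

unit = GeologicUnit("Hypothetical Basalt Member",
                    [EarthMaterial("basalt")],
                    GeologicAge("Miocene", 23.0, 5.3))
print(unit.name, unit.age.name)
```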

  16. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases.
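
    The "simple intersection operation" described above can be illustrated as an elementwise product of per-pixel maps, each normalized to [0, 1]. In the sketch below, random arrays stand in for the paper's learned Feature-Prior, Position-Prior, and Feature-Distribution maps; it shows only the combination step, not the SVR-based estimation.

```python
# Sketch of the intersection-style combination step: three per-pixel property
# maps, each normalized to [0, 1], are multiplied elementwise to give a
# saliency map. Random arrays stand in for the learned property maps.
import numpy as np

def normalize(m):
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

rng = np.random.default_rng(0)
h, w = 48, 64
feature_prior = rng.random((h, w))
position_prior = rng.random((h, w))
feature_distribution = rng.random((h, w))

saliency = normalize(normalize(feature_prior)
                     * normalize(position_prior)
                     * normalize(feature_distribution))
print(saliency.shape, float(saliency.max()))   # (48, 64) and a peak value of 1.0
```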

  17. In campus location finder using mobile application services

    NASA Astrophysics Data System (ADS)

    Fai, Low Weng; Audah, Lukman

    2017-09-01

    Navigation services have become very common; applications include Google Maps, Waze, and others. Although such navigation applications provide routing in open areas, not all buildings are recorded in their databases. In this project, an application named "U Finder" was developed for indoor and outdoor navigation in Universiti Tun Hussein Onn Malaysia (UTHM). It is intended to help visitors and new incoming students by navigating them from their current location to a destination. The Thunkable website was used to build the application for outdoor and indoor navigation. Outdoor navigation is linked to Google Maps, while indoor navigation uses QR codes for positioning and routing pictures for guidance. Outdoor navigation can route users to the main faculties in UTHM, and indoor navigation is currently implemented only for the G1 building in UTHM.
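
    The indoor positioning step boils down to reading a location identifier from a scanned QR code and looking it up in a table of indoor positions and pre-stored routing images. The sketch below illustrates that lookup; the identifiers, coordinates, and file names are hypothetical stand-ins for whatever the "U Finder" app actually stores.

```python
# Hedged sketch of QR-code indoor positioning: the scanned code carries a
# location identifier that is looked up in a table of indoor positions and
# pre-stored routing images. All identifiers and entries are hypothetical.
INDOOR_NODES = {
    "G1-L1-ENTRANCE": {"floor": 1, "x_m": 0.0, "y_m": 0.0,
                       "route_image": "g1_entrance_to_lab.png"},
    "G1-L1-LAB3":     {"floor": 1, "x_m": 24.5, "y_m": 8.0,
                       "route_image": "g1_lab3_to_exit.png"},
}

def locate(qr_payload):
    """Return the indoor position record for a scanned QR payload, if known."""
    node = INDOOR_NODES.get(qr_payload.strip().upper())
    return node if node else {"error": "unknown location code"}

print(locate("g1-l1-entrance"))
```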

  18. Geologic and geophysical maps of the El Casco 7.5′ quadrangle, Riverside County, southern California, with accompanying geologic-map database

    USGS Publications Warehouse

    Matti, J.C.; Morton, D.M.; Langenheim, V.E.

    2015-01-01

    Geologic information contained in the El Casco database is general-purpose data applicable to land-related investigations in the earth and biological sciences. The term “general-purpose” means that all geologic-feature classes have minimal information content adequate to characterize their general geologic characteristics and to interpret their general geologic history. However, no single feature class has enough information to definitively characterize its properties and origin. For this reason the database cannot be used for site-specific geologic evaluations, although it can be used to plan and guide investigations at the site-specific level.

  19. The National Map - Orthoimagery Layer

    USGS Publications Warehouse

    ,

    2007-01-01

    Many Federal, State, and local agencies use a common set of framework geographic information databases as a tool for economic and community development, land and natural resource management, and health and safety services. Emergency management and homeland security applications rely on this information. Private industry, nongovernmental organizations, and individual citizens use the same geographic data. Geographic information underpins an increasingly large part of the Nation's economy. The U.S. Geological Survey (USGS) is developing The National Map to be a seamless, continually maintained, and nationally consistent set of online, public domain, framework geographic information databases. The National Map will serve as a foundation for integrating, sharing, and using data easily and consistently. The data will be the source of revised paper topographic maps. The National Map includes digital orthorectified imagery; elevation data; vector data for hydrography, transportation, boundary, and structure features; geographic names; and land cover information.

  20. Map and database of Quaternary faults and folds in Colombia and its offshore regions

    USGS Publications Warehouse

    Paris, Gabriel; Machette, Michael N.; Dart, Richard L.; Haller, Kathleen M.

    2000-01-01

    As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey (USGS) is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. To date, the project has published fault and fold maps for Costa Rica (Montero and others, 1998), Panama (Cowan and others, 1998), Venezuela (Audemard and others, 2000), Bolivia/Chile (Lavenu and others, 2000), and Argentina (Costa and others, 2000). The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Disaster Reduction.

  1. Geologic map of Chickasaw National Recreation Area, Murray County, Oklahoma

    USGS Publications Warehouse

    Blome, Charles D.; Lidke, David J.; Wahl, Ronald R.; Golab, James A.

    2013-01-01

    This 1:24,000-scale geologic map is a compilation of previous geologic maps and new geologic mapping of areas in and around Chickasaw National Recreation Area. The geologic map includes revisions of numerous unit contacts and faults, and a number of previously “undifferentiated” rock units were subdivided in some areas. Numerous circular-shaped hills in and around Chickasaw National Recreation Area are probably the result of karst-related collapse and may represent the erosional remnants of large, exhumed sinkholes. Geospatial registration of existing, smaller scale (1:72,000- and 1:100,000-scale) geologic maps of the area and construction of an accurate Geographic Information System (GIS) database preceded 2 years of fieldwork wherein previously mapped geology (unit contacts and faults) was verified and new geologic mapping was carried out. The geologic map of Chickasaw National Recreation Area and this pamphlet include information on how the geologic units and structural features in the map area relate to the formation of the northern Arbuckle Mountains and the Arbuckle-Simpson aquifer. The development of an accurate geospatial GIS database and the use of a handheld computer in the field greatly increased both the accuracy and efficiency of producing the 1:24,000-scale geologic map.

  2. Linking NCBI to Wikipedia: a wiki-based approach.

    PubMed

    Page, Roderic D M

    2011-03-31

    The NCBI Taxonomy underpins many bioinformatics and phyloinformatics databases, but by itself provides limited information on the taxa it contains. One readily available source of information on many taxa is Wikipedia. This paper describes iPhylo Linkout, a Semantic wiki that maps taxa in NCBI's taxonomy database onto corresponding pages in Wikipedia. Storing the mapping in a wiki makes it easy to edit, correct, or otherwise annotate the links between NCBI and Wikipedia. The mapping currently comprises some 53,000 taxa, and is available at http://iphylo.org/linkout. The links between NCBI and Wikipedia are also made available to NCBI users through the NCBI LinkOut service.
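
    The core of such a link-out service is a curated mapping from NCBI taxonomy IDs to Wikipedia page titles. The sketch below shows the idea with a hand-built dictionary; the example IDs and titles are illustrative and not drawn from the iPhylo Linkout data.

    ```python
    # Minimal sketch: map NCBI taxonomy IDs to Wikipedia page titles and build link-out URLs.
    taxid_to_wikipedia = {
        9606: "Human",   # hypothetical example entries
        9685: "Cat",
    }

    def linkout_url(taxid: int):
        """Return a Wikipedia URL for a taxon, or None if no mapping is recorded."""
        title = taxid_to_wikipedia.get(taxid)
        if title is None:
            return None
        return f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}"

    for taxid in (9606, 9685, 12345):
        print(taxid, linkout_url(taxid))
    ```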

  3. Design and implementation of a CORBA-based genome mapping system prototype.

    PubMed

    Hu, J; Mungall, C; Nicholson, D; Archibald, A L

    1998-01-01

    CORBA (Common Object Request Broker Architecture), as an open standard, is considered to be a good solution for the development and deployment of applications in distributed heterogeneous environments. This technology can be applied in the bioinformatics area to enhance utilization, management and interoperation between biological resources. This paper investigates issues in developing CORBA applications for genome mapping information systems in the Internet environment with emphasis on database connectivity and graphical user interfaces. The design and implementation of a CORBA prototype for an animal genome mapping database are described. The prototype demonstration is available via: http://www.ri.bbsrc.ac.uk/ark_corba/. jian.hu@bbsrc.ac.uk

  4. GIS applications for military operations in coastal zones

    USGS Publications Warehouse

    Fleming, S.; Jordan, T.; Madden, M.; Usery, E.L.; Welch, R.

    2009-01-01

    In order to successfully support current and future US military operations in coastal zones, geospatial information must be rapidly integrated and analyzed to meet ongoing force structure evolution and new mission directives. Coastal zones in a military-operational environment are complex regions that include sea, land and air features that demand high-volume databases of extreme detail within relatively narrow geographic corridors. Static products in the form of analog maps at varying scales traditionally have been used by military commanders and their operational planners. The rapidly changing battlefield of 21st Century warfare, however, demands dynamic mapping solutions. Commercial geographic information system (GIS) software for military-specific applications is now being developed and employed with digital databases to provide customized digital maps of variable scale, content and symbolization tailored to unique demands of military units. Research conducted by the Center for Remote Sensing and Mapping Science at the University of Georgia demonstrated the utility of GIS-based analysis and digital map creation when developing large-scale (1:10,000) products from littoral warfare databases. The methodology employed (selection of data sources, including high-resolution commercial images and Lidar; establishment of analysis/modeling parameters; conduct of vehicle mobility analysis; development of models; and generation of products such as a continuous sea-land DEM and geo-visualization of changing shorelines with tidal levels) is discussed. Based on observations and identified needs from the National Geospatial-Intelligence Agency, formerly the National Imagery and Mapping Agency, and the Department of Defense, prototype GIS models for military operations in sea, land and air environments were created from multiple data sets of a study area at US Marine Corps Base Camp Lejeune, North Carolina. Results of these models, along with methodologies for developing large-scale littoral warfare databases, aid the National Geospatial-Intelligence Agency in meeting littoral warfare analysis, modeling and map generation requirements for US military organizations. © 2008 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).

  5. GIS applications for military operations in coastal zones

    NASA Astrophysics Data System (ADS)

    Fleming, S.; Jordan, T.; Madden, M.; Usery, E. L.; Welch, R.

    In order to successfully support current and future US military operations in coastal zones, geospatial information must be rapidly integrated and analyzed to meet ongoing force structure evolution and new mission directives. Coastal zones in a military-operational environment are complex regions that include sea, land and air features that demand high-volume databases of extreme detail within relatively narrow geographic corridors. Static products in the form of analog maps at varying scales traditionally have been used by military commanders and their operational planners. The rapidly changing battlefield of 21st Century warfare, however, demands dynamic mapping solutions. Commercial geographic information system (GIS) software for military-specific applications is now being developed and employed with digital databases to provide customized digital maps of variable scale, content and symbolization tailored to unique demands of military units. Research conducted by the Center for Remote Sensing and Mapping Science at the University of Georgia demonstrated the utility of GIS-based analysis and digital map creation when developing large-scale (1:10,000) products from littoral warfare databases. The methodology employed (selection of data sources, including high-resolution commercial images and Lidar; establishment of analysis/modeling parameters; conduct of vehicle mobility analysis; development of models; and generation of products such as a continuous sea-land DEM and geo-visualization of changing shorelines with tidal levels) is discussed. Based on observations and identified needs from the National Geospatial-Intelligence Agency, formerly the National Imagery and Mapping Agency, and the Department of Defense, prototype GIS models for military operations in sea, land and air environments were created from multiple data sets of a study area at US Marine Corps Base Camp Lejeune, North Carolina. Results of these models, along with methodologies for developing large-scale littoral warfare databases, aid the National Geospatial-Intelligence Agency in meeting littoral warfare analysis, modeling and map generation requirements for US military organizations.

  6. Implementing managed alcohol programs in hospital settings: A review of academic and grey literature.

    PubMed

    Brooks, Hannah L; Kassam, Shehzad; Salvalaggio, Ginetta; Hyshka, Elaine

    2018-04-01

    People with severe alcohol use disorders are at increased risk of poor acute-care outcomes, in part due to difficulties maintaining abstinence from alcohol while hospitalised. Managed alcohol programs (MAP), which administer controlled doses of beverage alcohol to prevent withdrawal and stabilise drinking patterns, are one strategy for increasing adherence to treatment, and improving health outcomes for hospital inpatients with severe alcohol use disorders. Minimal research has examined the implementation of MAPs in hospital settings. We conducted a scoping review to describe extant literature on MAPs in community settings, as well as the therapeutic provision of alcohol to hospital inpatients, to assess the feasibility of implementing formal MAPs in hospital settings and identify knowledge gaps requiring further study. Four academic and 10 grey literature databases were searched. Evidence was synthesised using quantitative and qualitative approaches. Forty-two studies met review inclusion criteria. Twenty-eight examined the administration of alcohol to hospital inpatients, with most reporting positive outcomes related to prevention or treatment of alcohol withdrawal. Fourteen studies examined MAPs in the community and reported that they help stabilise drinking patterns, reduce alcohol-related harms and facilitate non-judgemental health and social care. MAPs in the community have been well described and research has documented effective provision of alcohol in hospital settings for addressing withdrawal. Implementing MAPs as a harm reduction approach in hospital settings is potentially feasible. However, there remains a need to build off extant literature and develop and evaluate standardised MAP protocols tailored to acute-care settings. © 2018 Australasian Professional Society on Alcohol and other Drugs.

  7. Geology of the Palo Alto 30 x 60 minute quadrangle, California: A digital database

    USGS Publications Warehouse

    Brabb, Earl E.; Graymer, R.W.; Jones, David Lawrence

    1998-01-01

    This map database represents the integration of previously published and unpublished maps by several workers (see Sources of Data index map on Sheet 2 and the corresponding table below) and new geologic mapping and field checking by the authors with the previously published geologic map of San Mateo County (Brabb and Pampeyan, 1983) and Santa Cruz County (Brabb, 1989, Brabb and others, 1997), and various sources in a small part of Santa Clara County. These new data are released in digital form to provide an opportunity for regional planners, local, state, and federal agencies, teachers, consultants, and others interested in geologic data to have the new data long before a traditional paper map is published. The new data include a new depiction of Quaternary units in the San Francisco Bay plain emphasizing depositional environment, important new observations between the San Andreas and Pilarcitos faults, and a new interpretation of structural and stratigraphic relationships of rock packages (Assemblages).

  8. A new catalog of planetary maps

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Inge, J. L.

    1991-01-01

    A single, concise reference to all existing planetary maps, including lunar ones, is being prepared that will allow map users to identify and locate maps of their areas of interest. This will be the first such comprehensive listing of planetary maps. Although the USGS shows index maps on the collar of each map sheet, periodically publishes index maps of Mars, and provides informal listings of the USGS map database, no tabulation exists that identifies all planetary maps, including those published by DMA and other organizations. The catalog will consist of a booklet containing small-scale image maps with superimposed quadrangle boundaries and map data tabulations.

  9. A digital geologic map database for the state of Oklahoma

    USGS Publications Warehouse

    Heran, William D.; Green, Gregory N.; Stoeser, Douglas B.

    2003-01-01

    This dataset is a composite of part or all of the 12 1:250,000-scale quadrangles that cover Oklahoma. The result resembles a geologic map of the State of Oklahoma, but it is simply an Oklahoma-shaped extract clipped from the 1:250,000-scale geologic maps; it is not a new geologic map, and no new mapping took place. The geologic information from each quadrangle is available within the composite dataset.

  10. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes.

    PubMed

    Corcoran, Callan C; Grady, Cameron R; Pisitkun, Trairak; Parulekar, Jaya; Knepper, Mark A

    2017-03-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database ( https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database ( Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. Copyright © 2017 the American Physiological Society.

  11. Archetype relational mapping - a practical openEHR persistence solution.

    PubMed

    Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong

    2015-11-05

    One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130 %. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
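
    The sketch below illustrates the general archetype-to-relational idea, flattening an archetype's node paths into the columns of a generated table. The example archetype ID, paths, and SQL types are assumptions and do not reproduce the authors' ARM mapping rules.

    ```python
    # Minimal sketch: derive a relational table from an archetype's leaf node paths,
    # so archetype-based data can be stored and queried with plain SQL.
    archetype = {
        "id": "openEHR-EHR-OBSERVATION.blood_pressure.v1",   # assumed example archetype
        "paths": {
            "/data/events/systolic":  "INTEGER",
            "/data/events/diastolic": "INTEGER",
            "/data/events/time":      "TIMESTAMP",
        },
    }

    def to_create_table(arch):
        """Build a CREATE TABLE statement with one column per archetype leaf path."""
        table = arch["id"].split(".")[-2]          # e.g. "blood_pressure"
        cols = ", ".join(
            f'{path.strip("/").replace("/", "_")} {sqltype}'
            for path, sqltype in arch["paths"].items()
        )
        return f"CREATE TABLE {table} (ehr_id TEXT, {cols});"

    print(to_create_table(archetype))
    ```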

  12. DBMap: a TreeMap-based framework for data navigation and visualization of brain research registry

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Zhang, Hong; Tjandra, Donny; Wong, Stephen T. C.

    2003-05-01

    The purpose of this study is to investigate and apply a new, intuitive, and space-conscious visualization framework to facilitate efficient data presentation and exploration of large-scale data warehouses. We have implemented the DBMap framework for the UCSF Brain Research Registry. Such a utility helps medical specialists and clinical researchers explore and evaluate the many attributes organized in the brain research registry. The current UCSF Brain Research Registry consists of a federation of disease-oriented database modules, including Epilepsy, Brain Tumor, Intracerebral Hemorrhage, and CJD (Creutzfeldt-Jakob disease). These database modules organize large volumes of imaging and non-imaging data to support Web-based clinical research. While the data warehouse supports general information retrieval and analysis, it lacks an effective way to visualize and present the voluminous and complex data stored. This study investigates whether the TreeMap algorithm can be adapted to display and navigate a categorical biomedical data warehouse or registry. TreeMap is a space-constrained graphical representation of large hierarchical data sets, mapped to a matrix of rectangles whose size and color represent database fields of interest. It allows the display of a large amount of numerical and categorical information in the limited real estate of a computer screen with an intuitive user interface. The paper describes DBMap, the proposed data visualization framework for large biomedical databases. Built upon XML, Java, and JDBC technologies, the prototype system includes a set of software modules that reside in the application server tier and provide interfaces to the back-end database tier and the front-end Web tier of the brain registry.
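
    A simple slice-and-dice layout conveys the TreeMap principle of mapping weighted hierarchical data onto nested rectangles. The algorithm variant and the toy registry weights below are assumptions, not details taken from DBMap.

    ```python
    # Minimal slice-and-dice treemap: recursively split a rectangle in proportion to
    # each node's weight, alternating horizontal and vertical cuts by depth.
    def treemap(node, x, y, w, h, depth=0, out=None):
        """node = (label, weight, children). Returns [(label, (x, y, w, h)), ...]."""
        if out is None:
            out = []
        label, weight, children = node
        out.append((label, (x, y, w, h)))
        if children:
            total = sum(c[1] for c in children)
            offset = 0.0
            for child in children:
                frac = child[1] / total
                if depth % 2 == 0:   # split horizontally at even depths
                    treemap(child, x + offset * w, y, w * frac, h, depth + 1, out)
                else:                # split vertically at odd depths
                    treemap(child, x, y + offset * h, w, h * frac, depth + 1, out)
                offset += frac
        return out

    # toy registry: patient counts per database module (hypothetical numbers)
    registry = ("registry", 100, [("Epilepsy", 40, []), ("Brain Tumor", 35, []),
                                  ("ICH", 15, []), ("CJD", 10, [])])
    for label, rect in treemap(registry, 0, 0, 800, 600):
        print(f"{label:12s} {rect}")
    ```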

  13. Chapter 4 - The LANDFIRE Prototype Project reference database

    Treesearch

    John F. Caratti

    2006-01-01

    This chapter describes the data compilation process for the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project) reference database (LFRDB) and explains the reference data applications for LANDFIRE Prototype maps and models. The reference database formed the foundation for all LANDFIRE tasks. All products generated by the...

  14. Transport Statistics - Transport - UNECE

    Science.gov Websites

    Traffic Census 2015 data are now available. Two new datasets have been added to the transport statistics database: bus and coach statistics.

  15. [Effects of soil data and map scale on assessment of total phosphorus storage in upland soils].

    PubMed

    Li, Heng Rong; Zhang, Li Ming; Li, Xiao di; Yu, Dong Sheng; Shi, Xue Zheng; Xing, Shi He; Chen, Han Yue

    2016-06-01

    Accurate assessment of total phosphorus storage in farmland soils is of great significance to sustainable agriculture and non-point source pollution control. However, previous studies have not considered the estimation errors arising from mapping scales and from databases built on different sources of soil profile data. In this study, a total of 393×10⁴ hm² of upland in the 29 counties (or cities) of North Jiangsu was used as a case study. We analyzed how the four sources of soil profile data, namely "Soils of County", "Soils of Prefecture", "Soils of Province" and "Soils of China", and the six mapping scales, i.e. 1:50000, 1:250000, 1:500000, 1:1000000, 1:4000000 and 1:10000000, used in the 24 soil databases established from these four sources, affected assessment of soil total phosphorus. Compared with the most detailed 1:50000 soil database established with 983 upland soil profiles, the relative deviation of the estimates of soil total phosphorus density (STPD) and soil total phosphorus storage (STPS) from the other soil databases varied from 4.8% to 48.9% and from 1.6% to 48.4%, respectively. The STPD and STPS estimated from the 1:50000 database of "Soils of County" differed from most of the estimates based on the databases of each scale in "Soils of County" and "Soils of Prefecture", with significance levels of P<0.001 or P<0.05. Extremely significant differences (P<0.001) existed between the estimates based on the 1:50000 database of "Soils of County" and the estimates based on the databases of each scale in "Soils of Province" and "Soils of China". This study demonstrates the importance of appropriate soil data sources and appropriate mapping scales in estimating STPS.

  16. Pattern-based, multi-scale segmentation and regionalization of EOSD land cover

    NASA Astrophysics Data System (ADS)

    Niesterowicz, Jacek; Stepinski, Tomasz F.

    2017-10-01

    The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization of EOSD as new methods for analyzing EOSD on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; we delineated from 727 to 91,885 FLUs within the spatial extent of EOSD depending on the selected scale of a pattern. Pattern of EOSD's categories within each FLU is described by 1037 landscape metrics. A shapefile containing boundaries of all FLUs together with an attribute table listing landscape metrics make up an SQL-searchable spatial database providing detailed information on composition and pattern of land cover types in Canadian forest. Shapefile format and extensive attribute table pertaining to the entire legend of EOSD are designed to facilitate broad range of investigations in which assessment of composition and pattern of forest over large areas is needed. We calculated four such databases using different spatial scales of pattern. We illustrate the use of FLU database for producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad scale variability of forest at the spatial scale of the entire province. We also demonstrate how FLU database can be used to map variability of landscape metrics, and thus the character of landscape, over the entire Canada.
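
    The sketch below shows the kind of SQL query such an FLU attribute table supports; the table and column names (flu, pct_conifer, edge_density) are placeholders, not the actual EOSD/FLU schema.

    ```python
    import sqlite3

    # Build a tiny in-memory stand-in for an FLU attribute table and query it with SQL.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE flu (flu_id INTEGER, province TEXT, pct_conifer REAL, edge_density REAL)")
    con.executemany(
        "INSERT INTO flu VALUES (?, ?, ?, ?)",
        [(1, "Quebec", 72.5, 35.1), (2, "Quebec", 18.0, 60.4), (3, "Ontario", 55.3, 42.0)],
    )

    # find conifer-dominated, low-fragmentation forest land units
    rows = con.execute(
        "SELECT flu_id, province FROM flu WHERE pct_conifer > 50 AND edge_density < 45 "
        "ORDER BY pct_conifer DESC"
    ).fetchall()
    print(rows)
    ```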

  17. Preliminary maps of Quaternary deposits and liquefaction susceptibility, nine-county San Francisco Bay region, California: a digital database

    USGS Publications Warehouse

    Knudsen, Keith L.; Sowers, Janet M.; Witter, Robert C.; Wentworth, Carl M.; Helley, Edward J.; Nicholson, Robert S.; Wright, Heather M.; Brown, Katherine H.

    2000-01-01

    This report presents a preliminary map and database of Quaternary deposits and liquefaction susceptibility for the nine-county San Francisco Bay region, together with a digital compendium of ground effects associated with past earthquakes in the region. The report consists of (1) a spatial database of five data layers (Quaternary deposits, quadrangle index, and three ground effects layers) and two text layers (a labels and leaders layer for Quaternary deposits and for ground effects), (2) two small-scale colored maps (Quaternary deposits and liquefaction susceptibility), (3) a text describing the Quaternary map, liquefaction interpretation, and the ground effects compendium, and (4) the database description pamphlet. The nine counties surrounding San Francisco Bay straddle the San Andreas fault system, which exposes the region to serious earthquake hazard (Working Group on California Earthquake Probabilities, 1999). Much of the land adjacent to the Bay and the major rivers and streams is underlain by unconsolidated deposits that are particularly vulnerable to earthquake shaking and liquefaction of water-saturated granular sediment. This new map provides a modern and regionally consistent treatment of Quaternary surficial deposits that builds on the pioneering mapping of Helley and Lajoie (Helley and others, 1979) and such intervening work as Atwater (1982), Helley and others (1994), and Helley and Graymer (1997a and b). Like these earlier studies, the current mapping uses geomorphic expression, pedogenic soils, and inferred depositional environments to define and distinguish the map units. In contrast to the twelve map units of Helley and Lajoie, however, this new map uses a complex stratigraphy of some forty units, which permits a more realistic portrayal of the Quaternary depositional system. The two colored maps provide a regional summary of the new mapping at a scale of 1:275,000, a scale that is sufficient to show the general distribution and relationships of the map units but cannot distinguish the more detailed elements that are present in the database. The report is the product of years of cooperative work by the USGS National Earthquake Hazards Reduction Program (NEHRP) and National Cooperative Geologic Mapping Program, William Lettis & Associates, Inc. (WLA), and, more recently, the California Division of Mines and Geology. An earlier version was submitted to the Geological Survey by WLA as a final report for a NEHRP grant (Knudsen and others, 2000). The mapping has been carried out by WLA geologists under contract to the NEHRP Earthquake Program (Grants #14-08-0001-G2129, 1434-94-G-2499, 1434-HQ-97-GR-03121, and 99-HQ-GR-0095) and with other limited support from the County of Napa, and recently also by the California Division of Mines and Geology. The current map consists of this new mapping and revisions of previous USGS mapping.

  18. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
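
    The voxel-based flag map can be pictured as a set of occupied voxel keys used to drop redundant points as scans arrive. The sketch below is a simplified, CPU-only illustration with an assumed voxel size, not the authors' implementation.

    ```python
    import numpy as np

    def register_points(points, flag_map, voxel_size=0.1):
        """Add new points to the terrain store, skipping voxels that are already flagged."""
        kept = []
        for p in np.asarray(points, dtype=float):
            key = tuple(np.floor(p / voxel_size).astype(int))
            if key not in flag_map:        # voxel not yet occupied
                flag_map.add(key)
                kept.append(p)
        return np.array(kept)

    # a second, nearly identical scan contributes almost no new points
    flag_map = set()
    scan1 = np.random.default_rng(1).uniform(0, 1, (1000, 3))
    scan2 = scan1 + 0.001
    print(len(register_points(scan1, flag_map)), len(register_points(scan2, flag_map)))
    ```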

  19. ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments

    PubMed Central

    Chèneby, Jeanne; Gheorghe, Marius; Artufel, Marie

    2018-01-01

    With this latest release of ReMap (http://remap.cisreg.eu), we present a unique collection of regulatory regions in human, as a result of a large-scale integrative analysis of ChIP-seq experiments for hundreds of transcriptional regulators (TRs) such as transcription factors, transcriptional co-activators and chromatin regulators. In 2015, we introduced the ReMap database to capture the genome regulatory space by integrating public ChIP-seq datasets, covering 237 TRs across 13 million (M) peaks. In this release, we have extended this catalog to constitute a unique collection of regulatory regions. Specifically, we have collected, analyzed and retained after quality control a total of 2829 ChIP-seq datasets available from public sources, covering a total of 485 TRs with a catalog of 80M peaks. Additionally, the updated database includes new search features for TR names as well as aliases, including cell line names, and the ability to navigate the data directly within genome browsers via public track hubs. Finally, full access to this catalog is available online together with a TR binding enrichment analysis tool. ReMap 2018 provides a significant update of the ReMap database, providing an in-depth view of the complexity of the regulatory landscape in human. PMID:29126285

  20. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  1. Digital Mapping Techniques '07 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2008-01-01

    The Digital Mapping Techniques '07 (DMT'07) workshop was attended by 85 technical experts from 49 agencies, universities, and private companies, including representatives from 27 state geological surveys. This year's meeting, the tenth in the annual series, was hosted by the South Carolina Geological Survey, from May 20-23, 2007, on the University of South Carolina campus in Columbia, South Carolina. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; and 6) continued development of the National Geologic Map Database.

  2. Geographical Distribution of Woody Biomass Carbon in Tropical Africa: An Updated Database for 2000 (NDP-055.2007, NDP-055b))

    DOE Data Explorer

    Gibbs, Holly K. [Center for Sustainability and the Global Environment (SAGE), University of Wisconsin, Madison, WI (USA); Brown, Sandra [Winrock International, Arlington, VA (USA); Olsen, L. M. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA); Boden, Thomas A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA)

    2007-09-01

    Maps of biomass density are critical inputs for estimating carbon emissions from deforestation and degradation of tropical forests. Brown and Gaston (1996) pioneered methods to use GIS analysis to map forest biomass based on forest inventory data (NDP-055). This database is an update of NDP-055 (which represents conditions circa 1980) and accounts for land cover changes occurring up to the year 2000.

  3. CMD: a Cotton Microsatellite Database resource for Gossypium genomics

    PubMed Central

    Blenda, Anna; Scheffler, Jodi; Scheffler, Brian; Palmer, Michael; Lacape, Jean-Marc; Yu, John Z; Jesudurai, Christopher; Jung, Sook; Muthukumar, Sriram; Yellambalase, Preetham; Ficklin, Stephen; Staton, Margaret; Eshelman, Robert; Ulloa, Mauricio; Saha, Sukumar; Burr, Ben; Liu, Shaolin; Zhang, Tianzhen; Fang, Deqiu; Pepper, Alan; Kumpatla, Siva; Jacobs, John; Tomkins, Jeff; Cantrell, Roy; Main, Dorrie

    2006-01-01

    Background The Cotton Microsatellite Database (CMD) is a curated and integrated web-based relational database providing centralized access to publicly available cotton microsatellites, an invaluable resource for basic and applied research in cotton breeding. Description At present CMD contains publication, sequence, primer, mapping and homology data for nine major cotton microsatellite projects, collectively representing 5,484 microsatellites. In addition, CMD displays data for three of the microsatellite projects that have been screened against a panel of core germplasm. The standardized panel consists of 12 diverse genotypes including genetic standards, mapping parents, BAC donors, subgenome representatives, unique breeding lines, exotic introgression sources, and contemporary Upland cottons with significant acreage. A suite of online microsatellite data mining tools are accessible at CMD. These include an SSR server which identifies microsatellites, primers, open reading frames, and GC-content of uploaded sequences; BLAST and FASTA servers providing sequence similarity searches against the existing cotton SSR sequences and primers, a CAP3 server to assemble EST sequences into longer transcripts prior to mining for SSRs, and CMap, a viewer for comparing cotton SSR maps. Conclusion The collection of publicly available cotton SSR markers in a centralized, readily accessible and curated web-enabled database provides a more efficient utilization of microsatellite resources and will help accelerate basic and applied research in molecular breeding and genetic mapping in Gossypium spp. PMID:16737546

  4. Spatial Databases for CalVO Volcanoes: Current Status and Future Directions

    NASA Astrophysics Data System (ADS)

    Ramsey, D. W.

    2013-12-01

    The U.S. Geological Survey (USGS) California Volcano Observatory (CalVO) aims to advance scientific understanding of volcanic processes and to lessen harmful impacts of volcanic activity in California and Nevada. Within CalVO's area of responsibility, ten volcanoes or volcanic centers have been identified by a national volcanic threat assessment in support of developing the U.S. National Volcano Early Warning System (NVEWS) as posing moderate, high, or very high threats to surrounding communities based on their recent eruptive histories and their proximity to vulnerable people, property, and infrastructure. To better understand the extent of potential hazards at these and other volcanoes and volcanic centers, the USGS Volcano Science Center (VSC) is continually compiling spatial databases of volcano information, including: geologic mapping, hazards assessment maps, locations of geochemical and geochronological samples, and the distribution of volcanic vents. This digital mapping effort has been ongoing for over 15 years and early databases are being converted to match recent datasets compiled with new data models designed for use in: 1) generating hazard zones, 2) evaluating risk to population and infrastructure, 3) numerical hazard modeling, and 4) display and query on the CalVO as well as other VSC and USGS websites. In these capacities, spatial databases of CalVO volcanoes and their derivative map products provide an integrated and readily accessible framework of VSC hazards science to colleagues, emergency managers, and the general public.

  5. Rice Annotation Project Database (RAP-DB): an integrative and interactive database for rice genomics.

    PubMed

    Sakai, Hiroaki; Lee, Sung Shin; Tanaka, Tsuyoshi; Numa, Hisataka; Kim, Jungsok; Kawahara, Yoshihiro; Wakimoto, Hironobu; Yang, Ching-chia; Iwamoto, Masao; Abe, Takashi; Yamada, Yuko; Muto, Akira; Inokuchi, Hachiro; Ikemura, Toshimichi; Matsumoto, Takashi; Sasaki, Takuji; Itoh, Takeshi

    2013-02-01

    The Rice Annotation Project Database (RAP-DB, http://rapdb.dna.affrc.go.jp/) has been providing a comprehensive set of gene annotations for the genome sequence of rice, Oryza sativa (japonica group) cv. Nipponbare. Since the first release in 2005, RAP-DB has been updated several times along with the genome assembly updates. Here, we present our newest RAP-DB based on the latest genome assembly, Os-Nipponbare-Reference-IRGSP-1.0 (IRGSP-1.0), which was released in 2011. We detected 37,869 loci by mapping transcript and protein sequences of 150 monocot species. To provide plant researchers with highly reliable and up to date rice gene annotations, we have been incorporating literature-based manually curated data, and 1,626 loci currently incorporate literature-based annotation data, including commonly used gene names or gene symbols. Transcriptional activities are shown at the nucleotide level by mapping RNA-Seq reads derived from 27 samples. We also mapped the Illumina reads of a Japanese leading japonica cultivar, Koshihikari, and a Chinese indica cultivar, Guangluai-4, to the genome and show alignments together with the single nucleotide polymorphisms (SNPs) and gene functional annotations through a newly developed browser, Short-Read Assembly Browser (S-RAB). We have developed two satellite databases, Plant Gene Family Database (PGFD) and Integrative Database of Cereal Gene Phylogeny (IDCGP), which display gene family and homologous gene relationships among diverse plant species. RAP-DB and the satellite databases offer simple and user-friendly web interfaces, enabling plant and genome researchers to access the data easily and facilitating a broad range of plant research topics.

  6. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  7. Software Engineering Laboratory (SEL) database organization and user's guide

    NASA Technical Reports Server (NTRS)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  8. Geologic map of the Patagonia Mountains, Santa Cruz County, Arizona

    USGS Publications Warehouse

    Graybeal, Frederick T.; Moyer, Lorre A.; Vikre, Peter; Dunlap, Pamela; Wallis, John C.

    2015-01-01

    Several spatial databases provide data for the geologic map of the Patagonia Mountains in Arizona. The data can be viewed and queried in ArcGIS 10, a geographic information system; a geologic map is also available in PDF format. All products are available online only.

  9. Staff - April M. Woolery | Alaska Division of Geological & Geophysical

    Science.gov Websites


  10. The Protein Disease Database of human body fluids: II. Computer methods and data issues.

    PubMed

    Lemkin, P F; Orr, G A; Goldstein, M P; Creed, G J; Myrick, J E; Merril, C R

    1995-01-01

    The Protein Disease Database (PDD) is a relational database of proteins and diseases. With this database it is possible to screen for quantitative protein abnormalities associated with disease states. These quantitative relationships use data drawn from the peer-reviewed biomedical literature. Assays may also include those observed in high-resolution electrophoretic gels that offer the potential to quantitate many proteins in a single test as well as data gathered by enzymatic or immunologic assays. We are using the Internet World Wide Web (WWW) and the Web browser paradigm as an access method for wide distribution and querying of the Protein Disease Database. The WWW hypertext transfer protocol and its Common Gateway Interface make it possible to build powerful graphical user interfaces that can support easy-to-use data retrieval using query specification forms or images. The details of these interactions are totally transparent to the users of these forms. Using a client-server SQL relational database, user query access, initial data entry and database maintenance are all performed over the Internet with a Web browser. We discuss the underlying design issues, mapping mechanisms and assumptions that we used in constructing the system, data entry, access to the database server, security, and synthesis of derived two-dimensional gel image maps and hypertext documents resulting from SQL database searches.
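
    A query against such a relational store might look like the sketch below; the table layout and example rows are hypothetical and only illustrate the kind of screening the PDD supports.

    ```python
    import sqlite3

    # Hypothetical schema: one row per reported quantitative protein abnormality.
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE protein_level (
        protein TEXT, disease TEXT, fluid TEXT, direction TEXT, fold_change REAL, citation TEXT)""")
    con.executemany("INSERT INTO protein_level VALUES (?, ?, ?, ?, ?, ?)", [
        ("albumin", "nephrotic syndrome", "serum", "decreased", 0.5, "hypothetical ref 1"),
        ("C-reactive protein", "infection", "serum", "increased", 10.0, "hypothetical ref 2"),
    ])

    # screen for proteins reported as abnormal in serum for a given disease
    rows = con.execute(
        "SELECT protein, direction, fold_change FROM protein_level WHERE fluid = ? AND disease = ?",
        ("serum", "infection"),
    ).fetchall()
    print(rows)
    ```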

  11. Levelling and merging of two discrete national-scale geochemical databases: A case study showing the surficial expression of metalliferous black shales

    USGS Publications Warehouse

    Smith, Steven M.; Neilson, Ryan T.; Giles, Stuart A.

    2015-01-01

    Government-sponsored, national-scale, soil and sediment geochemical databases are used to estimate regional and local background concentrations for environmental issues, identify possible anthropogenic contamination, estimate mineral endowment, explore for new mineral deposits, evaluate nutrient levels for agriculture, and establish concentration relationships with human or animal health. Because of these different uses, it is difficult for any single database to accommodate all the needs of each client. Smith et al. (2013, p. 168) reviewed six national-scale soil and sediment geochemical databases for the United States (U.S.) and, for each, evaluated “its appropriateness as a national-scale geochemical database and its usefulness for national-scale geochemical mapping.” Each of the evaluated databases has strengths and weaknesses that were listed in that review.Two of these U.S. national-scale geochemical databases are similar in their sample media and collection protocols but have different strengths—primarily sampling density and analytical consistency. This project was implemented to determine whether those databases could be merged to produce a combined dataset that could be used for mineral resource assessments. The utility of the merged database was tested to see whether mapped distributions could identify metalliferous black shales at a national scale.
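
    The abstract does not specify the levelling procedure, so the sketch below shows one generic possibility, quantile matching of one database's element concentrations onto the other's scale. Treat it as an assumption, not the authors' method.

    ```python
    import numpy as np

    def level_to_reference(values_b, reference_a, probs=np.linspace(0.05, 0.95, 19)):
        """Map values_b onto reference_a's distribution via matched quantiles."""
        qa = np.quantile(reference_a, probs)
        qb = np.quantile(values_b, probs)
        return np.interp(values_b, qb, qa)     # piecewise-linear quantile mapping

    # synthetic example: same element measured with a lab bias in database B
    rng = np.random.default_rng(0)
    a = rng.lognormal(mean=3.0, sigma=0.5, size=5000)
    b = rng.lognormal(mean=3.3, sigma=0.6, size=3000)
    b_levelled = level_to_reference(b, a)
    print(round(np.median(a), 1), round(np.median(b), 1), round(np.median(b_levelled), 1))
    ```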

  12. Engineering With Nature Geographic Project Mapping Tool (EWN ProMap)

    DTIC Science & Technology

    2015-07-01

    EWN ProMap database provides numerous case studies for infrastructure projects such as breakwaters, river engineering dikes, and seawalls that have...the EWN Project Mapping Tool (EWN ProMap) is to assist users in their search for case study information that can be valuable for developing EWN ideas...Essential elements of EWN include: (1) using science and engineering to produce operational efficiencies supporting sustainable delivery of

  13. Using a spatial and tabular database to generate statistics from terrain and spectral data for soil surveys

    USGS Publications Warehouse

    Horvath , E.A.; Fosnight, E.A.; Klingebiel, A.A.; Moore, D.G.; Stone, J.E.; Reybold, W.U.; Petersen, G.W.

    1987-01-01

    A methodology has been developed to create a spatial database by referencing digital elevation, Landsat multispectral scanner data, and digitized soil premap delineations of a number of adjacent 7.5-min quadrangle areas to a 30-m Universal Transverse Mercator projection. Slope and aspect transformations are calculated from elevation data and grouped according to field office specifications. An unsupervised classification is performed on a brightness and greenness transformation of the spectral data. The resulting spectral, slope, and aspect maps of each of the 7.5-min quadrangle areas are then plotted and submitted to the field office to be incorporated into the soil premapping stages of a soil survey. A tabular database is created from spatial data by generating descriptive statistics for each data layer within each soil premap delineation. The tabular data base is then entered into a data base management system to be accessed by the field office personnel during the soil survey and to be used for subsequent resource management decisions. Large amounts of data are collected and archived during resource inventories for public land management. Often these data are stored as stacks of maps or folders in a file system in someone's office, with the maps in a variety of formats, scales, and with various standards of accuracy depending on their purpose. This system of information storage and retrieval is cumbersome at best when several categories of information are needed simultaneously for analysis or as input to resource management models. Computers now provide the resource scientist with the opportunity to design increasingly complex models that require even more categories of resource-related information, thus compounding the problem. Recently there has been much emphasis on the use of geographic information systems (GIS) as an alternative method for map data archives and as a resource management tool. Considerable effort has been devoted to the generation of tabular databases, such as the U.S. Department of Agriculture's SCS/S015 (Soil Survey Staff, 1983), to archive the large amounts of information that are collected in conjunction with mapping of natural resources in an easily retrievable manner. During the past 4 years the U.S. Geological Survey's EROS Data Center, in a cooperative effort with the Bureau of Land Management (BLM) and the Soil Conservation Service (SCS), developed a procedure that uses spatial and tabular databases to generate elevation, slope, aspect, and spectral map products that can be used during soil premapping. The procedure results in tabular data, residing in a database management system, that are indexed to the final soil delineations and help quantify soil map unit composition. The procedure was developed and tested on soil surveys on over 600 000 ha in Wyoming, Nevada, and Idaho. A transfer of technology from the EROS Data Center to the BLM will enable the Denver BLM Service Center to use this procedure in soil survey operations on BLM lands.
    Also underway is a cooperative effort between the EROS Data Center and SCS to define and evaluate maps that can be produced as derivatives of digital elevation data for 7.5-min quadrangle areas, such as those used during the premapping stage of the soil surveys mentioned above, the idea being to make such products routinely available. The procedure emphasizes the applications of digital elevation and spectral data to order-three soil surveys on rangelands, and will: (1) incorporate digital terrain and spectral data into a spatial database for soil surveys; (2) provide hardcopy products (that can be generated from digital elevation model and spectral data) that are useful during the soil premapping process; (3) incorporate soil premaps into a spatial database that can be accessed during the soil survey process along with terrain and spectral data; and (4) summarize useful quantitative information for soil mapping and for making interpretations for resource management.
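
    The tabular step, summarizing a terrain or spectral layer within each soil premap delineation, amounts to zonal statistics; the sketch below illustrates it on synthetic grids, with all array names and values assumed for illustration.

    ```python
    import numpy as np

    def zonal_stats(zone_grid, value_grid):
        """Return {zone_id: (count, mean, std, min, max)} for each delineation ID."""
        stats = {}
        for zone_id in np.unique(zone_grid):
            vals = value_grid[zone_grid == zone_id]
            stats[int(zone_id)] = (vals.size, vals.mean(), vals.std(), vals.min(), vals.max())
        return stats

    # three hypothetical delineations and a co-registered slope grid (percent)
    rng = np.random.default_rng(0)
    zones = rng.integers(1, 4, size=(100, 100))
    slope = rng.uniform(0, 30, size=(100, 100))
    for zone_id, (n, mean, std, lo, hi) in zonal_stats(zones, slope).items():
        print(zone_id, n, round(mean, 1), round(std, 1), round(lo, 1), round(hi, 1))
    ```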

  14. Publications - AR 2015 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Please see our publication sales page for more information. Quadrangle(s): Alaska General. Bibliographic Reference: DGGS Staff.

  15. Publications - GMC 280 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Please see our publication sales page for more information. Bibliographic Reference: Piggott, Neil, and

  16. Analysis of national and regional landslide inventories in Europe

    NASA Astrophysics Data System (ADS)

    Hervás, J.; Van Den Eeckhaut, M.

    2012-04-01

    A landslide inventory can be defined as a detailed register of the distribution and characteristics of past landslides in an area. Today most landslide inventories have the form of digital databases including landslide distribution maps and associated alphanumeric information for each landslide. While landslide inventories are of the utmost importance for land use planning and risk management through the generation of landslide zonation (susceptibility, hazard and risk) maps, landslide databases are thought to greatly differ from one country to another and often also within the same country. This hampers the generation of comparable, harmonised landslide zonation maps at national and continental scales, which is needed for policy and decision making at EU level as regarded for instance in the INSPIRE Directive and the Thematic Strategy for Soil Protection. In order to have a clear understanding of the landslide inventories available in Europe and their potential to produce landslide zonation maps as well as to draw recommendations to improve harmonisation and interoperability between landslide databases, we have surveyed 37 countries. In total, information has been collected and analysed for 24 national databases in 22 countries (Albania, Andorra, Austria, Bosnia and Herzegovina, Bulgaria, Czech Republic, Former Yugoslav Republic of Macedonia, France, Greece, Hungary, Iceland, Ireland, Italy, Norway, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, Switzerland and UK) and 22 regional databases in 10 countries. At the moment, over 633,000 landslides are recorded in national databases, representing on average less than 50% of the estimated landslides occurred in these countries. The sample of regional databases included over 103,000 landslides, with an estimated completeness substantially higher than that of national databases, as more attention can be paid for data collection over smaller regions. Yet, both for national and regional coverage, the data collection methods only occasionally included advanced technologies such as remote sensing. With regard to the inventory maps of most databases, the analysis illustrates the high variability of scales (between 1:10 000 and 1:1 M for national inventories, and from 1:10 000 to 1:25 000 for regional inventories), landslide classification systems and representation symbology. It also shows the difficulties to precisely locate landslides referred to in historical documents only. In addition, information on landslide magnitude, geometrical characteristics and age reported in national and regional databases greatly differs, even within the same database, as it strongly depends on the objectives of the database, the data collection methods used, the resources employed and the remaining landslide expression. In particular, landslide initiation and/or reactivation dates are generally estimated in less than 25% of records, thus making hazard and hence risk assessment difficult. In most databases, scarce information on landslide impact (damage and casualties) further hinders risk assessment at regional and national scales. Estimated landslide activity, which is very relevant to early warning and emergency management, is only included in half of the national databases and restricted to part of the landslides registered. Moreover, the availability of this information is not substantially higher in regional databases than in national ones. 
Most landslide databases further included information on geo-environmental characteristics at the landslide site, which is very important for modelling landslide zoning. Although a number of national and regional agencies provide free web-GIS visualisation services, the potential of existing landslide databases is often not fully exploited as, in many cases, access by the general public and external researchers is restricted. Additionally, the availability of information only in the national or local language is common to most national and regional databases, thus hampering consultation for most foreigners. Finally, some suggestions for a minimum set of attributes to be collected and made available by European countries for building up a continental landslide database in support of EU policies are presented. This study has been conducted in the framework of the EU-FP7 project SafeLand (Grant Agreement 22647).
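    The study closes with suggestions for a minimum attribute set for a continental database. As a rough, hedged illustration only, the sketch below defines a toy harmonised landslide record in SQLite; the attribute names, example values and query are assumptions for illustration and are not the attribute set recommended by the authors.

      # Minimal sketch of a harmonised landslide-inventory record, assuming an
      # illustrative attribute set (not the list recommended by the study).
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""
          CREATE TABLE landslide (
              id          INTEGER PRIMARY KEY,
              country     TEXT,     -- ISO country code
              lon         REAL,     -- WGS84 longitude of the initiation point
              lat         REAL,     -- WGS84 latitude
              ls_type     TEXT,     -- e.g. slide, flow, fall
              event_date  TEXT,     -- ISO date, NULL if unknown
              activity    TEXT,     -- e.g. active, dormant, relict
              fatalities  INTEGER,  -- impact: casualties, NULL if unknown
              damage_note TEXT      -- impact: free-text damage description
          )""")
      conn.execute(
          "INSERT INTO landslide VALUES (1, 'IT', 13.39, 42.35, 'slide', "
          "'2009-04-06', 'dormant', 0, 'road interrupted')")
      # Example query: share of records with a known initiation/reactivation date.
      known = conn.execute(
          "SELECT AVG(event_date IS NOT NULL) FROM landslide").fetchone()[0]
      print(f"records with a known date: {known:.0%}")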

  17. Extending GIS Technology to Study Karst Features of Southeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.

    2001-12-01

    This paper summarizes ongoing research on karst feature distribution in southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical and hydrogeologic modules. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce satisfying 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All nearest-neighbor analyses to date indicate that sinkholes in southeastern Minnesota are not evenly distributed in this area (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis and nearest-neighbor analysis. A series of karst features for Winona County including sinkholes, springs, seeps, stream sinks and outcrop has been mapped and entered into the Karst Feature Database of Southeastern Minnesota. The Karst Feature Database of Winona County is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to the 1990s of the Spring Valley Cavern area in Fillmore County were scanned and geo-referenced into our GIS system. This technology has proved very useful for identifying sinkholes and studying the rate of sinkhole development.
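    As a hedged illustration of the nearest-neighbor testing mentioned above, the short sketch below computes a Clark-Evans nearest-neighbour ratio on synthetic sinkhole coordinates; the study area, the point pattern and the choice of the Clark-Evans statistic are assumptions for illustration, not the project's data or code.

      # Clark-Evans nearest-neighbour test for clustering on synthetic
      # "sinkhole" coordinates (illustrative only).
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      # Three clusters of points inside a 10 km x 10 km study area.
      centers = rng.uniform(0, 10_000, size=(3, 2))
      points = np.vstack([c + rng.normal(scale=200, size=(50, 2)) for c in centers])

      area = 10_000 * 10_000                    # study-area size in m^2
      tree = cKDTree(points)
      d, _ = tree.query(points, k=2)            # k=2: nearest neighbour other than self
      mean_observed = d[:, 1].mean()
      mean_expected = 0.5 / np.sqrt(len(points) / area)   # expectation under randomness
      R = mean_observed / mean_expected
      print(f"Clark-Evans R = {R:.2f}  (R < 1 suggests clustering)")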

  18. Mapping small molecule binding data to structural domains

    PubMed Central

    2012-01-01

    Background Large-scale bioactivity/SAR Open Data has recently become available, and this has allowed new analyses and approaches to be developed to help address the productivity and translational gaps of current drug discovery. One of the current limitations of these data is the relative sparsity of reported interactions per protein target, and complexities in establishing clear relationships between bioactivity and targets using bioinformatics tools. We detail in this paper the indexing of targets by the structural domains that bind (or are likely to bind) the ligand within a full-length protein. Specifically, we present a simple heuristic to map small molecule binding to Pfam domains. This profiling can be applied to all proteins within a genome to give some indications of the potential pharmacological modulation and regulation of all proteins. Results In this implementation of our heuristic, ligand binding to protein targets from the ChEMBL database was mapped to structural domains as defined by profiles contained within the Pfam-A database. Our mapping suggests that the majority of assay targets within the current version of the ChEMBL database bind ligands through a small number of highly prevalent domains, and conversely the majority of Pfam domains sampled by our data play no currently established role in ligand binding. Validation studies, carried out firstly against Uniprot entries with expert binding-site annotation and secondly against entries in the wwPDB repository of crystallographic protein structures, demonstrate that our simple heuristic maps ligand binding to the correct domain in about 90 percent of all assessed cases. Using the mappings obtained with our heuristic, we have assembled ligand sets associated with each Pfam domain. Conclusions Small molecule binding has been mapped to Pfam-A domains of protein targets in the ChEMBL bioactivity database. The result of this mapping is an enriched annotation of small molecule bioactivity data and a grouping of activity classes following the Pfam-A specifications of protein domains. This is valuable for data-focused approaches in drug discovery, for example when extrapolating potential targets of a small molecule with known activity against one or few targets, or in the assessment of a potential target for drug discovery or screening studies. PMID:23282026
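    The heuristic itself is not spelled out in the abstract, so the sketch below is only a guess at its flavour: assign a ligand to whichever of the target's Pfam domains is most prevalent among known binding domains. The domain names and the prevalence table are invented for illustration and do not come from ChEMBL or Pfam-A.

      # Prevalence-based toy heuristic for picking the ligand-binding domain
      # of a multi-domain target (illustrative assumption, not the paper's rules).
      from collections import Counter

      # How often each domain is assumed to mediate small-molecule binding.
      binding_prevalence = Counter({"Pkinase": 5000, "SH2": 40, "SH3": 5})

      def map_ligand_to_domain(target_domains):
          """Pick the target domain with the highest binding prevalence."""
          scored = [(binding_prevalence.get(d, 0), d) for d in target_domains]
          score, domain = max(scored)
          return domain if score > 0 else None   # None: no established binding domain

      print(map_ligand_to_domain(["SH3", "SH2", "Pkinase"]))  # -> Pkinase
      print(map_ligand_to_domain(["WD40"]))                   # -> None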

  19. Where can cone penetrometer technology be applied? Development of a map of Europe regarding the soil penetrability.

    PubMed

    Fleischer, Matthias; van Ree, Derk; Leven, Carsten

    2014-01-01

    Over the past decades, significant efforts have been invested in the development of push-in technology for site characterization and monitoring for geotechnical and environmental purposes, especially in the Netherlands and Germany. These technologies provide the opportunity for faster, cheaper collection of more reliable subsurface data. However, to make the most of the technology, from both a development and an implementation point of view, it is necessary to have an overview of the areas suitable for its application. Such an overview is missing and cannot simply be read from existing maps and material. This paper describes the development of a map showing the feasibility or applicability of Direct Push/Cone Penetrometer Technology (DPT/CPT) in Europe, which depends on the subsurface and its widely varying properties throughout Europe. Subsurface penetrability depends on a range of factors that have not been mapped directly and cannot easily be inferred from existing databases; the maximum reachable depth in particular would be of interest. Among others, penetrability mainly depends on the geology, the soil mechanical properties, the type of equipment used, as well as soil-forming processes. This study starts by looking at the different geological databases available at the European scale. Next, a scheme has been developed linking the mapped geological properties to geotechnical properties in order to determine basic penetrability categories. From this, a map of soil penetrability is developed and presented. Validating the output by performing field tests was beyond the scope of this study, but for the Netherlands the map has been compared against a database containing actual cone penetrometer depth data to look for possible contradictory results that would negate the approach. The map for the largest part of Europe clearly shows that there is a much wider potential for the application of Direct Push Technology than is currently seen. The study also shows that there is a lack of large-scale databases containing depth-resolved data as well as soil mechanical and physical properties that can be used for engineering purposes in relation to the subsurface.
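    A minimal sketch of the kind of lookup that links mapped lithology to basic penetrability categories is given below; the lithology classes, categories and indicative depths are invented for illustration and are not the scheme derived in the study.

      # Toy lithology-to-penetrability lookup (illustrative values only).
      PENETRABILITY = {
          "peat":      ("good",           ">30 m typically reachable"),
          "clay":      ("good",           "20-30 m typically reachable"),
          "sand":      ("moderate",       "10-20 m, depends on density"),
          "gravel":    ("poor",           "refusal likely within a few metres"),
          "hard rock": ("not penetrable", "0 m"),
      }

      def classify(lithology):
          return PENETRABILITY.get(lithology.lower(), ("unknown", "no data"))

      for unit in ["Clay", "Gravel", "Limestone"]:
          category, depth = classify(unit)
          print(f"{unit:10s} -> {category:15s} ({depth})")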

  20. Reconstruction of biological pathways and metabolic networks from in silico labeled metabolites.

    PubMed

    Hadadi, Noushin; Hafner, Jasmin; Soh, Keng Cher; Hatzimanikatis, Vassily

    2017-01-01

    Reaction atom mappings track the positional changes of all of the atoms between the substrates and the products as they undergo the biochemical transformation. However, information on atom transitions in the context of metabolic pathways is not widely available in the literature. The understanding of metabolic pathways at the atomic level is of great importance, as it can deconvolute the overlapping catabolic/anabolic pathways resulting in the observed metabolic phenotype. The automated identification of atom transitions within a metabolic network is a very challenging task, since the complexity of metabolic networks increases dramatically when we move from metabolite-level studies to atom-level studies. Despite extensive study through various approaches, the field of atom mapping of metabolic networks lacks an automated approach that (i) accounts for reaction-mechanism information in atom mapping and (ii) is extendable from individual atom-mapped reactions to atom-mapped reaction networks. Here, we introduce a computational framework, iAM.NICE (in silico Atom Mapped Network Integrated Computational Explorer), for the systematic atom-level reconstruction of metabolic networks from in silico labelled substrates. iAM.NICE is, to our knowledge, the first automated atom-mapping algorithm that is based on the underlying enzymatic biotransformation mechanisms; its application goes beyond individual reactions to the reconstruction of atom-mapped metabolic networks. We illustrate the applicability of our method through the reconstruction of atom-mapped reactions of the KEGG database, and we provide an example of an atom-level representation of the core metabolic network of E. coli. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
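    The value of atom-mapped reactions can be illustrated with a toy example: once each reaction carries a substrate-atom to product-atom map, an in silico labelled atom can be traced through a pathway. The metabolites, atom indices and mappings below are invented and do not come from iAM.NICE or KEGG.

      # Tracing an in silico labelled atom through toy atom-mapped reactions.
      # Each reaction maps (substrate, atom_index) -> (product, atom_index).
      reactions = {
          "R1": {("glucose", 1): ("g6p", 1), ("glucose", 6): ("g6p", 6)},
          "R2": {("g6p", 1): ("f6p", 1), ("g6p", 6): ("f6p", 6)},
      }

      def trace(label, pathway):
          """Follow one labelled atom through an ordered list of reactions."""
          position, route = label, [label]
          for rxn in pathway:
              position = reactions[rxn].get(position)
              if position is None:        # the label leaves the mapped network
                  break
              route.append(position)
          return route

      # Label carbon 1 of glucose and follow it through R1 then R2.
      print(trace(("glucose", 1), ["R1", "R2"]))
      # [('glucose', 1), ('g6p', 1), ('f6p', 1)]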

  1. Morphotectonic architecture of the Transantarctic Mountains rift flank between the Royal Society Range and the Churchill Mountains based on geomorphic analysis

    USGS Publications Warehouse

    Demyanick, Elizabeth; Wilson, Terry J.

    2007-01-01

    Extensional forces within the Antarctic Plate have produced the Transantarctic Mountains rift-flank uplift along the West Antarctic rift margin. Large-scale linear morphologic features within the mountains are controlled by bedrock structure and can be recognized and mapped from satellite imagery and digital elevation models (DEMs). This study employed the Antarctic Digital Database DEM to obtain slope steepness and aspect maps of the Transantarctic Mountains (TAM) between the Royal Society Range and the Churchill Mountains, allowing definition of the position and orientation of the morphological axis of the rift flank. The TAM axis, interpreted as a fault-controlled escarpment formed by coast-parallel retreat, provides a marker for the orientation of the faulted boundary between the TAM and the rift system. Changes in position and orientation of the TAM axis suggest the rift flank is segmented into tectonic blocks bounded by relay ramps and transverse accommodation zones. The transverse boundaries coincide with major outlet glaciers, supporting interpretation of rift structures between them. The pronounced morphological change across Byrd Glacier points to control by structures inherited from the Ross orogen.
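    A minimal sketch of deriving slope steepness and aspect grids from a DEM with finite differences is shown below; the synthetic elevation surface, cell size and the assumption that row index increases northward are illustrative, and the actual processing of the Antarctic Digital Database DEM may differ.

      # Slope steepness and aspect from a synthetic DEM via finite differences.
      import numpy as np

      cell = 200.0                              # grid spacing in metres (assumed)
      y, x = np.mgrid[0:100, 0:100]
      dem = 2000.0 - 8.0 * x + 0.5 * y          # synthetic east-dipping surface

      # np.gradient returns d/d(row), d/d(col); rows are assumed to increase northward.
      dz_dy, dz_dx = np.gradient(dem, cell)
      slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))          # steepness
      aspect = (np.degrees(np.arctan2(-dz_dx, -dz_dy)) + 360) % 360  # downslope azimuth

      print(f"mean slope:  {slope.mean():.1f} deg")
      print(f"mean aspect: {aspect.mean():.1f} deg (0 = north, 90 = east)")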

  2. Some thoughts on cartographic and geographic information systems for the 1980's

    USGS Publications Warehouse

    Starr, L.E.; Anderson, Kirk E.

    1981-01-01

    The U.S. Geological Survey is adopting computer techniques to meet the expanding need for cartographic base category data. Digital methods are becoming increasingly important in the mapmaking process, and the demand is growing for physical, social, and economic data. Recognizing these emerging needs, the National Mapping Division began, several years ago, an active program to develop advanced digital methods to support cartographic and geographic data processing. An integrated digital cartographic database would meet the anticipated needs. Such a database would contain data from various sources, and could provide a variety of standard and customized map and digital data file products. This cartographic database soon will be technologically feasible. The present trends in the economics of cartographic and geographic data handling and the growing needs for integrated physical, social, and economic data make such a database virtually mandatory.

  3. BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources.

    PubMed

    Lim, Jeongheui; Kim, Sang-Yoon; Kim, Sungmin; Eo, Hae-Seok; Kim, Chang-Bae; Paek, Woon Kee; Kim, Won; Bhak, Jong

    2009-12-03

    DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly when DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. We have built a general DNA barcode data processing system, BioBarcode, using open-source software, as a general-purpose database and server platform. It uses MySQL RDBMS 5.0, BLAST2, and the Apache httpd server. An exemplary BioBarcode database holds around 11,300 specimen entries (including GenBank data) and registers biological species in order to map their genetic relationships. The BioBarcode database contains a chromatogram viewer, which improves performance in DNA sequence analyses. Asia has a very high degree of biodiversity, and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community, such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed as a basis for an Asian biodiversity system: http://www.asianbarcode.org.

  4. Effective Use of Java Data Objects in Developing Database Applications; Advantages and Disadvantages

    DTIC Science & Technology

    2004-06-01

    Effective Use of Java Data Objects in Developing Database Applications: Advantages and Disadvantages. Paschalis Zilidis, June 2004. Thesis Advisor: Thomas… …database for the backend datastore. The major disadvantage of this approach is the well-known "impedance mismatch", in which some form of mapping is…

  5. GIS Database and Google Map of the Population at Risk of Cholangiocarcinoma in Mueang Yang District, Nakhon Ratchasima Province of Thailand.

    PubMed

    Kaewpitoon, Soraya J; Rujirakul, Ratana; Joosiri, Apinya; Jantakate, Sirinun; Sangkudloa, Amnat; Kaewthani, Sarochinee; Chimplee, Kanokporn; Khemplila, Kritsakorn; Kaewpitoon, Natthawut

    2016-01-01

    Cholangiocarcinoma (CCA) is a serious problem in Thailand, particularly in the northeastern and northern regions. A database of the population at risk is required for monitoring, surveillance, home health care, and home visits. Therefore, this study aimed to develop a geographic information system (GIS) database and Google map of the population at risk of CCA in Mueang Yang district, Nakhon Ratchasima province, northeastern Thailand, during June to October 2015. Populations at risk were screened using the Korat CCA verbal screening test (KCVST). Software included Microsoft Excel, ArcGIS, and Google Maps. The secondary data, including village points, sub-district boundaries, district boundaries, and the hospital location in Mueang Yang district, were used to create the spatial database. The populations at risk for CCA and opisthorchiasis were used to create an attribute database. Data were transferred to WGS84 UTM ZONE 48. After the conversion, all of the data were imported into Google Earth using the online web page www.earthpoint.us. Some 222 of the 4,800 people at risk for CCA constituted a high-risk group. The geo-visual display is available at www.google.com/maps/d/u/0/edit?mid=zPxtcHv_iDLo.kvPpxl5mAs90 and hl=th. The geo-visual display comprises 5 layers: layer 1, village locations and the number of the population at risk for CCA; layer 2, sub-district health promotion hospitals in Mueang Yang district and the number of opisthorchiasis cases; layer 3, sub-districts and the number of the population at risk for CCA; layer 4, the district hospital, the number of the population at risk for CCA, and the number of opisthorchiasis cases; and layer 5, the district, the number of the population at risk for CCA, and the number of opisthorchiasis cases. This GIS database and Google map production process is suitable for further monitoring, surveillance, and home health care for CCA sufferers.

  6. ReMatch: a web-based tool to construct, store and share stoichiometric metabolic models with carbon maps for metabolic flux analysis.

    PubMed

    Pitkänen, Esa; Akerlund, Arto; Rantanen, Ari; Jouhten, Paula; Ukkonen, Esko

    2008-08-25

    ReMatch is a web-based, user-friendly tool that constructs stoichiometric network models for metabolic flux analysis, integrating user-developed models into a database collected from several comprehensive metabolic data resources, including KEGG, MetaCyc and ChEBI. In particular, ReMatch augments the metabolic reactions of the model with carbon mappings to facilitate (13)C metabolic flux analysis. The construction of a network model consisting of biochemical reactions is the first step in most metabolic modelling tasks. This model construction can be a tedious task, as the required information is usually scattered across many separate databases whose interoperability is suboptimal, due to the heterogeneous naming conventions of metabolites in different databases. Another, particularly severe data integration problem is faced in (13)C metabolic flux analysis, where the mappings of carbon atoms from substrates into products in the model are required. ReMatch has been developed to solve the above data integration problems. First, ReMatch matches the imported user-developed model against the internal ReMatch database while considering a comprehensive metabolite name thesaurus. This, together with wild card support, allows the user to specify the model quickly without having to look the names up manually. Second, ReMatch is able to augment reactions of the model with carbon mappings, obtained either from the internal database or given by the user with an easy-to-use tool. The constructed models can be exported into 13C-FLUX and SBML file formats. Further, a stoichiometric matrix and visualizations of the network model can be generated. The constructed models of metabolic networks can optionally be made available to the other users of ReMatch. Thus, ReMatch provides a common repository for metabolic network models with carbon mappings for the needs of the metabolic flux analysis community. ReMatch is freely available for academic use at http://www.cs.helsinki.fi/group/sysfys/software/rematch/.
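    As a hedged illustration of one of the exports mentioned (the stoichiometric matrix), the sketch below builds such a matrix from a toy reaction list; the reactions and identifiers are invented and the code does not reflect ReMatch's internal representation or file formats.

      # Building a stoichiometric matrix S (metabolites x reactions) from a
      # toy reaction list (negative coefficient = consumed, positive = produced).
      import numpy as np

      reactions = {
          "R_hex": {"glc": -1, "atp": -1, "g6p": 1, "adp": 1},
          "R_pgi": {"g6p": -1, "f6p": 1},
      }
      metabolites = sorted({m for stoich in reactions.values() for m in stoich})

      S = np.zeros((len(metabolites), len(reactions)))
      for j, stoich in enumerate(reactions.values()):
          for met, coeff in stoich.items():
              S[metabolites.index(met), j] = coeff

      print(metabolites)
      print(S)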

  7. An offline-online Web-GIS Android application for fast data acquisition of landslide hazard and risk

    NASA Astrophysics Data System (ADS)

    Olyazadeh, Roya; Sudmeier-Rieux, Karen; Jaboyedoff, Michel; Derron, Marc-Henri; Devkota, Sanjaya

    2017-04-01

    Regional landslide assessments and mapping have been effectively pursued by research institutions, national and local governments, non-governmental organizations (NGOs), and different stakeholders for some time, and a wide range of methodologies and technologies have consequently been proposed. Land-use maps and hazard event inventories are mostly created from remote-sensing data, which is subject to difficulties, such as accessibility and terrain, that need to be overcome. Likewise, landslide data acquisition in the field can improve the accuracy of databases and analyses. Open-source Web and mobile GIS tools can be used for improved ground-truthing of critical areas to improve the analysis of hazard patterns and triggering factors. This paper reviews the implementation and selected results of a secure mobile-map application called ROOMA (Rapid Offline-Online Mapping Application) for the rapid collection of landslide hazard and risk data. This prototype assists the quick creation of landslide inventory maps (LIMs) by collecting information on the type, feature, volume, date, and patterns of landslides using open-source Web-GIS technologies such as Leaflet maps, Cordova, GeoServer, PostgreSQL as the underlying DBMS (database management system), and PostGIS as its plug-in for spatial database management. The application comprises Leaflet maps coupled with satellite images as a base layer, drawing tools, geolocation (using GPS and the Internet), photo mapping, and event clustering. All features and information are recorded into a GeoJSON text file in the offline version (Android) and subsequently uploaded to the online mode (accessible from any browser) once an Internet connection is available. Finally, the events can be accessed and edited after approval by an administrator and then be visualized by the general public.
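    As an illustration of the offline data format described, the sketch below assembles one field observation as a GeoJSON feature; the property names and values are assumptions for illustration and do not reproduce ROOMA's actual schema.

      # One field observation written as a GeoJSON feature (illustrative schema).
      import json
      from datetime import datetime, timezone

      feature = {
          "type": "Feature",
          "geometry": {"type": "Point", "coordinates": [85.3240, 27.7172]},  # lon, lat
          "properties": {
              "landslide_type": "debris flow",
              "volume_m3": 1200,
              "observed": datetime(2016, 7, 14, tzinfo=timezone.utc).isoformat(),
              "photo": "IMG_0042.jpg",
              "recorded_offline": True,
          },
      }
      collection = {"type": "FeatureCollection", "features": [feature]}
      print(json.dumps(collection, indent=2))   # the file uploaded once online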

  8. Ensemble of ground subsidence hazard maps using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Park, Inhye; Lee, Jiyeong; Saro, Lee

    2014-06-01

    Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR) and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.
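    The abstract does not name the fuzzy operator used, so the sketch below shows one commonly used choice, the fuzzy gamma operator, applied to three synthetic hazard-index layers scaled to [0, 1]; treating this as the study's operator would be an assumption.

      # Fuzzy gamma overlay of three synthetic hazard-index maps.
      import numpy as np

      rng = np.random.default_rng(1)
      fr, lr, ann = (rng.random((50, 50)) for _ in range(3))   # FR, LR, ANN indexes

      def fuzzy_gamma(layers, gamma=0.9):
          layers = np.stack(layers)
          fuzzy_sum = 1.0 - np.prod(1.0 - layers, axis=0)      # fuzzy algebraic sum
          fuzzy_product = np.prod(layers, axis=0)              # fuzzy algebraic product
          return fuzzy_sum**gamma * fuzzy_product**(1.0 - gamma)

      gsh_ensemble = fuzzy_gamma([fr, lr, ann])
      print(gsh_ensemble.shape, round(float(gsh_ensemble.mean()), 3))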

  9. Next Generation Clustered Heat Maps | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Next-Generation (Clustered) Heat Maps are interactive heat maps that enable the user to zoom and pan across the heatmap, alter its color scheme, generate production quality PDFs, and link out from rows, columns, and individual heatmap entries to related statistics, databases and other information.

  10. Geologic map of the Sauvie Island quadrangle, Multnomah and Columbia Counties, Oregon, and Clark County, Washington

    USGS Publications Warehouse

    Evarts, Russell C.; O'Connor, Jim; Cannon, Charles M.

    2016-03-02

    This map contributes to a U.S. Geological Survey program to improve the geologic database for the Portland region of the Pacific Northwest urban corridor. The map and ancillary data will support assessments of seismic risk, ground-failure hazards, and resource availability.

  11. Publications - AR 2008 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  12. Publications - AR 2007 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  13. Publications - AR 2001 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  14. Publications - GMC 379 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  15. Publications - AR 2002 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  16. Map and digital database of sedimentary basins and indications of petroleum in the Central Alaska Province

    USGS Publications Warehouse

    Troutman, Sandra M.; Stanley, Richard G.

    2003-01-01

    This database and accompanying text depict historical and modern reported occurrences of petroleum both in wells and at the surface within the boundaries of the Central Alaska Province. These data were compiled from previously published and unpublished sources and were prepared for use in the 2002 U.S. Geological Survey petroleum assessment of Central Alaska, Yukon Flats region. Indications of petroleum are described as oil or gas shows in wells, oil or gas seeps, or outcrops of oil shale or oil-bearing rock and include confirmed and unconfirmed reports. The scale of the source map limits the spatial resolution (scale) of the database to 1:2,500,000 or smaller.

  17. DIMA 3.0: Domain Interaction Map.

    PubMed

    Luo, Qibin; Pagel, Philipp; Vilne, Baiba; Frishman, Dmitrij

    2011-01-01

    Domain Interaction MAp (DIMA, available at http://webclu.bio.wzw.tum.de/dima) is a database of predicted and known interactions between protein domains. It integrates 5807 structurally known interactions imported from the iPfam and 3did databases and 46,900 domain interactions predicted by four computational methods: domain phylogenetic profiling, the domain pair exclusion algorithm, correlated mutations, and domain interaction prediction in a discriminative way. Additionally, predictions are filtered to exclude those domain pairs that are reported as non-interacting by the Negatome database. The DIMA Web site allows users to calculate domain interaction networks either for a domain of interest or for entire organisms, and to explore them interactively using the Flash-based Cytoscape Web software.
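    One of the four prediction methods listed, domain phylogenetic profiling, can be sketched in a few lines: domain pairs are scored by how similar their presence/absence profiles are across genomes. The profiles and the simple agreement score below are invented for illustration; DIMA's actual scoring is more elaborate.

      # Toy domain phylogenetic profiling: score a domain pair by the fraction
      # of genomes in which their presence/absence agrees (illustrative only).
      profiles = {                    # domain -> presence (1/0) in five genomes
          "PF00001": [1, 1, 0, 1, 0],
          "PF00002": [1, 1, 0, 1, 0],
          "PF00069": [0, 1, 1, 0, 1],
      }

      def profile_similarity(a, b):
          matches = sum(x == y for x, y in zip(profiles[a], profiles[b]))
          return matches / len(profiles[a])

      for a, b in [("PF00001", "PF00002"), ("PF00001", "PF00069")]:
          print(a, b, profile_similarity(a, b))   # identical profiles score 1.0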

  18. Preliminary integrated geologic map databases for the United States : Central states : Montana, Wyoming, Colorado, New Mexico, Kansas, Oklahoma, Texas, Missouri, Arkansas, and Louisiana

    USGS Publications Warehouse

    Stoeser, Douglas B.; Green, Gregory N.; Morath, Laurie C.; Heran, William D.; Wilson, Anna B.; Moore, David W.; Van Gosen, Bradley S.

    2005-01-01

    The growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national digital geologic maps attributed with age and lithology information. Such maps can be conveniently used to generate derivative maps for purposes including mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. This Open-File Report is a preliminary version of part of a series of integrated state geologic map databases that cover the entire United States. The only national-scale digital geologic maps that portray most or all of the conterminous United States are the digital version of the King and Beikman (1974a, b) map at a scale of 1:2,500,000, as digitized by Schruben and others (1994), and the digital version of the Geologic Map of North America (Reed and others, 2005a, b), compiled at a scale of 1:5,000,000, which is currently being prepared by the U.S. Geological Survey. The present series of maps is intended to provide the next step in increased detail. State geologic maps that range in scale from 1:100,000 to 1:1,000,000 are available for most of the country, and digital versions of these state maps are the basis of this product. In a few cases, new digital compilations were prepared (e.g. OH, SC, SD) or existing paper maps were digitized (e.g. KY, TX). For Alaska and Hawaii, new regional maps are being compiled and ultimately new state maps will be produced. The digital geologic maps are presented in standardized formats as ARC/INFO (.e00) export files and as ArcView shape (.shp) files. Accompanying these spatial databases are a set of five supplemental data tables that relate the map units to detailed lithologic and age information. The maps for the CONUS have been fitted to a common set of state boundaries based on the 1:100,000 topographic map series of the United States Geological Survey (USGS). When the individual state maps are merged, the combined attribute tables can be used directly with the merged maps to make derivative maps. No attempt has been made to reconcile differences in mapped geology across state lines. This is the first version of this product, and it will subsequently be updated to include four additional states (North Dakota, South Dakota, Nebraska, and Iowa).

  19. GrainGenes: Changing Times, Changing Databases, Digital Evolution.

    USDA-ARS?s Scientific Manuscript database

    The GrainGenes database is one of few agricultural databases that had an early start on the Internet and that has changed with the times. Initial goals were to collect a wide range of data relating to the developing maps and attributes of small grains crops, and to make them easily accessible. The ...

  20. Modernization and multiscale databases at the U.S. geological survey

    USGS Publications Warehouse

    Morrison, J.L.

    1992-01-01

    The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.

  1. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.

    2001-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s Web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, GeneMap’99, Human–Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, Cancer Genome Anatomy Project (CGAP), SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:11125038

  2. Database resources of the National Center for Biotechnology

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Federhen, Scott; Lash, Alex E.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Tatusova, Tatiana A.; Wagner, Lukas

    2003-01-01

    In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, PubMed, PubMed Central (PMC), LocusLink, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR (e-PCR), Open Reading Frame (ORF) Finder, Reference Sequence (RefSeq), UniGene, HomoloGene, ProtEST, Database of Single Nucleotide Polymorphisms (dbSNP), Human/Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes and related tools, the Map Viewer, Model Maker (MM), Evidence Viewer (EV), Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), and the Conserved Domain Architecture Retrieval Tool (CDART). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:12519941

  3. Publications - GMC 322 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  4. Drainage identification analysis and mapping, phase 2.

    DOT National Transportation Integrated Search

    2017-01-01

    Drainage Identification, Analysis and Mapping System (DIAMS) is a computerized database that captures and stores relevant information associated with all aboveground and underground hydraulic structures belonging to the New Jersey Department of T...

  5. Karst mapping in the United States: Past, present and future

    USGS Publications Warehouse

    Weary, David J.; Doctor, Daniel H.

    2015-01-01

    The earliest known comprehensive karst map of the entire USA was published by Stringfield and LeGrand (1969), based on compilations of William E. Davies of the U.S. Geological Survey (USGS). Various versions of essentially the same map have been published since. The USGS recently published new digital maps and databases depicting the extent of known karst, potential karst, and pseudokarst areas of the United States of America including Puerto Rico and the U.S. Virgin Islands (Weary and Doctor, 2014). These maps are based primarily on the extent of potentially karstic soluble rock types, and rocks with physical properties conducive to the formation of pseudokarst features. These data were compiled and refined from multiple sources at various spatial resolutions, mostly as digital data supplied by state geological surveys. The database includes polygons delineating areas with potential for karst and that are tagged with attributes intended to facilitate classification of karst regions. Approximately 18% of the surface of the fifty United States is underlain by significantly soluble bedrock. In the eastern United States the extent of outcrop of soluble rocks provides a good first-approximation of the distribution of karst and potential karst areas. In the arid western states, the extent of soluble rock outcrop tends to overestimate the extent of regions that might be considered as karst under current climatic conditions, but the new dataset encompasses those regions nonetheless. This database will be revised as needed, and the present map will be updated as new information is incorporated.

  6. A Spatiotemporal Database to Track Human Scrub Typhus Using the VectorMap Application

    PubMed Central

    Kelly, Daryl J.; Foley, Desmond H.; Richards, Allen L.

    2015-01-01

    Scrub typhus is a potentially fatal mite-borne febrile illness, primarily of the Asia-Pacific Rim. With an endemic area greater than 13 million km2 and millions of people at risk, scrub typhus remains an underreported, often misdiagnosed febrile illness. A comprehensive, updatable map of the true distribution of cases has been lacking, and therefore the true risk of disease within the very large endemic area remains unknown. The purpose of this study was to establish a database and map to track human scrub typhus. An online search using PubMed and the United States Armed Forces Pest Management Board Literature Retrieval System was performed to identify articles describing human scrub typhus cases both within and outside the traditionally accepted endemic regions. Using World Health Organization guidelines, stringent criteria were used to establish diagnoses for inclusion in the database. The preliminary screening of 181 scrub typhus publications yielded 145 publications that met the case criterion, 267 case records, and 13 serosurvey records that could be georeferenced, describing 13,739 probable or confirmed human cases in 28 countries. A map service has been established within VectorMap (www.vectormap.org) to explore the role that relative location of vectors, hosts, and the pathogen play in the transmission of mite-borne scrub typhus. The online display of scrub typhus cases in VectorMap illustrates their presence and provides an up-to-date geographic distribution of proven scrub typhus cases. PMID:26678263

  7. A Spatiotemporal Database to Track Human Scrub Typhus Using the VectorMap Application.

    PubMed

    Kelly, Daryl J; Foley, Desmond H; Richards, Allen L

    2015-12-01

    Scrub typhus is a potentially fatal mite-borne febrile illness, primarily of the Asia-Pacific Rim. With an endemic area greater than 13 million km2 and millions of people at risk, scrub typhus remains an underreported, often misdiagnosed febrile illness. A comprehensive, updatable map of the true distribution of cases has been lacking, and therefore the true risk of disease within the very large endemic area remains unknown. The purpose of this study was to establish a database and map to track human scrub typhus. An online search using PubMed and the United States Armed Forces Pest Management Board Literature Retrieval System was performed to identify articles describing human scrub typhus cases both within and outside the traditionally accepted endemic regions. Using World Health Organization guidelines, stringent criteria were used to establish diagnoses for inclusion in the database. The preliminary screening of 181 scrub typhus publications yielded 145 publications that met the case criterion, 267 case records, and 13 serosurvey records that could be georeferenced, describing 13,739 probable or confirmed human cases in 28 countries. A map service has been established within VectorMap (www.vectormap.org) to explore the role that relative location of vectors, hosts, and the pathogen play in the transmission of mite-borne scrub typhus. The online display of scrub typhus cases in VectorMap illustrates their presence and provides an up-to-date geographic distribution of proven scrub typhus cases.

  8. Applying manifold learning techniques to the CAESAR database

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Patrick, James; Arnold, Gregory; Ferrara, Matthew

    2010-04-01

    Understanding and organizing data is the first step toward exploiting sensor phenomenology for dismount tracking. What image features are good for distinguishing people, and what measurements, or combinations of measurements, can be used to classify the dataset by demographics, including gender, age, and race? A particular technique, Diffusion Maps, has demonstrated the potential to extract features that intuitively make sense [1]. We want to develop an understanding of this tool by validating existing results on the Civilian American and European Surface Anthropometry Resource (CAESAR) database. This database, provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International, is a rich dataset that includes 40 traditional anthropometric measurements of 4400 human subjects. If we can identify the defining features for classification from this database, the next question will be to determine a subset of these features that can be measured from imagery. This paper briefly describes the Diffusion Map technique, shows potential for dimension reduction of the CAESAR database, and describes interesting problems to be further explored.
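    A minimal diffusion-map construction, run here on synthetic data rather than the CAESAR measurements, is sketched below: a Gaussian kernel on pairwise distances, row-normalisation into a Markov matrix, and an embedding from the leading non-trivial eigenvectors. The bandwidth choice and dimensions are assumptions for illustration.

      # Minimal diffusion-map sketch on synthetic "subjects x measurements" data.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 40))              # 200 subjects, 40 measurements

      d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
      eps = np.median(d2)                         # kernel bandwidth (one common choice)
      K = np.exp(-d2 / eps)
      P = K / K.sum(axis=1, keepdims=True)        # row-stochastic diffusion operator

      vals, vecs = np.linalg.eig(P)
      order = np.argsort(-vals.real)              # leading eigenvalue (=1) is trivial
      embedding = vecs.real[:, order[1:3]] * vals.real[order[1:3]]   # 2-D coordinates
      print(embedding.shape)                      # (200, 2)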

  9. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore, I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
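    The core genomic operation benchmarked, finding overlapping regions with SQL, can be illustrated with a small self-contained example; the schema and query below are an illustrative SQLite sketch, not the paper's RegMap implementation or its benchmark setup.

      # Interval-overlap join in SQL (illustrative SQLite sketch).
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE tf_sites (chrom TEXT, start_pos INT, end_pos INT, name TEXT);
          CREATE TABLE histone  (chrom TEXT, start_pos INT, end_pos INT, mark TEXT);
          INSERT INTO tf_sites VALUES ('chr1', 100, 200, 'HNF4G'),
                                      ('chr1', 900, 950, 'HNF4G');
          INSERT INTO histone  VALUES ('chr1', 150, 400, 'H3K27ac'),
                                      ('chr2', 100, 300, 'H3K27ac');
      """)
      # Two intervals on one chromosome overlap iff each starts before the other ends.
      rows = conn.execute("""
          SELECT t.name, h.mark,
                 MAX(t.start_pos, h.start_pos) AS ov_start,
                 MIN(t.end_pos,  h.end_pos)    AS ov_end
          FROM tf_sites t
          JOIN histone  h ON t.chrom = h.chrom
                         AND t.start_pos < h.end_pos
                         AND h.start_pos < t.end_pos
      """).fetchall()
      print(rows)   # [('HNF4G', 'H3K27ac', 150, 200)]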

  10. Rice proteome analysis: a step toward functional analysis of the rice genome.

    PubMed

    Komatsu, Setsuko; Tanaka, Naoki

    2005-03-01

    The technique of proteome analysis using 2-DE has the power to monitor global changes that occur in the protein complement of tissues and subcellular compartments. In this review, we describe construction of the rice proteome database, the cataloging of rice proteins, and the functional characterization of some of the proteins identified. Initially, proteins extracted from various tissues and organelles were separated by 2-DE and an image analyzer was used to construct a display or reference map of the proteins. The rice proteome database currently contains 23 reference maps based on 2-DE of proteins from different rice tissues and subcellular compartments. These reference maps comprise 13 129 rice proteins, and the amino acid sequences of 5092 of these proteins are entered in the database. Major proteins involved in growth or stress responses have been identified by using a proteomics approach and some of these proteins have unique functions. Furthermore, initial work has also begun on analyzing the phosphoproteome and protein-protein interactions in rice. The information obtained from the rice proteome database will aid in the molecular cloning of rice genes and in predicting the function of unknown proteins.

  11. Proteome reference map and regulation network of neonatal rat cardiomyocyte

    PubMed Central

    Li, Zi-jian; Liu, Ning; Han, Qi-de; Zhang, You-yi

    2011-01-01

    Aim: To study and establish a proteome reference map and regulation network of neonatal rat cardiomyocytes. Methods: Cultured cardiomyocytes of neonatal rats were used. All proteins expressed in the cardiomyocytes were separated and identified by two-dimensional polyacrylamide gel electrophoresis (2-DE) and matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS). Biological networks and pathways of the neonatal rat cardiomyocytes were analyzed using the Ingenuity Pathway Analysis (IPA) program (www.ingenuity.com). A 2-DE database was made accessible on-line by the Make2ddb package on a web server. Results: More than 1000 proteins were separated on 2D gels, and 148 proteins were identified. The identified proteins were used for the construction of an extensible markup language-based database. Biological networks and pathways were constructed to analyze the functions associated with cardiomyocyte proteins in the database. The 2-DE database of rat cardiomyocyte proteins can be accessed at http://2d.bjmu.edu.cn. Conclusion: A proteome reference map and regulation network of the neonatal rat cardiomyocytes have been established, which may serve as an international platform for storage, analysis and visualization of cardiomyocyte proteomic data. PMID:21841810

  12. MagnaportheDB: a federated solution for integrating physical and genetic map data with BAC end derived sequences for the rice blast fungus Magnaporthe grisea.

    PubMed

    Martin, Stanton L; Blackmon, Barbara P; Rajagopalan, Ravi; Houfek, Thomas D; Sceeles, Robert G; Denn, Sheila O; Mitchell, Thomas K; Brown, Douglas E; Wing, Rod A; Dean, Ralph A

    2002-01-01

    We have created a federated database for genome studies of Magnaporthe grisea, the causal agent of rice blast disease, by integrating end sequence data from BAC clones, genetic marker data and BAC contig assembly data. A library of 9216 BAC clones providing >25-fold coverage of the entire genome was end sequenced and fingerprinted by HindIII digestion. The Image/FPC software package was then used to generate an assembly of 188 contigs covering >95% of the genome. The database contains the results of this assembly integrated with hybridization data of genetic markers to the BAC library. AceDB was used for the core database engine, and a MySQL relational database, populated with numerical representations of BAC clones within FPC contigs, was used to create appropriately scaled images. The database is being used to facilitate sequencing efforts. The database also gives researchers mapping known genes or other sequences of interest rapid and easy access to the fundamental organization of the M. grisea genome. This database, MagnaportheDB, can be accessed on the web at http://www.cals.ncsu.edu/fungal_genomics/mgdatabase/int.htm.

  13. Evaluation and comparison of bioinformatic tools for the enrichment analysis of metabolomics data.

    PubMed

    Marco-Ramell, Anna; Palau-Rodriguez, Magali; Alay, Ania; Tulipani, Sara; Urpi-Sarda, Mireia; Sanchez-Pla, Alex; Andres-Lacueva, Cristina

    2018-01-02

    Bioinformatic tools for the enrichment of 'omics' datasets facilitate interpretation and understanding of data. To date, few are suitable for metabolomics datasets. The main objective of this work is to give a critical overview, for the first time, of the performance of these tools. To that aim, datasets from metabolomic repositories were selected and enriched data were created. Both types of data were analysed with these tools and the outputs were thoroughly examined. An exploratory multivariate analysis of the most used tools for the enrichment of metabolite sets, based on a non-metric multidimensional scaling (NMDS) of Jaccard's distances, was performed and mirrored their diversity. Codes (identifiers) of the metabolites of the datasets were searched in different metabolite databases (HMDB, KEGG, PubChem, ChEBI, BioCyc/HumanCyc, LipidMAPS, ChemSpider, METLIN and Recon2). The databases that covered the most identifiers of the metabolites in the datasets were PubChem, followed by METLIN and ChEBI. However, these databases had duplicated entries and might present false positives. The performance of over-representation analysis (ORA) tools, including BioCyc/HumanCyc, ConsensusPathDB, IMPaLA, MBRole, MetaboAnalyst, Metabox, MetExplore, MPEA, PathVisio and Reactome, and the mapping tool KEGGREST, was examined. Results were mostly consistent among tools and between real and enriched data despite the variability of the tools. Nevertheless, a few controversial results, such as differences in the total number of metabolites, were also found. Disease-based enrichment analyses were also assessed, but they were not found to be accurate, probably because metabolite disease sets are not up to date and because of the difficulty of predicting diseases from a list of metabolites. We have extensively reviewed the state of the art of the available range of tools for metabolomic datasets, the completeness of metabolite databases, the performance of ORA methods and disease-based analyses. Despite the variability of the tools, they provided consistent results independent of their analytic approach. However, more work on the completeness of metabolite and pathway databases is required, which strongly affects the accuracy of enrichment analyses. Improvements will be translated into more accurate and global insights into the metabolome.
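    At the heart of the over-representation analysis (ORA) tools compared here is usually a hypergeometric test of a metabolite set against a background; the short sketch below shows that test with invented numbers, not with data from the study.

      # Hypergeometric over-representation test for one metabolite set
      # (all counts are invented for illustration).
      from scipy.stats import hypergeom

      N = 2000   # metabolites in the background/database
      K = 40     # metabolites annotated to the pathway of interest
      n = 50     # metabolites flagged as "of interest" in the experiment
      k = 6      # of those, how many fall in the pathway

      # P(X >= k): probability of at least k pathway members by chance.
      p_value = hypergeom.sf(k - 1, N, K, n)
      print(f"ORA p-value = {p_value:.4f}")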

  14. Variations in Medical Subject Headings (MeSH) mapping: from the natural language of patron terms to the controlled vocabulary of mapped lists*

    PubMed Central

    Gault, Lora V.; Shultz, Mary; Davies, Kathy J.

    2002-01-01

    Objectives: This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Methods: Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively they mapped to MeSH. Data were collected for mapping success, accessibility of the MeSH term within the mapped list, and total number of MeSH choices within each list. Results: The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. Conclusions: If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users. PMID:11999175

  15. Variations in Medical Subject Headings (MeSH) mapping: from the natural language of patron terms to the controlled vocabulary of mapped lists.

    PubMed

    Gault, Lora V; Shultz, Mary; Davies, Kathy J

    2002-04-01

    This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively they mapped to MeSH. Data were collected for mapping success, accessibility of the MeSH term within the mapped list, and total number of MeSH choices within each list. The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users.

  16. The World Karst Aquifer Mapping project: concept, mapping procedure and map of Europe

    NASA Astrophysics Data System (ADS)

    Chen, Zhao; Auler, Augusto S.; Bakalowicz, Michel; Drew, David; Griger, Franziska; Hartmann, Jens; Jiang, Guanghui; Moosdorf, Nils; Richts, Andrea; Stevanovic, Zoran; Veni, George; Goldscheider, Nico

    2017-05-01

    Karst aquifers contribute substantially to freshwater supplies in many regions of the world, but are vulnerable to contamination and difficult to manage because of their unique hydrogeological characteristics. Many karst systems are hydraulically connected over wide areas and require transboundary exploration, protection and management. In order to obtain a better global overview of karst aquifers, to create a basis for sustainable international water-resources management, and to increase the awareness in the public and among decision makers, the World Karst Aquifer Mapping (WOKAM) project was established. The goal is to create a world map and database of karst aquifers, as a further development of earlier maps. This paper presents the basic concepts and the detailed mapping procedure, using France as an example to illustrate the step-by-step workflow, which includes generalization, differentiation of continuous and discontinuous carbonate and evaporite rock areas, and the identification of non-exposed karst aquifers. The map also shows selected caves and karst springs, which are collected in an associated global database. The draft karst aquifer map of Europe shows that 21.6% of the European land surface is characterized by the presence of (continuous or discontinuous) carbonate rocks; about 13.8% of the land surface is carbonate rock outcrop.

  17. Optimal Path Planning Program for Autonomous Speed Sprayer in Orchard Using Order-Picking Algorithm

    NASA Astrophysics Data System (ADS)

    Park, T. S.; Park, S. J.; Hwang, K. Y.; Cho, S. I.

    This study was conducted to develop a software program that computes an optimal path for autonomous navigation in orchards, especially for a speed sprayer. The possibility of autonomous navigation in orchards has been shown by other studies, which minimized the distance error between the planned path and the performed path. However, research on planning an optimal path for a speed sprayer in an orchard is hardly found. In this study, a digital map and a database for the orchard were designed, containing GPS coordinate information (coordinates of trees and the orchard boundary) and entity information (heights and widths of trees, radius of the main stem of trees, diseases of trees). An order-picking algorithm, which has been used for warehouse management, was used to calculate the optimal path based on the digital map. The database for the digital map was created using Microsoft Access, and the graphic interface for the database was built using Microsoft Visual C++ 6.0. Based on the digital map, it was possible to search and display information about the orchard boundary, tree locations, and the daily plan for scattering chemicals, and to plan an optimal path for different orchards and circumstances (starting the speed sprayer at a different location, scattering chemicals for only selected trees).
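    The order-picking algorithm itself is not detailed in the abstract; as a rough stand-in, the sketch below plans a greedy nearest-neighbour tour over selected tree coordinates taken from a digital map. The coordinates are invented, and the real algorithm used in the study is more sophisticated than this greedy heuristic.

      # Greedy nearest-neighbour tour over selected trees (rough stand-in only).
      import math

      trees = {"T1": (2, 1), "T2": (8, 3), "T3": (5, 7), "T4": (1, 6)}  # metres

      def nn_tour(start, targets):
          """Visit every target once, always moving to the closest remaining tree."""
          pos, route, remaining = start, [], dict(targets)
          while remaining:
              name = min(remaining, key=lambda t: math.dist(pos, remaining[t]))
              pos = remaining.pop(name)
              route.append(name)
          return route

      # The sprayer starts at the orchard gate (0, 0) and treats only selected trees.
      print(nn_tour((0, 0), {k: trees[k] for k in ["T1", "T3", "T4"]}))  # ['T1', 'T4', 'T3']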

  18. [A basic research to share Fourier transform near-infrared spectrum information resource].

    PubMed

    Zhang, Lu-Da; Li, Jun-Hui; Zhao, Long-Lian; Zhao, Li-Li; Qin, Fang-Li; Yan, Yan-Lu

    2004-08-01

    A method to share the information resource in a database of Fourier transform near-infrared (FTNIR) spectra of agricultural products and to utilize the spectrum information fully is explored in this paper. Mapping spectrum information from one instrument to another is studied so that the spectrum information can be expressed accurately between instruments. The mapped spectrum information is then used to establish a mathematical model for quantitative analysis without including standard samples. For the protein content of twenty-two wheat samples, the correlation coefficient r between the model estimates and the Kjeldahl values is 0.941 with a relative error of 3.28%, while for the other model, established using standard samples, the correlation coefficient r is 0.963 with a relative error of 2.4%. This shows that the spectrum information can be shared by using the mapped spectrum information. It can therefore be concluded that the spectrum information in one FTNIR spectrum database can be transformed into another instrument's mapped spectrum information, which makes full use of the information resource in the FTNIR spectrum database and realizes resource sharing between different instruments.
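
    The sketch below illustrates one simple way such an instrument-to-instrument mapping could be set up: a per-wavelength linear transform fitted on samples measured on both instruments. This is an assumption for illustration only; the paper's actual mapping procedure is not described in the abstract.

      # Sketch of mapping FTNIR spectra from instrument B into instrument A's
      # space via a per-wavelength linear fit on shared transfer samples.
      # All data here are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      n_samples, n_wavelengths = 10, 50
      spectra_a = rng.random((n_samples, n_wavelengths))                  # "master" instrument
      spectra_b = 0.9 * spectra_a + 0.05 + rng.normal(0, 0.01, spectra_a.shape)

      slope = np.empty(n_wavelengths)
      offset = np.empty(n_wavelengths)
      for j in range(n_wavelengths):                                      # fit a = slope*b + offset
          A = np.column_stack([spectra_b[:, j], np.ones(n_samples)])
          (slope[j], offset[j]), *_ = np.linalg.lstsq(A, spectra_a[:, j], rcond=None)

      mapped_b = slope * spectra_b + offset               # instrument B spectra in A's space
      print(float(np.max(np.abs(mapped_b - spectra_a))))  # small residual on the transfer set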

  19. Identification of positive selection in disease response genes within members of the Poaceae.

    PubMed

    Rech, Gabriel E; Vargas, Walter A; Sukno, Serenella A; Thon, Michael R

    2012-12-01

    Millions of years of coevolution between plants and pathogens can leave footprints on their genomes, and genes involved in this interaction are expected to show patterns of positive selection in which novel, beneficial alleles are rapidly fixed within the population. Using information about upregulated genes in maize during Colletotrichum graminicola infection and resources available in the Phytozome database, we looked for evidence of positive selection in the Poaceae lineage acting on protein-coding sequences related to plant defense. We found six genes with evidence of positive selection and another eight with sites showing episodic selection. Some of them have already been described as evolving under positive selection, but others are reported here for the first time, including genes encoding isocitrate lyase, dehydrogenases, a multidrug transporter, a protein containing a putative leucine-rich repeat and other proteins with unknown functions. Mapping positively selected residues onto the predicted 3-D structure of the proteins showed that most of them are located on the surface, where proteins are in contact with other molecules. We present here a set of Poaceae genes that are likely to be involved in plant defense mechanisms and show evidence of positive selection. These genes are excellent candidates for future functional validation.

  20. Toward uniform probabilistic seismic hazard assessments for Southeast Asia

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Shi, X.; Ornthammarath, T.; Warnitchai, P.; Kosuwan, S.; Thant, M.; Nguyen, P. H.; Nguyen, L. M.; Solidum, R., Jr.; Irsyam, M.; Hidayati, S.; Sieh, K.

    2017-12-01

    Although most Southeast Asian countries have seismic hazard maps, various methodologies and quality result in appreciable mismatches at national boundaries. We aim to conduct a uniform assessment across the region through standardized earthquake and fault databases, ground-shaking scenarios, and regional hazard maps. Our earthquake database contains earthquake parameters obtained from global and national seismic networks, harmonized by removal of duplicate events and the use of moment magnitude. Our active-fault database includes fault parameters from previous studies and from the databases implemented for national seismic hazard maps. Another crucial input for seismic hazard assessment is proper evaluation of ground-shaking attenuation. Since few ground-motion prediction equations (GMPEs) have used local observations from this region, we evaluated attenuation by comparing instrumental observations and felt intensities for recent earthquakes with the ground shaking predicted by published GMPEs. We then incorporate the best-fitting GMPEs and site conditions into our seismic hazard assessments. Based on the databases and appropriate GMPEs, we have constructed regional probabilistic seismic hazard maps. The assessment shows the highest seismic hazard levels near those faults with high slip rates, including the Sagaing Fault in central Myanmar, the Sumatran Fault in Sumatra, the Palu-Koro, Matano and Lawanopo Faults in Sulawesi, and the Philippine Fault across several islands of the Philippines. In addition, our assessment demonstrates the important fact that regions with low earthquake probability may well have a higher aggregate probability of future earthquakes, since they encompass much larger areas than the areas of high probability. The significant irony, then, is that in areas of low to moderate probability, where building codes usually provide less seismic resilience, seismic risk is likely to be greater. Infrastructural damage in East Malaysia during the 2015 Sabah earthquake offers a case in point.

  1. Digital Mapping Techniques '11–12 workshop proceedings

    USGS Publications Warehouse

    Soller, David R.

    2014-01-01

    At these meetings, oral and poster presentations and special discussion sessions emphasized: (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase formats; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  2. Development of a database system for mapping insertional mutations onto the mouse genome with large-scale experimental data

    PubMed Central

    2009-01-01

    Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and generates a large volume of experimental data continuously. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which was developed to serve the on-going large-scale PB insertional mutagenesis project. The lightweight enterprise-level development framework Struts-Spring-Hibernate is used here to ensure constructive and flexible support for the application. The MP-PBmice database system has three major features: strict access control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on a daily basis, and is capable of providing real-time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects, and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505

  3. Physiographic rim of the Grand Canyon, Arizona: a digital database

    USGS Publications Warehouse

    Billingsley, George H.; Hampton, Haydee M.

    1999-01-01

    This Open-File report is a digital physiographic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The report does include, however, PostScript and PDF format plot files, each containing an image of the map. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled "For Those Who Don't Use Digital Geologic Map Databases" below. This physiographic map of the Grand Canyon is modified from previous versions by Billingsley and Hendricks (1989), and Billingsley and others (1997). The boundary is drawn approximately along the topographic rim of the Grand Canyon and its tributary canyons between Lees Ferry and Lake Mead (shown in red). Several isolated small mesas, buttes, and plateaus are within this area, which overall encompasses about 2,600 square miles. The Grand Canyon lies within the southwestern part of the Colorado Plateaus of northern Arizona between Lees Ferry, Colorado River Mile 0, and Lake Mead, Colorado River Mile 277. The Colorado River is the corridor for raft trips through the Grand Canyon. Limestone rocks of the Kaibab Formation form most of the north and south rims of the Grand Canyon, and a few volcanic rocks form the north rim of parts of the Uinkaret and Shivwits Plateaus. Limestones of the Redwall Limestone and lower Supai Group form the rim of the Hualapai Plateau area, and limestones of Devonian and Cambrian age form the boundary rim near the mouth of the Grand Canyon at Lake Mead. The natural physiographic boundary of the Grand Canyon is roughly the area from which a visitor would first view any part of the Grand Canyon and its tributaries.

  4. Report on the Project for Establishment of the Standardized Korean Laboratory Terminology Database, 2015.

    PubMed

    Jung, Bo Kyeung; Kim, Jeeyong; Cho, Chi Hyun; Kim, Ju Yeon; Nam, Myung Hyun; Shin, Bong Kyung; Rho, Eun Youn; Kim, Sollip; Sung, Heungsup; Kim, Shinyoung; Ki, Chang Seok; Park, Min Jung; Lee, Kap No; Yoon, Soo Young

    2017-04-01

    The National Health Information Standards Committee was established in 2004 in Korea. The practical subcommittee for laboratory test terminology was placed in charge of standardizing laboratory medicine terminology in Korean. We aimed to establish a standardized Korean laboratory terminology database, Korea-Logical Observation Identifier Names and Codes (K-LOINC), based on former products sponsored by this committee. The primary product was revised based on the opinions of specialists. Next, we mapped the electronic data interchange (EDI) codes that were revised in 2014 to the corresponding K-LOINC. We established a database of synonyms, including the laboratory codes of three reference laboratories and four tertiary hospitals in Korea. Furthermore, we supplemented the clinical microbiology section of K-LOINC using an alternative mapping strategy. We also examined other systems that utilize laboratory codes in order to assess the compatibility of K-LOINC with statistical standards for a number of tests. A total of 48,990 laboratory codes were adopted (21,539 new and 16,330 revised). All of the LOINC synonyms were translated into Korean, and 39,347 Korean synonyms were added. Moreover, 21,773 synonyms were added from reference laboratories and tertiary hospitals. Alternative strategies were established for mapping within the microbiology domain. When we applied these to a smaller hospital, the mapping rate was successfully increased. Finally, we confirmed K-LOINC compatibility with other statistical standards, including a newly proposed EDI code system. This project successfully established an up-to-date standardized Korean laboratory terminology database, as well as an updated EDI mapping to facilitate the introduction of standard terminology into institutions. © 2017 The Korean Academy of Medical Sciences.
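
    A toy sketch of the synonym-based mapping step is given below: local test names are normalized and looked up in a synonym table that points to standard codes. The local codes and names are invented placeholders; the real mapping works against the full LOINC and EDI tables described above.

      # Toy synonym lookup mapping local laboratory test names to standard codes.
      # Codes and names are placeholders for illustration only.
      synonyms = {
          "fasting blood sugar": "1558-6",      # placeholder standard-code assignment
          "glucose, fasting serum": "1558-6",
          "hemoglobin a1c": "4548-4",
      }

      local_tests = {"L0001": "Fasting Blood Sugar", "L0002": "Hemoglobin A1c", "L0003": "Occult Blood"}

      for code, name in local_tests.items():
          mapped = synonyms.get(name.strip().lower(), "UNMAPPED")
          print(code, name, "->", mapped)       # L0003 stays UNMAPPED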

  5. Colorado Late Cenozoic Fault and Fold Database and Internet Map Server: User-friendly technology for complex information

    USGS Publications Warehouse

    Morgan, K.S.; Pattyn, G.J.; Morgan, M.L.

    2005-01-01

    Internet mapping applications for geologic data allow simultaneous data delivery and collection, enabling quick data modification while efficiently supplying the end user with information. Utilizing Web-based technologies, the Colorado Geological Survey's Colorado Late Cenozoic Fault and Fold Database was transformed from a monothematic, nonspatial Microsoft Access database into a complex information set incorporating multiple data sources. The resulting user-friendly format supports easy analysis and browsing. The core of the application is the Microsoft Access database, which contains information compiled from available literature about faults and folds that are known or suspected to have moved during the late Cenozoic. The database contains nonspatial fields such as structure type, age, and rate of movement. Geographic locations of the fault and fold traces were compiled from previous studies at 1:250,000 scale to form a spatial database containing information such as length and strike. Integration of the two databases allowed both spatial and nonspatial information to be presented on the Internet as a single dataset (http://geosurvey.state.co.us/pubs/ceno/). The user-friendly interface enables users to view and query the data in an integrated manner, thus providing multiple ways to locate desired information. Retaining the digital data format also allows continuous data updating and quick delivery of newly acquired information. This dataset is a valuable resource to anyone interested in earthquake hazards and the activity of faults and folds in Colorado. Additional geologic hazard layers and imagery may aid in decision support and hazard evaluation. The up-to-date and customizable maps are invaluable tools for researchers or the public.

  6. Enhancing the GABI-Kat Arabidopsis thaliana T-DNA Insertion Mutant Database by Incorporating Araport11 Annotation.

    PubMed

    Kleinboelting, Nils; Huep, Gunnar; Weisshaar, Bernd

    2017-01-01

    SimpleSearch provides access to a database containing information about T-DNA insertion lines of the GABI-Kat collection of Arabidopsis thaliana mutants. These mutants are an important tool for reverse genetics, and GABI-Kat is the second largest collection of such T-DNA insertion mutants. Insertion sites were deduced from flanking sequence tags (FSTs), and the database contains information about mutant plant lines as well as insertion alleles. Here, we describe improvements within the interface (available at http://www.gabi-kat.de/db/genehits.php) and with regard to the database content that have been realized in the last five years. These improvements include the integration of the Araport11 genome sequence annotation data containing the recently updated A. thaliana structural gene descriptions, an updated visualization component that displays groups of insertions with very similar insertion positions, mapped confirmation sequences, and primers. The visualization component provides a quick way to identify insertions of interest, and access to improved data about the exact structure of confirmed insertion alleles. In addition, the database content has been extended by incorporating additional insertion alleles that were detected during the confirmation process, as well as by adding new FSTs that have been produced during continued efforts to complement gaps in FST availability. Finally, the current database content regarding predicted and confirmed insertion alleles as well as primer sequences has been made available as downloadable flat files. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.

  7. Real-time Author Co-citation Mapping for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia; White, Howard D.; Buzydlowski, Jan

    2003-01-01

    Describes the design and implementation of a prototype visualization system, AuthorLink, to enhance author searching. AuthorLink is based on author co-citation analysis and visualization mapping algorithms. AuthorLink produces interactive author maps in real time from a database of 1.26 million records supplied by the Institute for Scientific…
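
    The core computation behind such maps is author co-citation counting: two authors are co-cited whenever they appear together in the same document's reference list. A minimal sketch with invented records:

      # Count author co-citations from per-document lists of cited authors.
      # Records are invented; a real system would read them from the database.
      from collections import Counter
      from itertools import combinations

      records = [
          ["White", "Lin", "Small"],
          ["White", "Small"],
          ["Lin", "Small", "Garfield"],
      ]

      cocitation = Counter()
      for cited_authors in records:
          for a, b in combinations(sorted(set(cited_authors)), 2):
              cocitation[(a, b)] += 1

      print(cocitation.most_common(3))   # most strongly co-cited author pairs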

  8. SenseLab

    PubMed Central

    Crasto, Chiquito J.; Marenco, Luis N.; Liu, Nian; Morse, Thomas M.; Cheung, Kei-Hoi; Lai, Peter C.; Bahl, Gautam; Masiar, Peter; Lam, Hugo Y.K.; Lim, Ernest; Chen, Huajin; Nadkarni, Prakash; Migliore, Michele; Miller, Perry L.; Shepherd, Gordon M.

    2009-01-01

    This article presents the latest developments in neuroscience information dissemination through the SenseLab suite of databases: NeuronDB, CellPropDB, ORDB, OdorDB, OdorMapDB, ModelDB and BrainPharm. These databases include information related to: (i) neuronal membrane properties and neuronal models, and (ii) genetics, genomics, proteomics and imaging studies of the olfactory system. We describe here: the new features for each database, the evolution of SenseLab’s unifying database architecture and instances of SenseLab database interoperation with other neuroscience online resources. PMID:17510162

  9. Recently Active Traces of the Berryessa Fault, California: A Digital Database

    USGS Publications Warehouse

    Lienkaemper, James J.

    2012-01-01

    The purpose of this map is to show the location of and evidence for recent movement on active fault traces within the Berryessa section and parts of adjacent sections of the Green Valley Fault Zone, California. The location and recency of the mapped traces are primarily based on geomorphic expression of the fault as interpreted from large-scale 2010 aerial photography and from 2007 and 2011 0.5 and 1.0 meter bare-earth LiDAR imagery (that is, high-resolution topographic data). In a few places, evidence of fault creep and offset Holocene strata in trenches and natural exposures has confirmed the activity of some of these traces. This publication is formatted both as a digital database for use within a geographic information system (GIS) and, for broader public access, as map images that may be browsed on-line or downloaded as a summary map. The report text describes the types of scientific observations used to make the map, gives references pertaining to the fault and the evidence of faulting, and provides guidance for use of and limitations of the map.

  10. Digital Database of Recently Active Traces of the Hayward Fault, California

    USGS Publications Warehouse

    Lienkaemper, James J.

    2006-01-01

    The purpose of this map is to show the location of and evidence for recent movement on active fault traces within the Hayward Fault Zone, California. The mapped traces represent the integration of the following three different types of data: (1) geomorphic expression, (2) creep (aseismic fault slip), and (3) trench exposures. This publication is a major revision of an earlier map (Lienkaemper, 1992), which both brings up to date the evidence for faulting and makes it available formatted both as a digital database for use within a geographic information system (GIS) and for broader public access interactively using widely available viewing software. The pamphlet describes in detail the types of scientific observations used to make the map, gives references pertaining to the fault and the evidence of faulting, and provides guidance for use of and limitations of the map. [Last revised Nov. 2008, a minor update for 2007 LiDAR and recent trench investigations; see version history below.]

  11. [Implementation of Oncomelania hupensis monitoring system based on Baidu Map].

    PubMed

    Zhi-Hua, Chen; Yi-Sheng, Zhu; Zhi-Qiang, Xue; Xue-Bing, Li; Yi-Min, Ding; Li-Jun, Bi; Kai-Min, Gao; You, Zhang

    2017-10-25

    To construct the Oncomelania hupensis snail monitoring system based on the Baidu Map. Basic environmental information about historical and existing snail habitats was collected together with the monitoring data on different kinds of O. hupensis snails, and then the O. hupensis snail monitoring system was built. Geographic Information System (GIS) technology, electronic fencing and the Application Program Interface (API) were applied to set up electronic fences around the snail surveillance environments, and the electronic fences were connected to the snail surveillance database. The O. hupensis snail monitoring system based on the Baidu Map was built, comprising three modules: the O. hupensis Snail Monitoring Environmental Database, the Dynamic Monitoring Platform and the Electronic Map. The information about monitoring O. hupensis snails could be obtained through the computer and the smartphone simultaneously. The O. hupensis snail monitoring system, which is based on the Baidu Map, is a visual platform for following the process of snail searching and molluscaciding.
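
    The "electronic fence" idea amounts to testing whether a reported survey point lies inside a surveillance polygon. The sketch below uses a plain ray-casting test with made-up coordinates; a production system would rely on the mapping platform's own geofencing services instead.

      # Point-in-polygon test for a simple electronic fence (ray casting).
      # Coordinates are invented for illustration.
      def point_in_polygon(x, y, polygon):
          inside = False
          n = len(polygon)
          for i in range(n):
              x1, y1 = polygon[i]
              x2, y2 = polygon[(i + 1) % n]
              if (y1 > y) != (y2 > y):                      # edge crosses the horizontal ray
                  x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                  if x < x_cross:
                      inside = not inside
          return inside

      fence = [(120.10, 31.50), (120.20, 31.50), (120.20, 31.60), (120.10, 31.60)]
      print(point_in_polygon(120.15, 31.55, fence))   # True: inside the fence
      print(point_in_polygon(120.30, 31.55, fence))   # False: outside the fence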

  12. Linking Supermarket Sales Data To Nutritional Information: An Informatics Feasibility Study

    PubMed Central

    Brinkerhoff, Kristina M.; Brewster, Philip J.; Clark, Edward B.; Jordan, Kristine C.; Cummins, Mollie R.; Hurdle, John F.

    2011-01-01

    Grocery sales are a data source of potential value to dietary assessment programs in public health informatics. However, the lack of a computable method for mapping between nutrient and food item information represents a major obstacle. We studied the feasibility of linking point-of-sale data to USDA-SR nutrient database information in a sustainable way. We analyzed 2,009,533 de-identified sales items purchased by 32,785 customers over a two-week period. We developed a method using the item category hierarchy in the supermarket’s database to link purchased items to records from the USDA-SR. We describe our methodology and its rationale and limitations. Approximately 70% of all items were mapped and linked to the SR; approximately 90% of all items could be mapped with an equivalent expenditure of additional effort. 100% of all items were mapped to USDA standard food groups. We conclude that mapping grocery sales data to nutritional information is feasible. PMID:22195115
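
    The linkage strategy can be pictured as a join through the store's category hierarchy, as in the sketch below. The category names, database numbers and nutrient values are invented placeholders, not the study's actual mapping table.

      # Toy join of sale items to nutrient records via the item-category hierarchy.
      category_to_ndb = {
          "produce/fruit/apples": "09003",      # placeholder nutrient-database number
          "dairy/milk/2 percent": "01079",
      }
      ndb_nutrients = {
          "09003": {"kcal_per_100g": 52},
          "01079": {"kcal_per_100g": 50},
      }
      sales = [
          {"upc": "000001", "category": "produce/fruit/apples", "qty": 3},
          {"upc": "000002", "category": "dairy/milk/2 percent", "qty": 1},
          {"upc": "000003", "category": "bakery/seasonal", "qty": 2},     # no mapping available
      ]

      mapped, unmapped = [], []
      for item in sales:
          ndb = category_to_ndb.get(item["category"])
          (mapped if ndb else unmapped).append(item["upc"])

      print(f"mapped {len(mapped)}/{len(sales)} items; unmapped: {unmapped}")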

  13. National Water Quality Standards Database (NWQSD)

    EPA Pesticide Factsheets

    The National Water Quality Standards Database (WQSDB) provides access to EPA and state water quality standards (WQS) information in text, tables, and maps. This data source was last updated in December 2007 and will no longer be updated.

  14. Influence of neighbourhood information on 'Local Climate Zone' mapping in heterogeneous cities

    NASA Astrophysics Data System (ADS)

    Verdonck, Marie-Leen; Okujeni, Akpona; van der Linden, Sebastian; Demuzere, Matthias; De Wulf, Robert; Van Coillie, Frieke

    2017-10-01

    Local climate zone (LCZ) mapping is an emerging field in urban climate research. LCZs potentially provide an objective framework to assess urban form and function worldwide. The scheme is currently being used to globally map LCZs as a part of the World Urban Database and Access Portal Tools (WUDAPT) initiative. So far, most of the LCZ maps lack proper quantitative assessment, challenging the generic character of the WUDAPT workflow. Using the standard method introduced by the WUDAPT community, difficulties arose concerning the built zones due to high levels of heterogeneity. To overcome this problem, a contextual classifier was adopted in the mapping process. This paper quantitatively assesses the influence of neighbourhood information on the LCZ mapping result of three cities in Belgium: Antwerp, Brussels and Ghent. Overall accuracies for the maps were respectively 85.7 ± 0.5, 79.6 ± 0.9 and 90.2 ± 0.4%. The approach presented here results in overall accuracies of 93.6 ± 0.2, 92.6 ± 0.3 and 95.6 ± 0.3% for Antwerp, Brussels and Ghent. The results thus indicate a positive influence of neighbourhood information for all study areas, with an increase in overall accuracies of 7.9, 13.0 and 5.4%. This paper reaches two main conclusions. Firstly, evidence was introduced on the relevance of a quantitative accuracy assessment in LCZ mapping, showing that the accuracies reported in previous papers are not easily achieved. Secondly, the method presented in this paper proves to be highly effective in Belgian cities, and given its open character shows promise for application in other heterogeneous cities worldwide.
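
    One simple way to feed neighbourhood information into a per-pixel classifier is to stack moving-window statistics of each band onto the original features, as sketched below with synthetic data. The window size and the filter are illustrative assumptions, not the contextual classifier used in the paper.

      # Add neighbourhood context: stack a moving-window mean of each band
      # onto the per-pixel features before classification.  Data are synthetic.
      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(1)
      bands = rng.random((4, 100, 100))                 # 4 bands, 100 x 100 pixels

      window = 5                                        # neighbourhood size in pixels
      context = np.stack([uniform_filter(b, size=window) for b in bands])

      features = np.concatenate([bands, context])       # per-pixel + neighbourhood features
      print(features.shape)                             # (8, 100, 100)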

  15. ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments.

    PubMed

    Chèneby, Jeanne; Gheorghe, Marius; Artufel, Marie; Mathelier, Anthony; Ballester, Benoit

    2018-01-04

    With this latest release of ReMap (http://remap.cisreg.eu), we present a unique collection of regulatory regions in human, as a result of a large-scale integrative analysis of ChIP-seq experiments for hundreds of transcriptional regulators (TRs) such as transcription factors, transcriptional co-activators and chromatin regulators. In 2015, we introduced the ReMap database to capture the genome regulatory space by integrating public ChIP-seq datasets, covering 237 TRs across 13 million (M) peaks. In this release, we have extended this catalog to constitute a unique collection of regulatory regions. Specifically, we have collected, analyzed and retained after quality control a total of 2829 ChIP-seq datasets available from public sources, covering a total of 485 TRs with a catalog of 80M peaks. Additionally, the updated database includes new search features for TR names as well as aliases, including cell line names, and the ability to navigate the data directly within genome browsers via public track hubs. Finally, full access to this catalog is available online together with a TR binding enrichment analysis tool. ReMap 2018 provides a significant update of the ReMap database, providing an in-depth view of the complexity of the regulatory landscape in human. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Digital Bedrock Compilation: A Geodatabase Covering Forest Service Lands in California

    NASA Astrophysics Data System (ADS)

    Elder, D.; de La Fuente, J. A.; Reichert, M.

    2010-12-01

    This digital database contains bedrock geologic mapping for Forest Service lands within California. This compilation began in 2004 and the first version was completed in 2005. The second publication of this geodatabase was completed in 2010 and filled major gaps in the southern Sierra Nevada and Modoc/Medicine Lake/Warner Mountains areas. This digital map database was compiled from previously published and unpublished geologic mapping, with source mapping and review from the California Geological Survey, the U.S. Geological Survey and others. Much of the source data was itself compilation mapping. This geodatabase is huge, containing ~107,000 polygons and ~280,000 arcs. Mapping was compiled from more than one thousand individual sources and covers over 41,000,000 acres (~166,000 km2). It was compiled from source maps at various scales - from ~1:4,000 to 1:250,000 - and represents the best available geologic mapping at the largest scale possible. An estimated 70-80% of the source information was digitized from geologic mapping at 1:62,500 scale or better. The Forest Service ACT2 Enterprise Team compiled the bedrock mapping and developed a geodatabase to store this information. This geodatabase supports feature classes for polygons (e.g., map units), lines (e.g., contacts, boundaries, faults and structural lines) and points (e.g., orientation data, structural symbology). Lookup tables provide detailed information for feature class items. Lookup/type tables contain legal values and hierarchical groupings for geologic ages and lithologies. Type tables link coded values with descriptions for line and point attributes, such as line type, line location and point type. This digital mapping is at the core of many quantitative analyses and derivative map products. Queries of the database are used to produce maps and to quantify rock types of interest. These include the following: (1) ultramafic rocks - where hazards from naturally occurring asbestos are high, (2) granitic rocks - increased erosion hazards, (3) limestone, chert, sedimentary rocks - paleontological resources (Potential Fossil Yield Classification maps), (4) calcareous rocks (cave resources, water chemistry), and (5) lava flows - lava tubes (more caves). Map unit groupings (e.g., belts, terranes, tectonic & geomorphic provinces) can also be derived from the geodatabase. Digital geologic mapping was used in ground water modeling to predict the effects of tunneling through the San Bernardino Mountains. Bedrock mapping is used in models that characterize watershed sediment regimes and quantify anthropogenic influences. When combined with digital geomorphology mapping, this geodatabase helps to assess landslide hazards.

  17. The role of rivers in ancient societies, or how man transformed the alluvial landscapes of Khuzestan (SW Iran)

    NASA Astrophysics Data System (ADS)

    Walstra, J.; Heyvaert, V.; Verkinderen, P.

    2012-04-01

    For many thousands of years the alluvial plains of Khuzestan (SW Iran) have been subject to intensive settlement and agriculture. Ancient societies depended on the position of major rivers for their economic survival, and hence there is ample evidence of human activities trying to control the distribution of water. Throughout the plains, ancient irrigation and settlement patterns are visible, although traces are rapidly disappearing due to expanding modern land use. The aim of this study is to unlock and integrate the rich information on landscape and archaeology, which only survives through the available historical imagery and some limited archaeological surveys. A GIS-based geomorphological mapping procedure was developed, using a variety of imagery, including historical aerial photographs, CORONA, Landsat and SPOT images. In addition, supported by the evidence from previous geological field surveys, archaeological elements were identified, mapped and included in a GIS database. The resulting map layers display the positions of successive palaeochannel belts and extensive irrigation networks, together indicating a complex alluvial history characterized by avulsions and significant human impact. As shown in several case studies, integrating information from multiple disciplines provides valuable insights into the complex landscape evolution of this region, both from geological and historical perspectives. Remote sensing and GIS are essential tools in such a research context. The presented work was undertaken within the framework of the Interuniversity Attraction Pole "Greater Mesopotamia: Reconstruction of its Environment and History" (IAP 6/34), funded by the Belgian Science Policy.

  18. One-Step Nucleic Acid Amplification (OSNA): A fast molecular test based on CK19 mRNA concentration for assessment of lymph-nodes metastases in early stage endometrial cancer.

    PubMed

    Fanfani, Francesco; Monterossi, Giorgia; Ghizzoni, Viola; Rossi, Esther D; Dinoi, Giorgia; Inzani, Frediano; Fagotti, Anna; Gueli Alletti, Salvatore; Scarpellini, Francesca; Nero, Camilla; Santoro, Angela; Scambia, Giovanni; Zannoni, Gian F

    2018-01-01

    The aim of the current study is to evaluate the detection rate of micro- and macro-metastases by One-Step Nucleic Acid Amplification (OSNA) compared with frozen section examination and subsequent ultra-staging examination in early stage endometrial cancer (EC). From March 2016 to June 2016, data from 40 consecutive FIGO stage I EC patients were prospectively collected in an electronic database. Sentinel lymph node mapping was performed in all patients. All mapped nodes were removed and processed. Sentinel lymph nodes were sectioned, and alternate sections were examined by OSNA and by frozen section analysis, respectively. After frozen section, the residual tissue from each block was processed with step-level sections (each step at 200 microns), including H&E and IHC slides. Sentinel lymph node mapping was successful in 29 patients (72.5%). In the remaining 11 patients (27.5%), a systematic pelvic lymphadenectomy was performed. OSNA assay sensitivity and specificity were 87.5% and 100%, respectively. Positive and negative predictive values were 100% and 99%, respectively, with a diagnostic accuracy of 99%. As far as frozen section examination and subsequent ultra-staging analysis were concerned, sensitivity and specificity were 50% and 94.4%, respectively; positive and negative predictive values were 14.3% and 99%, respectively, with an accuracy of 93.6%. In one patient, despite negative OSNA and frozen section analysis of the sentinel node, a macro-metastasis in one non-sentinel node was found. The combination of the OSNA procedure with sentinel lymph node mapping could represent an efficient intra-operative tool for the selection of early-stage EC patients to be submitted to systematic lymphadenectomy.
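
    For reference, the performance figures quoted above follow from standard confusion-matrix formulas; the sketch below computes them from made-up counts, not from the study's data.

      # Sensitivity, specificity, PPV, NPV and accuracy from confusion counts.
      # The example counts are illustrative only.
      def diagnostic_metrics(tp, fp, tn, fn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
              "accuracy": (tp + tn) / (tp + fp + tn + fn),
          }

      print(diagnostic_metrics(tp=7, fp=0, tn=92, fn=1))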

  19. Digital Mapping Techniques '08—Workshop Proceedings, Moscow, Idaho, May 18–21, 2008

    USGS Publications Warehouse

    Soller, David R.

    2009-01-01

    The Digital Mapping Techniques '08 (DMT'08) workshop was attended by more than 100 technical experts from 40 agencies, universities, and private companies, including representatives from 24 State geological surveys. This year's meeting, the twelfth in the annual series, was hosted by the Idaho Geological Survey, from May 18-21, 2008, on the University of Idaho campus in Moscow, Idaho. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  20. BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources

    PubMed Central

    2009-01-01

    Background DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly when DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. Results We have built a general DNA barcode data processing system, BioBarcode, with open source software; it is a general-purpose database and server that uses MySQL RDBMS 5.0, BLAST2, and the Apache httpd server. An exemplary BioBarcode database has around 11,300 specimen entries (including GenBank data) and registers the biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance of DNA sequence analyses. Conclusion Asia has a very high degree of biodiversity and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community, such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system http://www.asianbarcode.org. PMID:19958506

  1. Developing a Global Database of Historic Flood Events to Support Machine Learning Flood Prediction in Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Tellman, B.; Sullivan, J.; Kettner, A.; Brakenridge, G. R.; Slayback, D. A.; Kuhn, C.; Doyle, C.

    2016-12-01

    There is an increasing need to understand flood vulnerability as the societal and economic effects of flooding increase. Risk models from insurance companies and flood models from hydrologists must be calibrated based on flood observations in order to make future predictions that can improve planning and help societies reduce future disasters. Specifically, to improve these models, both traditional methods of flood prediction from physically based models and data-driven techniques, such as machine learning, require spatial flood observations to validate model outputs and quantify uncertainty. A key dataset that is missing for flood model validation is a global historical geo-database of flood event extents. Currently, the most advanced database of historical flood extent is hosted and maintained at the Dartmouth Flood Observatory (DFO), which has catalogued 4320 floods (1985-2015) but has mapped only 5% of these floods. We are addressing this data gap by mapping the inventory of floods in the DFO database to create a first-of-its-kind, comprehensive, global and historical geospatial database of flood events. To do so, we combine water detection algorithms on MODIS and Landsat 5, 7 and 8 imagery in Google Earth Engine to map discrete flood events. The created database will be available in the Earth Engine Catalogue for download by country, region, or time period. This dataset can be leveraged for new data-driven hydrologic modeling using machine learning algorithms in Earth Engine's highly parallelized computing environment, and we will show examples for New York and Senegal.
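
    A very reduced illustration of the water-detection step is shown below: a normalized difference water index (NDWI) computed from green and near-infrared reflectance, thresholded to a water mask. The arrays are synthetic and the threshold is an assumption; the project itself runs its detection algorithms on MODIS and Landsat imagery inside Google Earth Engine.

      # Threshold a McFeeters-style NDWI = (green - NIR) / (green + NIR)
      # to obtain a crude water/flood mask.  Reflectance values are synthetic.
      import numpy as np

      rng = np.random.default_rng(42)
      green = rng.uniform(0.02, 0.30, size=(50, 50))
      nir = rng.uniform(0.01, 0.40, size=(50, 50))

      ndwi = (green - nir) / (green + nir + 1e-9)
      water_mask = ndwi > 0.0                       # positive NDWI flagged as water

      print(f"water pixels: {int(water_mask.sum())} of {water_mask.size}")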

  2. interPopula: a Python API to access the HapMap Project dataset

    PubMed Central

    2010-01-01

    Background The HapMap project is a publicly available catalogue of common genetic variants that occur in humans, currently including several million SNPs across 1115 individuals spanning 11 different populations. This important database does not provide any programmatic access to the dataset, and furthermore no standard relational database interface is provided. Results interPopula is a Python API to access the HapMap dataset. interPopula provides integration facilities with both the Python ecosystem of software (e.g. Biopython and matplotlib) and other relevant human population datasets (e.g. Ensembl gene annotation and UCSC Known Genes). A set of guidelines and code examples to address possible inconsistencies across heterogeneous data sources is also provided. Conclusions interPopula is a straightforward and flexible Python API that facilitates the construction of scripts and applications that require access to the HapMap dataset. PMID:21210977

  3. Molecular Interaction Map of the Mammalian Cell Cycle Control and DNA Repair Systems

    PubMed Central

    Kohn, Kurt W.

    1999-01-01

    Eventually to understand the integrated function of the cell cycle regulatory network, we must organize the known interactions in the form of a diagram, map, and/or database. A diagram convention was designed capable of unambiguous representation of networks containing multiprotein complexes, protein modifications, and enzymes that are substrates of other enzymes. To facilitate linkage to a database, each molecular species is symbolically represented only once in each diagram. Molecular species can be located on the map by means of indexed grid coordinates. Each interaction is referenced to an annotation list where pertinent information and references can be found. Parts of the network are grouped into functional subsystems. The map shows how multiprotein complexes could assemble and function at gene promoter sites and at sites of DNA damage. It also portrays the richness of connections between the p53-Mdm2 subsystem and other parts of the network. PMID:10436023

  4. RF-Based Location Using Interpolation Functions to Reduce Fingerprint Mapping

    PubMed Central

    Ezpeleta, Santiago; Claver, José M.; Pérez-Solano, Juan J.; Martí, José V.

    2015-01-01

    Indoor RF-based localization using fingerprint mapping requires an initial training step, which represents a time-consuming process. This localization methodology needs a database populated with RSSI (Received Signal Strength Indicator) measurements from the communication transceivers, taken at specific locations within the localization area. However, the real-world localization environment is dynamic, and it is necessary to rebuild the fingerprint database when some environmental changes are made. This paper explores the use of different interpolation functions to complete the fingerprint mapping needed to achieve the sought accuracy, thereby reducing the effort in the training step. Also, different distributions of test maps and reference points have been evaluated, showing the validity of this proposal and the necessary trade-offs. Results reported show that the same or similar localization accuracy can be achieved even when only 50% of the initial fingerprint reference points are taken. PMID:26516862
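
    The idea of completing a sparse fingerprint map by interpolation can be sketched as below, where RSSI values surveyed at a few reference points are interpolated onto a dense grid. The toy propagation model and the linear interpolation are assumptions for illustration; the paper compares several specific interpolation functions that are not reproduced here.

      # Interpolate sparse RSSI fingerprints onto a dense grid.
      # Reference points and the toy path-loss model are synthetic.
      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(7)
      ref_points = rng.uniform(0, 20, size=(40, 2))                        # surveyed (x, y), metres
      rssi = -40 - 2.5 * np.linalg.norm(ref_points - [0.0, 0.0], axis=1)   # dBm from one AP

      gx, gy = np.meshgrid(np.linspace(0, 20, 41), np.linspace(0, 20, 41))
      rssi_grid = griddata(ref_points, rssi, (gx, gy), method="linear")    # NaN outside convex hull

      print(float(np.nanmin(rssi_grid)), float(np.nanmax(rssi_grid)))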

  5. Map Database for Surficial Materials in the Conterminous United States

    USGS Publications Warehouse

    Soller, David R.; Reheis, Marith C.; Garrity, Christopher P.; Van Sistine, D. R.

    2009-01-01

    The Earth's bedrock is overlain in many places by a loosely compacted and mostly unconsolidated blanket of sediments in which soils commonly are developed. These sediments generally were eroded from underlying rock, and then were transported and deposited. In places, they exceed 1000 ft (330 m) in thickness. Where the sediment blanket is absent, bedrock is either exposed or has been weathered to produce a residual soil. For the conterminous United States, a map by Soller and Reheis (2004, scale 1:5,000,000; http://pubs.usgs.gov/of/2003/of03-275/) shows these sediments and the weathered, residual material; for ease of discussion, these are referred to as 'surficial materials'. That map was produced as a PDF file, from an Adobe Illustrator-formatted version of the provisional GIS database. The provisional GIS files were further processed without modifying the content of the published map, and are here published.

  6. Generative Topographic Mapping (GTM): Universal Tool for Data Visualization, Structure-Activity Modeling and Dataset Comparison.

    PubMed

    Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A

    2012-04-01

    Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches like Principal Component Analysis, Sammon Mapping or Self-Organizing Maps, the great advantage of GTMs is that they provide data probability distribution functions (PDF), both in the high-dimensional space defined by molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method of global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
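
    For reference, the Bhattacharyya kernel mentioned above is the standard overlap measure between two probability density functions (this is its textbook definition, not a formula quoted from the paper); for dataset PDFs represented by GTM as Gaussian mixtures it reads

      K(p, q) = \int \sqrt{p(\mathbf{x})\, q(\mathbf{x})}\, d\mathbf{x},
      \qquad
      p(\mathbf{x}) = \sum_k \pi_k\, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),

    so that K is large when the two library PDFs occupy the same regions of descriptor (or latent) space and small when they do not.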

  7. Evaluation of a Myopic Normative Database for Analysis of Retinal Nerve Fiber Layer Thickness.

    PubMed

    Biswas, Sayantan; Lin, Chen; Leung, Christopher K S

    2016-09-01

    Analysis of retinal nerve fiber layer (RNFL) abnormalities with optical coherence tomography in eyes with high myopia has been complicated by high rates of false-positive errors. An understanding of whether the application of a myopic normative database can improve the specificity for detection of RNFL abnormalities in eyes with high myopia is relevant. To evaluate the diagnostic performance of a myopic normative database for detection of RNFL abnormalities in eyes with high myopia (spherical equivalent, -6.0 diopters [D] or less). In this cross-sectional study, 180 eyes with high myopia (mean [SD] spherical equivalent, -8.0 [1.8] D) from 180 healthy individuals were included in the myopic normative database. Another 46 eyes with high myopia from healthy individuals (mean [SD] spherical equivalent, -8.1 [1.8] D) and 74 eyes from patients with high myopia and glaucoma (mean [SD] spherical equivalent, -8.3 [1.9] D) were included for evaluation of specificity and sensitivity. The 95th and 99th percentiles of the mean and clock-hour circumpapillary RNFL thicknesses and the individual superpixel thicknesses of the RNFL thickness map measured by spectral-domain optical coherence tomography were calculated from the 180 eyes with high myopia. Participants were recruited from January 2, 2013, to December 30, 2015. The following 6 criteria of RNFL abnormalities were examined: (1) mean circumpapillary RNFL thickness below the lower 95th or (2) the lower 99th percentile; (3) one clock-hour or more for RNFL thickness below the lower 95th or (4) the lower 99th percentile; and (5) twenty contiguous superpixels or more of RNFL thickness in the RNFL thickness map below the lower 95th or (6) the lower 99th percentile. Specificities and sensitivities for detection of RNFL abnormalities. Of the 46 healthy eyes and 74 eyes with glaucoma studied (from 39 men and 38 women), the myopic normative database showed a higher specificity (63.0%-100%) than did the built-in normative database of the optical coherence tomography instrument (8.7%-87.0%) for detection of RNFL abnormalities across all the criteria examined (differences in specificities between 13.0% [95% CI, 1.1%-24.9%; P = .01] and 54.3% [95% CI, 37.8%-70.9%; P < .001]) except for the criterion of mean RNFL thickness below the lower 99th percentile, in which both normative databases had the same specificities (100%) but the myopic normative database exhibited a higher sensitivity (71.6% vs 86.5%; difference in sensitivities, 14.9% [95% CI, 4.6%-25.1%; P = .002]). The application of a myopic normative database improved the specificity without compromising the sensitivity compared with the optical coherence tomography instrument's built-in normative database for detection of RNFL abnormalities in eyes with high myopia. Inclusion of myopic normative databases should be considered in optical coherence tomography instruments.
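
    The percentile criteria above can be pictured with a small sketch: cut-offs are taken from the normative distribution and a new measurement is flagged if it falls below them. The thickness values are simulated, and "below the lower 95th percentile" is interpreted here as falling below the 5th percentile of the normative eyes.

      # Build lower 95% / 99% normative cut-offs and flag a new measurement.
      # Normative values are simulated, not the study's data.
      import numpy as np

      rng = np.random.default_rng(3)
      normative_mean_rnfl = rng.normal(95, 10, size=180)   # microns, 180 normal myopic eyes

      cut_95 = np.percentile(normative_mean_rnfl, 5)       # lower 95% limit
      cut_99 = np.percentile(normative_mean_rnfl, 1)       # lower 99% limit

      new_eye = 72.0                                       # hypothetical measurement
      print(f"abnormal at 95% limit: {new_eye < cut_95}; at 99% limit: {new_eye < cut_99}")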

  8. National Assessment of Oil and Gas Project: Areas of Historical Oil and Gas Exploration and Production in the United States

    USGS Publications Warehouse

    Biewick, Laura

    2008-01-01

    This report contains maps and associated spatial data showing historical oil and gas exploration and production in the United States. Because of the proprietary nature of many oil and gas well databases, the United States was divided into cells of one-quarter square mile, and the production status of all wells in a given cell was aggregated. Base-map reference data are included, using the U.S. Geological Survey (USGS) National Map, the USGS and American Geological Institute (AGI) Global GIS, and a World Shaded Relief map service from the ESRI Geography Network. A hardcopy map was created to synthesize recorded exploration data from 1859, when the first oil well was drilled in the U.S., to 2005. In addition to the hardcopy map product, the data have been refined and made more accessible through the use of Geographic Information System (GIS) tools. The cell data are included in a GIS database constructed for spatial analysis via the USGS Internet Map Service or by importing the data into GIS software such as ArcGIS. The USGS internet map service provides a number of useful and sophisticated geoprocessing and cartographic functions via an internet browser. Also included is a video clip of U.S. oil and gas exploration and production through time.
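
    The cell aggregation described above can be sketched as a simple spatial binning step: each well is assigned to a quarter-square-mile cell (0.5 mi by 0.5 mi) and only per-cell status counts are kept, so individual proprietary well locations are not exposed. The wells and statuses below are invented, and a real workflow would grid in a projected coordinate system rather than arbitrary mile offsets.

      # Aggregate well records into 0.5 mi x 0.5 mi cells and count statuses.
      from collections import defaultdict

      CELL_MILES = 0.5
      wells = [
          {"x_mi": 12.3, "y_mi": 40.7, "status": "producing"},
          {"x_mi": 12.4, "y_mi": 40.6, "status": "dry"},
          {"x_mi": 30.1, "y_mi": 11.9, "status": "producing"},
      ]

      cells = defaultdict(lambda: defaultdict(int))
      for w in wells:
          cell = (int(w["x_mi"] // CELL_MILES), int(w["y_mi"] // CELL_MILES))
          cells[cell][w["status"]] += 1

      for cell, counts in sorted(cells.items()):
          print(cell, dict(counts))    # e.g. (24, 81) {'producing': 1, 'dry': 1}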

  9. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470

  10. Municipal GIS incorporates database from pipe lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-05-01

    League City, a coastal area community of about 35,000 population in Galveston County, Texas, has developed an impressive municipal GIS program. The system represents a textbook example of what a municipal GIS can represent and produce. In 1987, the city engineer was authorized to begin developing the area information system. City survey personnel used state-of-the-art Global Positioning System (GPS) technology to establish a first-order monumentation program with a grid of 78 monuments set over 54 sq mi. Street, subdivision, survey, utilities, taxing criteria, hydrology, topography, environmental and other concerns were layered into the municipal GIS database program. Today, area developers submit all layout, design, and land use plan data to the city in digital format without hard copy. Multi-color maps with high-resolution graphics can be quickly generated for cross-referenced queries sensitive to political, environmental, engineering, taxing, and/or utility capacity jurisdictions. The design of both the GIS and the database system is described.

  11. Effects of Soil Data and Simulation Unit Resolution on Quantifying Changes of Soil Organic Carbon at Regional Scale with a Biogeochemical Process Model

    PubMed Central

    Zhang, Liming; Yu, Dongsheng; Shi, Xuezheng; Xu, Shengxiang; Xing, Shihe; Zhao, Yongcong

    2014-01-01

    Soil organic carbon (SOC) models are often applied to regions with high heterogeneity but limited spatially differentiated soil information and simulation unit resolution. This study, carried out in the Tai-Lake region of China, quantified the uncertainty derived from applying the DeNitrification-DeComposition (DNDC) biogeochemical model in an area with heterogeneous soil properties and different simulation units. Three soil attribute databases of different resolution, a polygonal capture of mapping units at 1∶50,000 (P5), a county-based database of 1∶50,000 (C5) and a county-based database of 1∶14,000,000 (C14), were used as inputs for regional DNDC simulation. The P5 and C5 databases were combined with the 1∶50,000 digital soil map, which is the most detailed soil database for the Tai-Lake region. The C14 database was combined with the 1∶14,000,000 digital soil map, which is a coarse database often used for modeling at a national or regional scale in China. The soil polygons of the P5 database and the county boundaries of the C5 and C14 databases were used as basic simulation units. The results project that from 1982 to 2000, the total SOC change in the top layer (0–30 cm) of the 2.3 M ha of paddy soil in the Tai-Lake region was +1.48 Tg C, −3.99 Tg C and −15.38 Tg C based on the P5, C5 and C14 databases, respectively. With the total SOC change modeled from the P5 inputs as the baseline, which has the advantage of using a detailed, polygon-based soil dataset, the relative deviations of C5 and C14 were 368% and 1126%, respectively. The comparison illustrates that DNDC simulation is strongly influenced by the choice of fundamental geographic resolution as well as the detail of the input soil attributes. The results also indicate that improving the framework of DNDC is essential for creating accurate models of the soil carbon cycle. PMID:24523922
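
    The relative deviations quoted above follow from a simple baseline comparison; recomputing them from the rounded SOC changes in the abstract gives values close to the published 368% and 1126% (the small differences come from rounding of the inputs).

      # Relative deviation of the coarser databases from the P5 baseline,
      # using the rounded SOC changes quoted in the abstract (Tg C).
      baseline, c5, c14 = 1.48, -3.99, -15.38
      for name, value in [("C5", c5), ("C14", c14)]:
          rel_dev = abs(value - baseline) / abs(baseline) * 100
          print(f"{name}: {rel_dev:.0f}%")   # C5: ~370%, C14: ~1139%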

  12. The Evolution of Topics and Leading Trends over the Past 15 Years of Research on the Quality of Higher Education in China: Based on Keyword Co-Occurrence Knowledge Map Analysis of the Research Papers Published from 2000 to 2014 in the CSSCI Database

    ERIC Educational Resources Information Center

    Qu, Xia; Yang, Xiaotong

    2016-01-01

    Using CiteSpace to draw a keyword co-occurrence knowledge map for 1,048 research papers on the quality of higher education from 2000 to 2014 in the Chinese Social Sciences Citation Index database, we found that over the past 15 years, research on the quality of Chinese higher education was clearly oriented toward policies, and a good interactive…

  13. Failure mode and effects analysis outputs: are they valid?

    PubMed Central

    2012-01-01

    Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: (1) face validity, by comparing the FMEA participants’ mapped processes with observational work; (2) content validity, by presenting the FMEA findings to other healthcare professionals; (3) criterion validity, by comparing the FMEA findings with data reported on the trust’s incident report database; and (4) construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust’s incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA’s methodology for scoring failures, there were discrepancies between the teams’ estimates and similar incidents reported on the trust’s incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA’s validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues. PMID:22682433
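
    The construct-validity point about the risk priority number can be illustrated directly: the RPN multiplies three ordinal 1-10 scores (severity, probability, detectability), so very different failure modes can receive identical priorities. The failure modes and scores below are hypothetical.

      # RPN = severity x probability x detectability on ordinal 1-10 scales.
      # Two hypothetical failure modes end up with the same RPN of 72.
      def rpn(severity, probability, detectability):
          return severity * probability * detectability

      failure_modes = {
          "wrong drug dose prescribed": (9, 2, 4),    # severe but rare
          "label printed slightly late": (2, 9, 4),   # minor but frequent
      }
      for name, scores in failure_modes.items():
          print(name, "-> RPN", rpn(*scores))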

  14. Geologic map of Gunnison Gorge National Conservation Area, Delta and Montrose Counties, Colorado

    USGS Publications Warehouse

    Kellogg, Karl; Hansen, Wallace R.; Tucker, Karen S.; VanSistine, D. Paco

    2004-01-01

    This publication consists of a geologic map database and printed map sheet. The map sheet has a geologic map as the centerpiece, and accompanying text describes (1) the various geological units, (2) the uplift history of the region and how it relates to canyon downcutting, (3) the ecology of the gorge, and (4) human history. The map is intended to be used by the general public as well as scientists and goes hand-in-hand with a separate geological guide to Gunnison Gorge.

  15. A design for the geoinformatics system

    NASA Astrophysics Data System (ADS)

    Allison, M. L.

    2002-12-01

    Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.

  16. Some issues in data model mapping

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Alsabbagh, Jamal R.

    1985-01-01

    Numerous data models have been reported in the literature since the early 1970's. They have been used as database interfaces and as conceptual design tools. The mapping between schemas expressed according to the same data model or according to different models is interesting for theoretical and practical purposes. This paper addresses some of the issues involved in such a mapping. Of special interest are the identification of the mapping parameters and some current approaches for handling the various situations that require a mapping.

  17. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. Depending on time constraints and the availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models of buildings, trees, and ground as output. Buildings and ground are textured by means of available images. This facilitates the orientation in the model and the interactive verification of the road objects that were initially classified as unknown. The three main modules of the texturing algorithm are: pose estimation (if the videos are not geo-referenced), occlusion analysis, and texture synthesis.
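
    One plausible reading of the evidence mapping is sketched below under stated assumptions: each method's two distributions are folded into a basic belief assignment over {correct, incorrect, unknown}, with "unknown" standing for the full frame, and assignments from several methods are then combined with Dempster's rule. The mass assignment is an assumption for illustration; the paper's exact formulation is not reproduced here.

        # Minimal Dempster-Shafer sketch; the mass assignment below is an assumed,
        # illustrative reading of the paper's two-distribution evidence model.
        def to_mass(p_correct, p_applicable):
            # Road-state probability weighted by model applicability; the remaining
            # mass expresses ignorance ("unknown" = the whole frame {correct, incorrect}).
            return {
                "correct": p_correct * p_applicable,
                "incorrect": (1.0 - p_correct) * p_applicable,
                "unknown": 1.0 - p_applicable,
            }

        def dempster_combine(m1, m2):
            # Dempster's rule on the frame {correct, incorrect}, with "unknown" = ignorance.
            conflict = m1["correct"] * m2["incorrect"] + m1["incorrect"] * m2["correct"]
            k = 1.0 - conflict
            return {
                "correct": (m1["correct"] * m2["correct"] + m1["correct"] * m2["unknown"]
                            + m1["unknown"] * m2["correct"]) / k,
                "incorrect": (m1["incorrect"] * m2["incorrect"] + m1["incorrect"] * m2["unknown"]
                              + m1["unknown"] * m2["incorrect"]) / k,
                "unknown": (m1["unknown"] * m2["unknown"]) / k,
            }

        print(dempster_combine(to_mass(0.8, 0.9), to_mass(0.6, 0.5)))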

  18. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-03-12

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques, along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory, is discussed.
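
    The RANSAC plane-fitting step named above admits a compact sketch; the iteration count and inlier threshold here are illustrative defaults, not the values used in the paper's road-extraction pipeline.

        # Minimal RANSAC plane fit for an (N, 3) point cloud; parameters are illustrative.
        import numpy as np

        def ransac_plane(points, n_iter=500, threshold=0.05, seed=0):
            # Returns (unit normal, offset d) with n.p + d = 0, plus the inlier mask.
            rng = np.random.default_rng(seed)
            best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                      # skip degenerate (collinear) samples
                    continue
                normal /= norm
                d = -normal @ sample[0]
                inliers = np.abs(points @ normal + d) < threshold
                if inliers.sum() > best_inliers.sum():
                    best_model, best_inliers = (normal, d), inliers
            return best_model, best_inliers

        # Example: a noisy, nearly horizontal road surface.
        rng = np.random.default_rng(1)
        road = np.column_stack([rng.uniform(0, 10, (200, 2)), rng.normal(0.0, 0.01, 200)])
        (normal, d), mask = ransac_plane(road)
        print(normal.round(2), int(mask.sum()))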

  19. Development of Mobile Mapping System for 3D Road Asset Inventory

    PubMed Central

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques, along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory, is discussed. PMID:26985897

  20. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as real-time hazard display and/or to update an obstruction database.
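
    The altitude calculation can be illustrated with the simplest case, a level, nadir-looking camera: ground features sweep through the field of view at an angular rate proportional to the measured pixel velocity, and altitude is roughly ground speed divided by that angular rate. The sketch below is a simplification under that assumption; the actual software also folds in camera attitude, and all numbers are hypothetical.

        # Simplified sketch: level flight, nadir-looking camera, hypothetical numbers.
        import math

        def altitude_from_pixel_flow(ground_speed_mps, pixel_velocity_px_s,
                                     fov_deg, image_width_px):
            # Angular rate of ground features = pixel velocity * angular size of one pixel;
            # altitude ~ ground speed / angular rate.
            rad_per_pixel = math.radians(fov_deg) / image_width_px
            angular_rate = pixel_velocity_px_s * rad_per_pixel          # rad/s
            return ground_speed_mps / angular_rate                      # metres

        # 60 m/s over ground, features crossing at 120 px/s in a 40-degree, 640-px-wide image.
        print(round(altitude_from_pixel_flow(60.0, 120.0, 40.0, 640)))  # ~458 m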

  1. The Topography of Names and Places.

    ERIC Educational Resources Information Center

    Morehead, Joe

    1999-01-01

    Discusses geographic naming with Geographic Information Systems (GIS) technology. Highlights include the Geographic Names Information System (GNIS) online database; United States Geological Survey (USGS) national mapping information; the USGS-Microsoft connection; and panoramic maps and the small LizardTech company. (AEF)

  2. TranscriptomeBrowser 3.0: introducing a new compendium of molecular interactions and a new visualization tool for the study of gene regulatory networks.

    PubMed

    Lepoivre, Cyrille; Bergon, Aurélie; Lopez, Fabrice; Perumal, Narayanan B; Nguyen, Catherine; Imbert, Jean; Puthier, Denis

    2012-01-31

    Deciphering gene regulatory networks by in silico approaches is a crucial step in the study of the molecular perturbations that occur in diseases. The development of regulatory maps is a tedious process requiring the comprehensive integration of various lines of evidence scattered over biological databases. Thus, the research community would greatly benefit from having a unified database storing known and predicted molecular interactions. Furthermore, given the intrinsic complexity of the data, the development of new tools offering integrated and meaningful visualizations of molecular interactions is necessary to help users draw new hypotheses without being overwhelmed by the density of the subsequent graph. We extend the previously developed TranscriptomeBrowser database with a set of tables containing 1,594,978 human and mouse molecular interactions. The database includes: (i) predicted regulatory interactions (computed by scanning vertebrate alignments with a set of 1,213 position weight matrices), (ii) potential regulatory interactions inferred from systematic analysis of ChIP-seq experiments, (iii) regulatory interactions curated from the literature, (iv) predicted post-transcriptional regulation by micro-RNA, (v) protein kinase-substrate interactions and (vi) physical protein-protein interactions. In order to easily retrieve and efficiently analyze these interactions, we developed InteractomeBrowser, a graph-based knowledge browser that comes as a plug-in for TranscriptomeBrowser. The first objective of InteractomeBrowser is to provide a user-friendly tool to get new insight into any gene list by providing a context-specific display of putative regulatory and physical interactions. To achieve this, InteractomeBrowser relies on a "cell compartments-based layout" that makes use of a subset of the Gene Ontology to map gene products onto relevant cell compartments. This layout is particularly powerful for visual integration of heterogeneous biological information and is a productive avenue in generating new hypotheses. The second objective of InteractomeBrowser is to fill the gap between interaction databases and dynamic modeling. It is thus compatible with the network analysis software Cytoscape and with the Gene Interaction Network simulation software (GINsim). We provide examples illustrating the benefits of this visualization tool for large gene set analysis related to thymocyte differentiation. The InteractomeBrowser plugin is a powerful tool to get quick access to a knowledge database that includes both predicted and validated molecular interactions. InteractomeBrowser is available through the TranscriptomeBrowser framework and can be found at: http://tagc.univ-mrs.fr/tbrowser/. Our database is updated on a regular basis.

  3. Mapping the literature of transcultural nursing*

    PubMed Central

    Murphy, Sharon C.

    2006-01-01

    Overview: No bibliometric studies of the literature of the field of transcultural nursing have been published. This paper describes a citation analysis as part of the project undertaken by the Nursing and Allied Health Resources Section of the Medical Library Association to map the literature of nursing. Objective: The purpose of this study was to identify the core literature and determine which databases provided the most complete access to the transcultural nursing literature. Methods: Cited references from essential source journals were analyzed for a three-year period. Eight major databases were compared for indexing coverage of the identified core list of journals. Results: This study identifies 138 core journals. Transcultural nursing relies on journal literature from associated health sciences fields in addition to nursing. Books provide an important format. Nearly all cited references were from the previous 18 years. In comparing indexing coverage among 8 major databases, 3 databases rose to the top. Conclusions: No single database can claim comprehensive indexing coverage for this broad field. It is essential to search multiple databases. Based on this study, PubMed/MEDLINE, Social Sciences Citation Index, and CINAHL provide the best coverage. Collections supporting transcultural nursing require robust access to literature beyond nursing publications. PMID:16710461

  4. Faults, lineaments, and earthquake epicenters digital map of the Pahute Mesa 30' x 60' Quadrangle, Nevada

    USGS Publications Warehouse

    Minor, S.A.; Vick, G.S.; Carr, M.D.; Wahl, R.R.

    1996-01-01

    This map database, identified as Faults, lineaments, and earthquake epicenters digital map of the Pahute Mesa 30' X 60' quadrangle, Nevada, has been approved for release and publication by the Director of the USGS. Although this database has been subjected to rigorous review and is substantially complete, the USGS reserves the right to revise the data pursuant to further analysis and review. Furthermore, it is released on condition that neither the USGS nor the United States Government may be held liable for any damages resulting from its authorized or unauthorized use. This digital map compilation incorporates fault, air photo lineament, and earthquake epicenter data from within the Pahute Mesa 30' by 60' quadrangle, southern Nye County, Nevada (fig. 1). The compilation contributes to the U.S. Department of Energy's Yucca Mountain Project, established to determine whether or not the Yucca Mountain site is suitable for the disposal of high-level nuclear waste. Studies of local and regional faulting and earthquake activity, including the features depicted in this compilation, are carried out to help characterize seismic hazards and tectonic processes that may be relevant to the future stability of Yucca Mountain. The Yucca Mountain site is located in the central part of the Beatty 30' by 60' quadrangle approximately 15 km south of the south edge of the Pahute Mesa quadrangle (fig. 1). The U.S. Geological Survey participates in studies of the Yucca Mountain site under Interagency Agreement DE-AI08-78ET44802. The map compilation is only available on line as a digital database in ARC/INFO ASCII (Generate) and export formats. The database can be downloaded via 'anonymous ftp' from a USGS system named greenwood.cr.usgs.gov (136.177.48.5). The files are located in a directory named /pub/open-file-reports/ofr-96-0262. This directory contains a text document named 'README.1ST' that contains database technical and explanatory documentation, including instructions for uncompressing the bundled (tar) file. In displaying the compilation it is important to note that the map data set is considered accurate when depicted at a scale of about 1:100,000; displaying the compilation at scales significantly larger than this may result in distortions and (or) mislocations of the data.

  5. Northeast India Helminth Parasite Information Database (NEIHPID): Knowledge Base for Helminth Parasites

    PubMed Central

    Debnath, Manish; Kharumnuid, Graciously; Thongnibah, Welfrank; Tandon, Veena

    2016-01-01

    Most metazoan parasites that invade vertebrate hosts belong to three phyla: Platyhelminthes, Nematoda and Acanthocephala. Many of the parasitic members of these phyla are collectively known as helminths and are causative agents of many debilitating, deforming and lethal diseases of humans and animals. The North-East India Helminth Parasite Information Database (NEIHPID) project aimed to document and characterise the spectrum of helminth parasites in the north-eastern region of India, providing host, geographical distribution, diagnostic characters and image data. The morphology-based taxonomic data are supplemented with information on DNA sequences of nuclear, ribosomal and mitochondrial gene marker regions that aid in parasite identification. In addition, the database contains raw next generation sequencing (NGS) data for 3 foodborne trematode parasites, with more to follow. The database will also provide study material for students interested in parasite biology. Users can search the database at various taxonomic levels (phylum, class, order, superfamily, family, genus, and species), or by host, habitat and geographical location. Specimen collection locations are noted as co-ordinates in a MySQL database and can be viewed on Google maps, using Google Maps JavaScript API v3. The NEIHPID database has been made freely available at http://nepiac.nehu.ac.in/index.php PMID:27285615

  6. Northeast India Helminth Parasite Information Database (NEIHPID): Knowledge Base for Helminth Parasites.

    PubMed

    Biswal, Devendra Kumar; Debnath, Manish; Kharumnuid, Graciously; Thongnibah, Welfrank; Tandon, Veena

    2016-01-01

    Most metazoan parasites that invade vertebrate hosts belong to three phyla: Platyhelminthes, Nematoda and Acanthocephala. Many of the parasitic members of these phyla are collectively known as helminths and are causative agents of many debilitating, deforming and lethal diseases of humans and animals. The North-East India Helminth Parasite Information Database (NEIHPID) project aimed to document and characterise the spectrum of helminth parasites in the north-eastern region of India, providing host, geographical distribution, diagnostic characters and image data. The morphology-based taxonomic data are supplemented with information on DNA sequences of nuclear, ribosomal and mitochondrial gene marker regions that aid in parasite identification. In addition, the database contains raw next generation sequencing (NGS) data for 3 foodborne trematode parasites, with more to follow. The database will also provide study material for students interested in parasite biology. Users can search the database at various taxonomic levels (phylum, class, order, superfamily, family, genus, and species), or by host, habitat and geographical location. Specimen collection locations are noted as co-ordinates in a MySQL database and can be viewed on Google maps, using Google Maps JavaScript API v3. The NEIHPID database has been made freely available at http://nepiac.nehu.ac.in/index.php.

  7. sscMap: an extensible Java application for connecting small-molecule drugs using gene-expression signatures.

    PubMed

    Zhang, Shu-Dong; Gant, Timothy W

    2009-07-31

    Connectivity mapping is a process to recognize novel pharmacological and toxicological properties in small molecules by comparing their gene expression signatures with others in a database. A simple and robust method for connectivity mapping with increased specificity and sensitivity was recently developed, and its utility demonstrated using experimentally derived gene signatures. This paper introduces sscMap (statistically significant connections' map), a Java application designed to undertake connectivity mapping tasks using the recently published method. The software is bundled with a default collection of reference gene-expression profiles based on the publicly available dataset from the Broad Institute Connectivity Map 02, which includes data from over 7000 Affymetrix microarrays, for over 1000 small-molecule compounds, and 6100 treatment instances in 5 human cell lines. In addition, the application allows users to add their custom collections of reference profiles and is applicable to a wide range of other 'omics technologies. The utility of sscMap is two fold. First, it serves to make statistically significant connections between a user-supplied gene signature and the 6100 core reference profiles based on the Broad Institute expanded dataset. Second, it allows users to apply the same improved method to custom-built reference profiles which can be added to the database for future referencing. The software can be freely downloaded from http://purl.oclc.org/NET/sscMap.

  8. Systematization of the protein sequence diversity in enzymes related to secondary metabolic pathways in plants, in the context of big data biology inspired by the KNApSAcK motorcycle database.

    PubMed

    Ikeda, Shun; Abe, Takashi; Nakamura, Yukiko; Kibinge, Nelson; Hirai Morita, Aki; Nakatani, Atsushi; Ono, Naoaki; Ikemura, Toshimichi; Nakamura, Kensuke; Altaf-Ul-Amin, Md; Kanaya, Shigehiko

    2013-05-01

    Biology is increasingly becoming a data-intensive science with the recent progress of the omics fields, e.g. genomics, transcriptomics, proteomics and metabolomics. The species-metabolite relationship database, KNApSAcK Core, has been widely utilized and cited in metabolomics research, and chronological analysis of that research work has helped to reveal recent trends in metabolomics research. To meet the needs of these trends, the KNApSAcK database has been extended by incorporating a secondary metabolic pathway database called Motorcycle DB. We examined the enzyme sequence diversity related to secondary metabolism by means of batch-learning self-organizing maps (BL-SOMs). Initially, we constructed a map by using a big data matrix consisting of the frequencies of all possible dipeptides in the protein sequence segments of plants and bacteria. The enzyme sequence diversity of the secondary metabolic pathways was examined by identifying clusters of segments associated with certain enzyme groups in the resulting map. The extent of diversity of 15 secondary metabolic enzyme groups is discussed. Data-intensive approaches such as BL-SOM applied to big data matrices are needed for systematizing protein sequences. Handling big data has become an inevitable part of biology.
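
    The input matrix described above, the frequencies of all possible dipeptides in protein sequence segments, is easy to reproduce in miniature. A sketch of one row follows, assuming the 20 standard amino acids (400 dipeptides) and relative frequencies; the segmenting scheme used in the study is not reproduced here.

        # Sketch of one row of the BL-SOM input: relative frequencies of all 400 possible
        # dipeptides in a protein segment (20 standard amino acids assumed).
        from itertools import product

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]   # 400 keys

        def dipeptide_frequencies(segment):
            counts = dict.fromkeys(DIPEPTIDES, 0)
            for i in range(len(segment) - 1):
                pair = segment[i:i + 2].upper()
                if pair in counts:                   # skip pairs with non-standard residues
                    counts[pair] += 1
            total = max(sum(counts.values()), 1)
            return [counts[dp] / total for dp in DIPEPTIDES]

        vector = dipeptide_frequencies("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
        print(len(vector), round(sum(vector), 6))    # 400 1.0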

  9. The IUGS/IAGC Task Group on Global Geochemical Baselines

    USGS Publications Warehouse

    Smith, David B.; Wang, Xueqiu; Reeder, Shaun; Demetriades, Alecos

    2012-01-01

    The Task Group on Global Geochemical Baselines, operating under the auspices of both the International Union of Geological Sciences (IUGS) and the International Association of Geochemistry (IAGC), has the long-term goal of establishing a global geochemical database to document the concentration and distribution of chemical elements in the Earth’s surface or near-surface environment. The database and accompanying element distribution maps represent a geochemical baseline against which future human-induced or natural changes to the chemistry of the land surface may be recognized and quantified. In order to accomplish this long-term goal, the activities of the Task Group include: (1) developing partnerships with countries conducting broad-scale geochemical mapping studies; (2) providing consultation and training in the form of workshops and short courses; (3) organizing periodic international symposia to foster communication among the geochemical mapping community; (4) developing criteria for certifying those projects whose data are acceptable in a global geochemical database; (5) acting as a repository for data collected by those projects meeting the criteria for standardization; (6) preparing complete metadata for the certified projects; and (7) preparing, ultimately, a global geochemical database. This paper summarizes the history and accomplishments of the Task Group since its first predecessor project was established in 1988.

  10. The LncRNA Connectivity Map: Using LncRNA Signatures to Connect Small Molecules, LncRNAs, and Diseases.

    PubMed

    Yang, Haixiu; Shang, Desi; Xu, Yanjun; Zhang, Chunlong; Feng, Li; Sun, Zeguo; Shi, Xinrui; Zhang, Yunpeng; Han, Junwei; Su, Fei; Li, Chunquan; Li, Xia

    2017-07-27

    Well-characterized connections among diseases, long non-coding RNAs (lncRNAs) and drugs are important for elucidating the key roles of lncRNAs in biological mechanisms across various biological states. In this study, we constructed a database called LNCmap (LncRNA Connectivity Map), available at http://www.bio-bigdata.com/LNCmap/, to establish the correlations among diseases, physiological processes, and the action of small-molecule therapeutics by attempting to describe all biological states in terms of lncRNA signatures. By reannotating the microarray data from the Connectivity Map database, the LNCmap obtained 237 lncRNA signatures of 5916 instances corresponding to 1262 small-molecule drugs. We provided a user-friendly interface for the convenient browsing, retrieval and download of the database, including detailed information and the associations of drugs and corresponding affected lncRNAs. Additionally, we developed two enrichment analysis methods for users to identify candidate drugs for a particular disease by inputting the corresponding lncRNA expression profiles or an associated lncRNA list and then comparing them to the lncRNA signatures in our database. Overall, LNCmap could significantly improve our understanding of the biological roles of lncRNAs and provide a unique resource to reveal the connections among drugs, lncRNAs and diseases.

  11. Dynamic approximate entropy electroanatomic maps detect rotors in a simulated atrial fibrillation model.

    PubMed

    Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
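
    As a point of reference for the method, a plain approximate entropy implementation is sketched below; the embedding dimension and tolerance are common defaults, not the optimised values derived from the fractionation-labelled databases in the study.

        # Plain approximate entropy (ApEn); m and r are common defaults, not the values
        # optimised in the study.
        import numpy as np

        def approximate_entropy(signal, m=2, r_factor=0.2):
            x = np.asarray(signal, dtype=float)
            n = len(x)
            r = r_factor * np.std(x)

            def phi(m):
                # All length-m templates and their pairwise Chebyshev distances.
                templates = np.array([x[i:i + m] for i in range(n - m + 1)])
                dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                counts = np.mean(dist <= r, axis=1)   # self-matches included, per the definition
                return np.mean(np.log(counts))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        print(approximate_entropy(np.sin(np.linspace(0, 20, 500))))   # low: regular signal
        print(approximate_entropy(rng.standard_normal(500)))          # higher: irregular signal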

  12. The integrated web service and genome database for agricultural plants with biotechnology information.

    PubMed

    Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.

  13. The HISTMAG database: combining historical, archaeomagnetic and volcanic data

    NASA Astrophysics Data System (ADS)

    Arneitz, Patrick; Leonhardt, Roman; Schnepp, Elisabeth; Heilig, Balázs; Mayrhofer, Franziska; Kovacs, Peter; Hejda, Pavel; Valach, Fridrich; Vadasz, Gergely; Hammerl, Christa; Egli, Ramon; Fabian, Karl; Kompein, Niko

    2017-09-01

    Records of the past geomagnetic field can be divided into two main categories. These are instrumental historical observations on the one hand, and field estimates based on the magnetization acquired by rocks, sediments and archaeological artefacts on the other hand. In this paper, a new database combining historical, archaeomagnetic and volcanic records is presented. HISTMAG is a relational database, implemented in MySQL, and can be accessed via a web-based interface (http://www.conrad-observatory.at/zamg/index.php/data-en/histmag-database). It combines available global historical data compilations covering the last ∼500 yr as well as archaeomagnetic and volcanic data collections from the last 50 000 yr. Furthermore, new historical and archaeomagnetic records, mainly from central Europe, have been acquired. In total, 190 427 records are currently available in the HISTMAG database, whereby the majority is related to historical declination measurements (155 525). The original database structure was complemented by new fields, which allow for a detailed description of the different data types. A user-comment function provides the possibility for a scientific discussion about individual records. Therefore, HISTMAG database supports thorough reliability and uncertainty assessments of the widely different data sets, which are an essential basis for geomagnetic field reconstructions. A database analysis revealed systematic offset for declination records derived from compass roses on historical geographical maps through comparison with other historical records, while maps created for mining activities represent a reliable source.

  14. The structure of partially-premixed methane/air flames under varying premixing

    NASA Astrophysics Data System (ADS)

    Kluzek, Celine; Karpetis, Adonios

    2008-11-01

    The present work examines the spatial and scalar structure of laminar, partially premixed methane/air flames with the objective of developing flamelet mappings that capture the effect of varying premixture strength (air addition to the fuel). Experimental databases containing full thermochemistry measurements within laminar axisymmetric flames were obtained at Sandia National Laboratories, and the measurements of all major species and temperature are compared to an opposed-jet, one-dimensional flow simulation using Cantera and the full chemical kinetic mechanism of GRI 3.0. Particular emphasis is placed on the scalar structure of the laminar flames, and the formation of flamelet mappings that capture all of the salient features of thermochemistry in a conserved scalar representation. Three different premixture strengths were examined in detail: equivalence ratios of 1.8, 2.2, and 3.17 resulted in clear differences in the flame scalar structure, particularly in the position of the rich premixed flame zone and the attendant levels of major and intermediate species (carbon monoxide and hydrogen).

  15. Visualization and manipulating the image of a formal data structure (FDS)-based database

    NASA Astrophysics Data System (ADS)

    Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien

    1994-08-01

    A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. One objective of this study is to allow an end-user to alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared to the data structure of Autocad, and the FDS data are converted into an Autocad structure that mirrors the FDS.

  16. Human population, urban settlement patterns and their impact on Plasmodium falciparum malaria endemicity.

    PubMed

    Tatem, Andrew J; Guerra, Carlos A; Kabaria, Caroline W; Noor, Abdisalan M; Hay, Simon I

    2008-10-27

    The efficient allocation of financial resources for malaria control and the optimal distribution of appropriate interventions require accurate information on the geographic distribution of malaria risk and of the human populations it affects. Low population densities in rural areas and high population densities in urban areas can influence malaria transmission substantially. Here, the Malaria Atlas Project (MAP) global database of Plasmodium falciparum parasite rate (PfPR) surveys, medical intelligence and contemporary population surfaces are utilized to explore these relationships and other issues involved in combining malaria risk maps with those of human population distribution in order to define populations at risk more accurately. First, an existing population surface was examined to determine if it was sufficiently detailed to be used reliably as a mask to identify areas of very low and very high population density as malaria free regions. Second, the potential of international travel and health guidelines (ITHGs) for identifying malaria free cities was examined. Third, the differences in PfPR values between surveys conducted in author-defined rural and urban areas were examined. Fourth, the ability of various global urban extent maps to reliably discriminate these author-based classifications of urban and rural in the PfPR database was investigated. Finally, the urban map that most accurately replicated the author-based classifications was analysed to examine the effects of urban classifications on PfPR values across the entire MAP database. Masks of zero population density excluded many non-zero PfPR surveys, indicating that the population surface was not detailed enough to define areas of zero transmission resulting from low population densities. In contrast, the ITHGs enabled the identification and mapping of 53 malaria free urban areas within endemic countries. Comparison of PfPR survey results showed significant differences between author-defined 'urban' and 'rural' designations in Africa, but not for the remainder of the malaria endemic world. The Global Rural Urban Mapping Project (GRUMP) urban extent mask proved most accurate for mapping these author-defined rural and urban locations, and further sub-divisions of urban extents into urban and peri-urban classes enabled the effects of high population densities on malaria transmission to be mapped and quantified. The availability of detailed, contemporary census and urban extent data for the construction of coherent and accurate global spatial population databases is often poor. These known sources of uncertainty in population surfaces and urban maps have the potential to be incorporated into future malaria burden estimates. Currently, insufficient spatial information exists globally to identify areas accurately where population density is low enough to impact upon transmission. Medical intelligence does however exist to reliably identify malaria free cities. Moreover, in Africa, urban areas that have a significant effect on malaria transmission can be mapped.

  17. MAPPER: A personal computer map projection tool

    NASA Technical Reports Server (NTRS)

    Bailey, Steven A.

    1993-01-01

    MAPPER is a set of software tools designed to let users create and manipulate map projections on a personal computer (PC). The capability exists to generate five popular map projections. These include azimuthal, cylindrical, Mercator, Lambert, and sinusoidal projections. Data for projections are contained in five coordinate databases at various resolutions. MAPPER is managed by a system of pull-down windows. This interface allows the user to intuitively create, view and export maps to other platforms.
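
    Two of the five projections have simple closed forms, shown below for a unit-radius sphere with a chosen central meridian; MAPPER's own scaling and datum conventions are not documented here, so treat this as a generic sketch rather than the tool's implementation.

        # Generic sketch of two of the listed projections on a unit sphere; not MAPPER's code.
        import math

        def mercator(lon_deg, lat_deg, central_meridian_deg=0.0):
            lon = math.radians(lon_deg - central_meridian_deg)
            lat = math.radians(lat_deg)
            return lon, math.log(math.tan(math.pi / 4 + lat / 2))

        def sinusoidal(lon_deg, lat_deg, central_meridian_deg=0.0):
            lon = math.radians(lon_deg - central_meridian_deg)
            lat = math.radians(lat_deg)
            return lon * math.cos(lat), lat          # equal-area pseudocylindrical form

        print(mercator(10.0, 51.5))      # x, y in radians of arc on the unit sphere
        print(sinusoidal(10.0, 51.5))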

  18. Long term volcanic hazard analysis in the Canary Islands

    NASA Astrophysics Data System (ADS)

    Becerril, L.; Galindo, I.; Laín, L.; Llorente, M.; Mancebo, M. J.

    2009-04-01

    Historic volcanism in Spain is restricted to the Canary Islands, an archipelago formed by seven volcanic islands. Several historic eruptions have been recorded in the last five hundred years. However, despite the large number of residents and tourists in the archipelago, only a few volcanic hazard studies have been carried out. These studies are mainly focused on developing hazard maps for the islands of Lanzarote and Tenerife, especially for land use planning. The main handicap for these studies in the Canary Islands is the lack of well-reported historical eruptions, as well as the lack of geochronological, geochemical and structural data. In recent years, the use of Geographical Information Systems (GIS) and improvements in the modelling of volcanic processes have provided important tools for volcanic hazard assessment. Although these sophisticated programs are very useful, they need to be fed with a large amount of data that is sometimes, as in the case of the Canary Islands, not available. For this reason, the Spanish Geological Survey (IGME) is developing a complete geo-referenced database for long-term volcanic analysis in the Canary Islands. The Canarian Volcanic Hazard Database (HADA) is based on a GIS that helps organize and manage volcanic information efficiently. HADA includes the following groups of information: (1) 1:25,000-scale geologic maps, (2) 1:25,000 topographic maps, (3) geochronologic data, (4) geochemical data, (5) structural information, and (6) climatic data. Data must pass quality control before they are included in the database, and new data are easily integrated. With the HADA database the IGME has started a systematic organization of the existing data. In the near future, the IGME will generate new information to be included in HADA, such as volcanological maps of the islands, structural information, geochronological data and other information needed for long-term volcanic hazard analysis. HADA will provide information of sufficient quality to map volcanic hazards and to run more reliable volcanic hazard models; in addition, it aims to become a sharing system that improves communication between researchers, reduces redundant work and serves as the reference for geological research in the Canary Islands.

  19. Local Community Verification of Coastal Erosion Risks in the Arctic: Insights from Alaska's North Slope

    NASA Astrophysics Data System (ADS)

    Brady, M.

    2016-12-01

    During his historic trip to Alaska in 2015, U.S. President Barack Obama announced a collaborative effort to update maps of the Arctic region in anticipation of increased maritime access and resource development and to support climate resilience. Included in this effort is development of an Arctic-wide satellite-based digital elevation model (DEM) to provide a baseline to monitor landscape change such as coastal erosion. Focusing in Alaska's North Slope, an objective of this study is to transform emerging Arctic environment spatial data products including the new DEM into information that can support local level planning and decision-making in the face of extreme coastal erosion and related environmental threats. In pursuit of this, in 2016, 4 workshops were held in three North Slope villages highly exposed to coastal erosion. The first workshop with approximately 10 managers in Barrow solicited feedback on an erosion risk database developed in a previous research stage and installed onto the North Slope's planning Web portal. The database includes a physical risk indicator based on factors such as historical erosion and effects of sea ice loss summarized at asset locations. After a demonstration of the database, participants discussed usability aspects such as data reliability. The focus of the mapping workshops in Barrow and two smaller villages Wainwright and Kaktovik was to verify and expand the risk database by interactively mapping erosion observations and community asset impacts. Using coded stickers and paper maps of the shoreline showing USGS erosion rates, a total of 50 participants provided feedback on erosion data accuracy. Approximately 25 of the total 50 participants were elders and hunters who also provided in-depth community risk information. The workshop with managers confirmed physical risk factors used in the risk database, and revealed that the information may be relied upon to support some development decisions and better engage developers about erosion risks. Results from the three mapping workshops revealed that most participants agree that the USGS data are consistent with their observations. Also, in-depth contributions from elders and hunters confirmed that there is a need to monitor loss of specific assets including hunting grounds and historic places and associated community impacts.

  20. Exploiting rice-sorghum synteny for targeted development of EST-SSRs to enrich the sorghum genetic linkage map.

    PubMed

    Ramu, P; Kassahun, B; Senthilvel, S; Ashok Kumar, C; Jayashree, B; Folkertsma, R T; Reddy, L Ananda; Kuruvinashetti, M S; Haussmann, B I G; Hash, C T

    2009-11-01

    The sequencing and detailed comparative functional analysis of genomes of a number of select botanical models open new doors into comparative genomics among the angiosperms, with potential benefits for improvement of many orphan crops that feed large populations. In this study, a set of simple sequence repeat (SSR) markers was developed by mining the expressed sequence tag (EST) database of sorghum. Among the SSR-containing sequences, only those sharing considerable homology with rice genomic sequences across the lengths of the 12 rice chromosomes were selected. Thus, 600 SSR-containing sorghum EST sequences (50 homologous sequences on each of the 12 rice chromosomes) were selected, with the intention of providing coverage for corresponding homologous regions of the sorghum genome. Primer pairs were designed and polymorphism detection ability was assessed using parental pairs of two existing sorghum mapping populations. About 28% of these new markers detected polymorphism in this 4-entry panel. A subset of 55 polymorphic EST-derived SSR markers were mapped onto the existing skeleton map of a recombinant inbred population derived from cross N13 x E 36-1, which is segregating for Striga resistance and the stay-green component of terminal drought tolerance. These new EST-derived SSR markers mapped across all 10 sorghum linkage groups, mostly to regions expected based on prior knowledge of rice-sorghum synteny. The ESTs from which these markers were derived were then mapped in silico onto the aligned sorghum genome sequence, and 88% of the best hits corresponded to linkage-based positions. This study demonstrates the utility of comparative genomic information in targeted development of markers to fill gaps in linkage maps of related crop species for which sufficient genomic tools are not available.
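
    The first step, mining ESTs for simple sequence repeats, can be sketched with a back-referencing regular expression; the motif lengths and minimum repeat counts below are illustrative thresholds, not the criteria used in the study.

        # Minimal SSR (microsatellite) scan of a DNA sequence; thresholds are illustrative.
        import re

        def find_ssrs(seq, min_repeats=5, motif_lengths=(2, 3)):
            # Yields (start, motif, repeat_count) for perfect tandem repeats.
            seq = seq.upper()
            for k in motif_lengths:
                pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (k, min_repeats - 1))
                for m in pattern.finditer(seq):
                    run, motif = m.group(1), m.group(2)
                    yield m.start(), motif, len(run) // k

        for hit in find_ssrs("ATGCAGAGAGAGAGAGAAATTCTTCTTCTTCTTCGG"):
            print(hit)                   # (4, 'AG', 6) then (19, 'TTC', 5)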

  1. Identification and analysis of mutational hotspots in oncogenes and tumour suppressors.

    PubMed

    Baeissa, Hanadi; Benstead-Hume, Graeme; Richardson, Christopher J; Pearl, Frances M G

    2017-03-28

    The key to interpreting the contribution of a disease-associated mutation in the development and progression of cancer is an understanding of the consequences of that mutation both on the function of the affected protein and on the pathways in which that protein is involved. Protein domains encapsulate function, and position-specific, domain-based analyses of mutations have been shown to help elucidate their phenotypes. In this paper we examine the domain biases in oncogenes and tumour suppressors, and find that their domain compositions substantially differ. Using data from over 30 different cancers from whole-exome sequencing cancer genomic projects we mapped over one million mutations to their respective Pfam domains to identify which domains are enriched in any of three different classes of mutation: missense, indels or truncations. Next, we identified the mutational hotspots within domain families by mapping small mutations to equivalent positions in multiple sequence alignments of protein domains. We find that gain-of-function mutations from oncogenes and loss-of-function mutations from tumour suppressors are normally found in different domain families and, when observed in the same domain families, hotspot mutations are located at different positions within the multiple sequence alignment of the domain. By considering hotspots in tumour suppressors and oncogenes independently, we find that there are different specific positions within domain families that are particularly suited to accommodate either a loss- or a gain-of-function mutation. The position is also dependent on the class of mutation. We find rare mutations co-located with well-known functional mutation hotspots, in members of homologous domain superfamilies, and we detect novel mutation hotspots in domain families previously unconnected with cancer. The results of this analysis can be accessed through the MOKCa database (http://strubiol.icr.ac.uk/extra/MOKCa).
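
    The hotspot step, mapping each small mutation from its residue position onto the equivalent column of the domain's multiple sequence alignment and counting recurrences per column, can be sketched as follows. The alignment and mutation list are invented toy data, not MOKCa content.

        # Toy sketch: project per-sequence mutation positions onto alignment columns and
        # count recurrences per column; alignment and mutations are invented for illustration.
        from collections import Counter

        def residue_to_column(aligned_seq):
            # Map 1-based ungapped residue positions to 0-based alignment columns.
            mapping, residue = {}, 0
            for col, ch in enumerate(aligned_seq):
                if ch != "-":
                    residue += 1
                    mapping[residue] = col
            return mapping

        alignment = {                      # hypothetical aligned domain sequences
            "seqA": "MTEY-KLVVVG",
            "seqB": "MTEYAKLVV-G",
        }
        mutations = [("seqA", 5), ("seqB", 6), ("seqA", 5)]   # (sequence id, residue number)

        hotspots = Counter()
        for seq_id, residue in mutations:
            hotspots[residue_to_column(alignment[seq_id])[residue]] += 1
        print(hotspots.most_common())      # alignment columns ranked by recurrent mutations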

  2. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used on this study is based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application to the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective to identify the most affected areas and its spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
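
    The core geoprocessing step, turning geocoded case points into a gridded density surface, can be sketched generically; the abstract does not specify the estimator or bandwidth behind the service, so the Gaussian kernel and all coordinates below are illustrative assumptions.

        # Generic kernel-density sketch for geocoded cases on a regular grid; the estimator,
        # bandwidth and coordinates are illustrative, not the Barcelona service's settings.
        import numpy as np

        def density_grid(cases_xy, xmin, xmax, ymin, ymax, cell=100.0, bandwidth=300.0):
            # Gaussian kernel density of case locations (projected metres) on a regular grid.
            xs = np.arange(xmin, xmax, cell)
            ys = np.arange(ymin, ymax, cell)
            gx, gy = np.meshgrid(xs, ys)
            density = np.zeros_like(gx)
            for x, y in cases_xy:                      # one kernel per geocoded case
                density += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * bandwidth ** 2))
            return xs, ys, density / (2.0 * np.pi * bandwidth ** 2)

        cases = [(430120.0, 4581900.0), (430410.0, 4582100.0), (431050.0, 4581500.0)]
        xs, ys, d = density_grid(cases, 429500, 431500, 4581000, 4583000)
        print(d.shape, float(d.max()))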

  3. Digital mapping techniques '06 - Workshop proceedings

    USGS Publications Warehouse

    Soller, David R.

    2007-01-01

    The Digital Mapping Techniques '06 (DMT'06) workshop was attended by more than 110 technical experts from 51 agencies, universities, and private companies, including representatives from 27 state geological surveys (see Appendix A of these Proceedings). This workshop was similar in nature to the previous nine meetings, which were held in Lawrence, Kansas (Soller, 1997), Champaign, Illinois (Soller, 1998), Madison, Wisconsin (Soller, 1999), Lexington, Kentucky (Soller, 2000), Tuscaloosa, Alabama (Soller, 2001), Salt Lake City, Utah (Soller, 2002), Millersville, Pennsylvania (Soller, 2003), Portland, Oregon (Soller, 2004), and Baton Rouge, Louisiana (Soller, 2005). This year's meeting was hosted by the Ohio Geological Survey, from June 11-14, 2006, on the Ohio State University campus in Columbus, Ohio. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. Each DMT workshop has been coordinated by the Association of American State Geologists (AASG) and U.S. Geological Survey (USGS) Data Capture Working Group, the latter of which was formed in August 1996 to support the AASG and the USGS in their effort to build a National Geologic Map Database (see Soller, this volume, and http://ngmdb.usgs.gov/info/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed for the database - and for the State and Federal geological surveys - to provide more high-quality digital maps to the public. At the 2006 meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, "publishing" includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; and 6) continued development of the National Geologic Map Database.

  4. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods Data used on this study is based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application to the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective to identify the most affected areas and its spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392

  5. Digital mapping techniques '00, workshop proceedings - May 17-20, 2000, Lexington, Kentucky

    USGS Publications Warehouse

    Soller, David R.

    2000-01-01

    Introduction: The Digital Mapping Techniques '00 (DMT'00) workshop was attended by 99 technical experts from 42 agencies, universities, and private companies, including representatives from 28 state geological surveys (see Appendix A). This workshop was similar in nature to the first three meetings, held in June, 1997, in Lawrence, Kansas (Soller, 1997), in May, 1998, in Champaign, Illinois (Soller, 1998a), and in May, 1999, in Madison, Wisconsin (Soller, 1999). This year's meeting was hosted by the Kentucky Geological Survey, from May 17 to 20, 2000, on the University of Kentucky campus in Lexington. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. When, based on discussions at the workshop, an attendee adopts or modifies a newly learned technique, the workshop clearly has met that objective. Evidence of learning and cooperation among participating agencies continued to be a highlight of the DMT workshops (see example in Soller, 1998b, and various papers in this volume). The meeting's general goal was to help move the state geological surveys and the USGS toward development of more cost-effective, flexible, and useful systems for digital mapping and geographic information systems (GIS) analysis. Through oral and poster presentations and special discussion sessions, emphasis was given to: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) continued development of the National Geologic Map Database; 3) progress toward building a standard geologic map data model; 4) field data-collection systems; and 5) map citation and authorship guidelines. Four representatives of the GIS hardware and software vendor community were invited to participate. The four annual DMT workshops were coordinated by the AASG/USGS Data Capture Working Group, which was formed in August, 1996, to support the Association of American State Geologists and the USGS in their effort to build a National Geologic Map Database (see Soller and Berg, this volume, and http://ncgmp.usgs.gov/ngmdbproject/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed to help the Database, and the State and Federal geological surveys, provide more high-quality digital maps to the public.

  6. Standardizing clinical laboratory data for secondary use.

    PubMed

    Abhyankar, Swapna; Demner-Fushman, Dina; McDonald, Clement J

    2012-08-01

    Clinical databases provide a rich source of data for answering clinical research questions. However, the variables recorded in clinical data systems are often identified by local, idiosyncratic, and sometimes redundant and/or ambiguous names (or codes) rather than unique, well-organized codes from standard code systems. This reality discourages research use of such databases, because researchers must invest considerable time in cleaning up the data before they can ask their first research question. Researchers at MIT developed MIMIC-II, a nearly complete collection of clinical data about intensive care patients. Because its data are drawn from existing clinical systems, it has many of the problems described above. In collaboration with the MIT researchers, we have begun a process of cleaning up the data and mapping the variable names and codes to LOINC codes. Our first step, which we describe here, was to map all of the laboratory test observations to LOINC codes. We were able to map 87% of the unique laboratory tests that cover 94% of the total number of laboratory test results. Of the 13% of tests that we could not map, nearly 60% were due to test names whose real meaning could not be discerned and 29% represented tests that were not yet included in the LOINC table. These results suggest that LOINC codes cover most of the laboratory tests used in critical care. We have delivered this work to the MIMIC-II researchers, who have included it in their standard MIMIC-II database release so that researchers who use this database in the future will not have to do this work. Published by Elsevier Inc.
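
    The two coverage figures reported above (percentage of unique tests mapped and percentage of total results covered) reduce to simple counting once a local-name-to-LOINC lookup table exists. The sketch below illustrates that calculation with invented local test names and counts; it is not the MIMIC-II mapping itself.

      # Illustrative only: toy local test names, LOINC codes, and result counts.
      # Computes the two coverage figures described in the abstract:
      # (1) fraction of unique local tests mapped, (2) fraction of total results covered.
      local_to_loinc = {
          "SODIUM": "2951-2",
          "POTASSIUM": "2823-3",
          "HGB": "718-7",
          "MYSTERY_TEST_42": None,   # meaning could not be discerned -> unmapped
      }

      result_counts = {              # number of stored results per local test
          "SODIUM": 52000,
          "POTASSIUM": 51000,
          "HGB": 48000,
          "MYSTERY_TEST_42": 1200,
      }

      mapped = [name for name, code in local_to_loinc.items() if code is not None]
      unique_coverage = len(mapped) / len(local_to_loinc)
      result_coverage = sum(result_counts[n] for n in mapped) / sum(result_counts.values())

      print(f"unique tests mapped: {unique_coverage:.1%}")
      print(f"results covered:     {result_coverage:.1%}")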

  7. New digital magnetic anomaly database for North America

    USGS Publications Warehouse

    Finn, C.A.; Pilkington, M.; Cuevas, A.; Hernandez, I.; Urrutia, J.

    2001-01-01

    The Geological Survey of Canada (GSC), U.S. Geological Survey (USGS), and Consejo de Recursos Minerales of Mexico (CRM) are compiling an upgraded digital magnetic anomaly database and map for North America. This trinational project is expected to be completed by late 2002.

  8. An automated BPM characterization system for LEDA

    NASA Astrophysics Data System (ADS)

    Shurter, R. B.; Gilpatrick, J. D.; Ledford, J.; O'Hara, J.; Power, J.

    1998-12-01

    An automated and highly accurate system for "mapping" 5 cm-diameter beam position monitors (BPMs) used in the Low Energy Demonstrator Accelerator (LEDA) at Los Alamos is described. Two-dimensional data is accumulated from the four micro-stripline electrodes in the probe by sweeping an antenna driven at the LEDA bunching frequency of 350 MHz in discrete steps across the aperture. These data are then used to determine the centroid, first- and third-order sensitivities of the BPM. These probe response coefficients are then embedded in the LEDA control system database to provide normalized beam position information to the operators. A short summary of previous systems we have fielded is given, along with their attributes and deficiencies that had a bearing on this latest design. Lessons learned from this system will, in turn, be used on the next mappers that are currently being designed for 15 cm and 2.5 cm BPMs.
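
    As a rough illustration of how first- and third-order sensitivities could be extracted from such a map, the sketch below fits an odd cubic to simulated antenna positions versus a normalized electrode signal. It is a generic curve fit on assumed data, not the LEDA characterization procedure.

      # Generic sketch: fit position (mm) as an odd cubic function of the normalized
      # electrode signal u = (R - L) / (R + L). Coefficients c1 and c3 play the
      # role of first- and third-order sensitivities. All data below are simulated.
      import numpy as np

      true_positions = np.linspace(-15.0, 15.0, 31)            # mm, swept antenna steps
      u = 0.04 * true_positions + 2e-5 * true_positions**3     # assumed nonlinear response
      u += np.random.default_rng(0).normal(scale=1e-4, size=u.size)  # measurement noise

      # Fit x = c1*u + c3*u**3 (no constant or quadratic term).
      A = np.column_stack([u, u**3])
      (c1, c3), *_ = np.linalg.lstsq(A, true_positions, rcond=None)

      print(f"first-order sensitivity  c1 = {c1:.2f} mm per unit signal")
      print(f"third-order sensitivity  c3 = {c3:.2f} mm per unit signal^3")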

  9. Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.

    PubMed

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.
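
    For scripted rather than interactive access, the same public endpoint quoted above can be reached from Python. The sketch below assumes the PyMySQL package is installed and that the server still accepts the password-free 'public' account described in the abstract.

      # Minimal sketch: connect to the public Nencki Genomics MySQL endpoint
      # quoted in the abstract and list the databases visible to that account.
      # Requires `pip install pymysql`; host and user are taken from the abstract.
      import pymysql

      connection = pymysql.connect(
          host="database.nencki-genomics.org",  # host given in the abstract
          user="public",                        # public, registration-free account
      )
      try:
          with connection.cursor() as cursor:
              cursor.execute("SHOW DATABASES")  # standard MySQL statement
              for (name,) in cursor.fetchall():
                  print(name)
      finally:
          connection.close()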

  10. Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs

    PubMed Central

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456

  11. Remote sensing as tool for development of landslide databases: The case of the Messina Province (Italy) geodatabase

    NASA Astrophysics Data System (ADS)

    Ciampalini, Andrea; Raspini, Federico; Bianchini, Silvia; Frodella, William; Bardi, Federica; Lagomarsino, Daniela; Di Traglia, Federico; Moretti, Sandro; Proietti, Chiara; Pagliara, Paola; Onori, Roberta; Corazza, Angelo; Duro, Andrea; Basile, Giuseppe; Casagli, Nicola

    2015-11-01

    Landslide geodatabases, including inventories and thematic data, today are fundamental tools for national and/or local authorities in susceptibility, hazard and risk management. A well-organized landslide geodatabase contains different kinds of data such as past information (landslide inventory maps), ancillary data and updated remote sensing (space-borne and ground based) data, which can be integrated in order to produce landslide susceptibility maps, updated landslide inventory maps and hazard and risk assessment maps. Italy is strongly affected by landslide phenomena which cause casualties and significant economic damage to buildings and infrastructure, loss of productive soils and pasture lands. In particular, the Messina Province (southern Italy) represents an area where landslides are recurrent and characterized by high magnitude, due to several predisposing factors (e.g. morphology, land use, lithologies) and different triggering mechanisms (meteorological conditions, seismicity, active tectonics and volcanic activity). For this area, a geodatabase was created by using different monitoring techniques, including remote sensing (e.g. SAR satellite ERS1/2, ENVISAT, RADARSAT-1, TerraSAR-X, COSMO-SkyMed) data, and in situ measurements (e.g. GBInSAR, damage assessment). In this paper a complete landslide geodatabase of the Messina Province, designed following the requirements of the local and national Civil Protection authorities, is presented. This geodatabase was used to produce maps (e.g. susceptibility, ground deformation velocities, damage assessment, risk zonation) which today are constantly used by the Civil Protection authorities to manage the landslide hazard of the Messina Province.

  12. SIDD: A Semantically Integrated Database towards a Global View of Human Disease

    PubMed Central

    Cheng, Liang; Wang, Guohua; Li, Jie; Zhang, Tianjiao; Xu, Peigang; Wang, Yadong

    2013-01-01

    Background A number of databases have been developed to collect disease-related molecular, phenotypic and environmental features (DR-MPEs), such as genes, non-coding RNAs, genetic variations, drugs, phenotypes and environmental factors. However, each of the current databases focuses on only one or two DR-MPEs. There is an urgent demand to develop an integrated database, which can establish semantic associations among disease-related databases and link them to provide a global view of human disease at the biological level. This database, once developed, will enable researchers to query various DR-MPEs through disease, and to investigate disease mechanisms from different types of data. Methodology To establish an integrated disease-associated database, disease vocabularies used in different databases are mapped to Disease Ontology (DO) through semantic match. In total, 4,284 and 4,186 disease terms from Medical Subject Headings (MeSH) and Online Mendelian Inheritance in Man (OMIM), respectively, are mapped to DO. Then, the relationships between DR-MPEs and diseases are extracted and merged from different source databases to reduce data redundancy. Conclusions A semantically integrated disease-associated database (SIDD) is developed, which integrates 18 disease-associated databases, for researchers to browse multiple types of DR-MPEs in a single view. A web interface allows easy navigation for querying information through browsing a disease ontology tree or searching a disease term. Furthermore, a network visualization tool using the Cytoscape Web plugin has been implemented in SIDD. This enhances the usability of SIDD when viewing the relationships between diseases and DR-MPEs. The current version of SIDD (Jul 2013) documents 4,465,131 entries relating to 139,365 DR-MPEs, and to 3,824 human diseases. The database can be freely accessed from: http://mlg.hit.edu.cn/SIDD. PMID:24146757

  13. Digital Mapping Techniques '10-Workshop Proceedings, Sacramento, California, May 16-19, 2010

    USGS Publications Warehouse

    Soller, David R.; Soller, David R.

    2012-01-01

    The Digital Mapping Techniques '10 (DMT'10) workshop was attended by 110 technical experts from 40 agencies, universities, and private companies, including representatives from 19 State geological surveys (see Appendix A). This workshop, hosted by the California Geological Survey, May 16-19, 2010, in Sacramento, California, was similar in nature to the previous 13 meetings (see Appendix B). The meeting was coordinated by the U.S. Geological Survey's (USGS) National Geologic Map Database project. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was again successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products ("publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  14. Preliminary integrated geologic map databases for the United States: Digital data for the geology of southeast Alaska

    USGS Publications Warehouse

    Gehrels, George E.; Berg, Henry C.

    2006-01-01

    The growth in the use of Geographic Information Systems (GIS) has highlighted the need for digital geologic maps that have been attributed with information about age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. This report is part of a series of integrated geologic map databases that cover the entire United States. Three national-scale geologic maps that portray most or all of the United States already exist; for the conterminous U.S., King and Beikman (1974a,b) compiled a map at a scale of 1:2,500,000, Beikman (1980) compiled a map for Alaska at 1:2,500,000 scale, and for the entire U.S., Reed and others (2005a,b) compiled a map at a scale of 1:5,000,000. A digital version of the King and Beikman map was published by Schruben and others (1994). Reed and Bush (2004) produced a digital version of the Reed and others (2005a) map for the conterminous U.S. The present series of maps is intended to provide the next step in increased detail. State geologic maps that range in scale from 1:100,000 to 1:1,000,000 are available for most of the country, and digital versions of these state maps are the basis of this product. The digital geologic maps presented here are in a standardized format as ARC/INFO export files and as ArcView shape files. Data tables that relate the map units to detailed lithologic and age information accompany these GIS files. The map is delivered as a set of 1:250,000-scale quadrangle files. To the best of our ability, these quadrangle files are edge-matched with respect to geology. When the maps are merged, the combined attribute tables can be used directly with the merged maps to make derivative maps.

  15. Detecting Spatial Patterns of Natural Hazards from the Wikipedia Knowledge Base

    NASA Astrophysics Data System (ADS)

    Fan, J.; Stewart, K.

    2015-07-01

    The Wikipedia database is a data source of immense richness and variety. Included in this database are thousands of geotagged articles, including, for example, almost real-time updates on current and historic natural hazards. This includes user-contributed information about the location of natural hazards, the extent of the disasters, and many details relating to response, impact, and recovery. In this research, a computational framework is proposed to detect spatial patterns of natural hazards from the Wikipedia database by combining topic modeling methods with spatial analysis techniques. The computation is performed on the Neon Cluster, a high-performance computing cluster at the University of Iowa. This work uses wildfires as the exemplar hazard, but the framework is easily generalizable to other types of hazards, such as hurricanes or flooding. A Latent Dirichlet Allocation (LDA) model is first trained on the entire English Wikipedia dump, transforming the database dump into a 500-dimensional topic model. Over 230,000 geo-tagged articles are then extracted from the Wikipedia database, spatially covering the contiguous United States. The geo-tagged articles are converted into the LDA topic space based on the topic model, with each article represented as a weighted multidimensional topic vector. By treating each article's topic vector as an observed point in geographic space, a probability surface is calculated for each of the topics. In this work, Wikipedia articles about wildfires are extracted from the Wikipedia database, forming a wildfire corpus and creating a basis for the topic vector analysis. The spatial distribution of wildfire outbreaks in the US is estimated by calculating the weighted sum of the topic probability surfaces using a map algebra approach, and mapped using GIS. To provide an evaluation of the approach, the estimation is compared to wildfire hazard potential maps created by the USDA Forest Service.
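
    The pipeline sketched above (train a topic model, represent each geotagged article as a topic-weight vector, then combine weighted per-topic values) can be outlined in a few lines. The toy example below uses scikit-learn's LDA on a four-document corpus with invented topic weights; it mirrors the workflow only in outline, not the paper's 500-topic Wikipedia model.

      # Toy outline of the approach: LDA topic vectors for documents, then a
      # weighted combination of per-topic values. Corpus, topic count, and the
      # "wildfire relevance" weights are illustrative, not the paper's values.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = [
          "wildfire burned forest acres containment crews",
          "hurricane landfall storm surge evacuation",
          "drought heat wildfire smoke air quality",
          "river flooding levee rainfall damage",
      ]

      counts = CountVectorizer().fit_transform(docs)
      lda = LatentDirichletAllocation(n_components=3, random_state=0)
      doc_topics = lda.fit_transform(counts)      # one topic-weight vector per document

      # Assumed per-topic relevance weights (in the paper these come from the
      # wildfire article corpus rather than being set by hand).
      topic_weights = np.array([0.7, 0.1, 0.2])

      # Scalar "wildfire score" per document; spatially, each document's score
      # would be spread over a probability surface and summed with map algebra.
      scores = doc_topics @ topic_weights
      print(scores)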

  16. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

    This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/. CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.

  17. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    PubMed

    Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G

    2009-09-16

    Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications, which, in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone to those they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
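
    Frameworks of this kind accept GPS-tagged records from the phone client over HTTP. The sketch below shows the general shape of such a submission using a placeholder endpoint and field names; it is not the actual EpiCollect API, which the abstract does not specify.

      # Hypothetical submission of one GPS-tagged field record to a central web
      # database. URL and field names are placeholders, not EpiCollect's API.
      import requests

      record = {
          "project": "demo_epidemiology_survey",
          "latitude": 51.4988,        # GPS fix from the phone
          "longitude": -0.1749,
          "timestamp": "2009-09-16T10:30:00Z",
          "variable": "case_reported",
          "value": "1",
      }

      resp = requests.post("https://example.org/api/records", json=record, timeout=30)
      resp.raise_for_status()
      print("stored record id:", resp.json().get("id"))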

  18. Spatial databases of the Humboldt Basin mineral resource assessment, northern Nevada

    USGS Publications Warehouse

    Mihalasky, Mark J.; Moyer, Lorre A.

    2004-01-01

    This report describes the origin, generation, and format of tract map databases for deposit types that accompany the metallic mineral resource assessment for the Humboldt River Basin, northern Nevada, (Wallace and others, 2004, Chapter 2). The deposit types include pluton-related polymetallic, sedimentary rock-hosted Au-Ag, and epithermal Au-Ag. The tract maps constitute only part of the assessment, which also includes new research and data for northern Nevada, discussions on land classification, and interpretation of the assessment maps. The purpose of the assessment was to identify areas that may have a greater favorability for undiscovered metallic mineral deposits, provide analysis of the mineral-resource favorability, and present the assessment of the Humboldt River basin and adjacent areas in a digital format using a Geographic Information System (GIS).

  19. Application of an adaptive neuro-fuzzy inference system to ground subsidence hazard mapping

    NASA Astrophysics Data System (ADS)

    Park, Inhye; Choi, Jaewon; Jin Lee, Moung; Lee, Saro

    2012-11-01

    We constructed hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok City, Korea, using an adaptive neuro-fuzzy inference system (ANFIS) and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, and ground subsidence maps. An attribute database was also constructed from field investigations and reports on existing ground subsidence areas at the study site. Five major factors causing ground subsidence were extracted: (1) depth of drift; (2) distance from drift; (3) slope gradient; (4) geology; and (5) land use. The adaptive ANFIS model with different types of membership functions (MFs) was then applied for ground subsidence hazard mapping in the study area. Two ground subsidence hazard maps were prepared using the different MFs. Finally, the resulting ground subsidence hazard maps were validated using the ground subsidence test data which were not used for training the ANFIS. The validation results showed 95.12% accuracy using the generalized bell-shaped MF model and 94.94% accuracy using the Sigmoidal2 MF model. These accuracy results show that an ANFIS can be an effective tool in ground subsidence hazard mapping. Analysis of ground subsidence with the ANFIS model suggests that quantitative analysis of ground subsidence near AUCMs is possible.
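
    The two membership-function families compared above are standard ones. For reference, the generalized bell-shaped MF and a simple sigmoidal MF can be written as below; the parameter values are arbitrary illustrations, not the fitted ANFIS parameters from the study.

      # Standard fuzzy membership functions of the kinds named in the abstract.
      # Parameter values are illustrative only.
      import numpy as np

      def gbell_mf(x, a, b, c):
          """Generalized bell-shaped MF: 1 / (1 + |(x - c) / a|^(2b))."""
          return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

      def sigmoid_mf(x, a, c):
          """Sigmoidal MF: 1 / (1 + exp(-a * (x - c)))."""
          return 1.0 / (1.0 + np.exp(-a * (x - c)))

      x = np.linspace(0.0, 100.0, 5)   # e.g. distance from drift, arbitrary units
      print(gbell_mf(x, a=20.0, b=2.0, c=50.0))
      print(sigmoid_mf(x, a=0.1, c=50.0))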

  20. Bedrock geologic map of the Nashua South quadrangle, Hillsborough County, New Hampshire, and Middlesex County, Massachusetts

    USGS Publications Warehouse

    Walsh, Gregory J.; Jahns, Richard H.; Aleinikoff, John N.

    2013-01-01

    The bedrock geology of the 7.5-minute Nashua South quadrangle consists primarily of deformed Silurian metasedimentary rocks of the Berwick Formation. The metasedimentary rocks are intruded by a Late Silurian to Early Devonian diorite-gabbro suite, Devonian rocks of the Ayer Granodiorite, Devonian granitic rocks of the New Hampshire Plutonic Suite including pegmatite and the Chelmsford Granite, and Jurassic diabase dikes. The bedrock geology was mapped to study the tectonic history of the area and to provide a framework for ongoing hydrogeologic characterization of the fractured bedrock of Massachusetts and New Hampshire. This report presents mapping by G.J. Walsh and R.H. Jahns and zircon U-Pb geochronology by J.N. Aleinikoff. The complete report consists of a map, text pamphlet, and GIS database. The map and text pamphlet are only available as downloadable files (see frame at right). The GIS database is available for download in ESRI™ shapefile and Google Earth™ formats, and includes contacts of bedrock geologic units, faults, outcrops, structural geologic information, photographs, and a three-dimensional model.

  1. Feasibility of Smartphone Based Photogrammetric Point Clouds for the Generation of Accessibility Maps

    NASA Astrophysics Data System (ADS)

    Angelats, E.; Parés, M. E.; Kumar, P.

    2018-05-01

    Accessible cities with accessible services are a long-standing demand of people with reduced mobility. But this demand is still far from becoming a reality, as a great deal of work remains to be done. The first step towards accessible cities is to know the real situation of the cities and their pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessibility maps or databases. This paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. The paper proves, through two test cases, that smartphone technology is an economical and feasible solution to obtain the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps through the use of point clouds derived from crowdsourced smartphone imagery.

  2. The integrated web service and genome database for agricultural plants with biotechnology information

    PubMed Central

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  3. Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques

    NASA Technical Reports Server (NTRS)

    Gardner, Margaret E.; Roberts, Dar A.; Funk, Chris; Noronha, Val

    2001-01-01

    Accurate road location and condition information are of primary importance in road infrastructure management. Additionally, spatially accurate and up-to-date road networks are essential in ambulance and rescue dispatch in emergency situations. However, accurate road infrastructure databases do not exist for vast areas, particularly in areas with rapid expansion. Currently, the US Department of Transportation (USDOT) expends great effort in field Global Positioning System (GPS) mapping and condition assessment to meet these informational needs. This methodology, though effective, is both time-consuming and costly, because every road within a DOT's jurisdiction must be field-visited to obtain accurate information. Therefore, the USDOT is interested in identifying new technologies that could help meet road infrastructure informational needs more effectively. Remote sensing provides one means by which large areas may be mapped with a high standard of accuracy and is a technology with great potential in infrastructure mapping. The goal of our research is to develop accurate road extraction techniques using high spatial resolution, fine spectral resolution imagery. Additionally, our research will explore the use of hyperspectral data in assessing road quality. Finally, this research aims to define the spatial and spectral requirements for remote sensing data to be used successfully for road feature extraction and road quality mapping. Our findings will assist the USDOT in assessing remote sensing as a new resource in infrastructure studies.

  4. Characterizing and Mapping of Ecosystem Services (CMESs) Literature Database Version 1.0

    EPA Science Inventory

    Ecosystem services (ESs) represent an ecosystem’s capacity for satisfying essential human needs, directly or indirectly, above that required to maintain ecosystem integrity (structure, function and processes). The spatial characterization and mapping of ESs is an essential first ...

  5. ExpEdit: a webserver to explore human RNA editing in RNA-Seq experiments.

    PubMed

    Picardi, Ernesto; D'Antonio, Mattia; Carrabino, Danilo; Castrignanò, Tiziana; Pesole, Graziano

    2011-05-01

    ExpEdit is a web application for assessing RNA editing in human at known or user-specified sites supported by transcript data obtained by RNA-Seq experiments. Mapping data (in SAM/BAM format) or directly sequence reads [in FASTQ/short read archive (SRA) format] can be provided as input to carry out a comparative analysis against a large collection of known editing sites collected in DARNED database as well as other user-provided potentially edited positions. Results are shown as dynamic tables containing University of California, Santa Cruz (UCSC) links for a quick examination of the genomic context. ExpEdit is freely available on the web at http://www.caspur.it/ExpEdit/.

  6. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~ 845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of sediment of dunes are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. It was also beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated, as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in an ArcMap project. The ArcMap project allows fuller use of the data, but requires ESRI ArcMap® software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format.
A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. (NP_Documentation folder) Documentation files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel and ASCII (.txt) formats.
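
    The crater-centroid-to-dune-field-centroid azimuth mentioned above is a standard initial-bearing calculation. A minimal sketch, using a spherical approximation and illustrative coordinates, is shown below.

      # Initial bearing (azimuth, degrees clockwise from north) from a crater
      # centroid to a dune-field centroid on a sphere. Coordinates are made up.
      import math

      def azimuth_deg(lat1, lon1, lat2, lon2):
          """Great-circle initial bearing from point 1 to point 2, in degrees."""
          phi1, phi2 = math.radians(lat1), math.radians(lat2)
          dlon = math.radians(lon2 - lon1)
          y = math.sin(dlon) * math.cos(phi2)
          x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
          return math.degrees(math.atan2(y, x)) % 360.0

      crater_centroid = (76.5, 290.0)   # lat, lon (illustrative)
      dune_centroid = (76.7, 291.2)
      print(azimuth_deg(*crater_centroid, *dune_centroid))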

  7. SoyBase, The USDA-ARS Soybean Genetics and Genomics Database

    USDA-ARS?s Scientific Manuscript database

    SoyBase, the USDA-ARS soybean genetic database, is a comprehensive repository for professionally curated genetics, genomics and related data resources for soybean. SoyBase contains the most current genetic, physical and genomic sequence maps integrated with qualitative and quantitative traits. The...

  8. A Toposcopic Investigation of Brain Electrical Activity Induced by Motion Sickness

    DTIC Science & Technology

    1992-12-01

    This hypothesis explains motion sickness symptoms as the body’s natural response when the information transmitted by the eyes, the vestibular system...consisting of the summed pixel values of their respective sets. Each of these images is then converted to a map of the mean values and a map of the variances ...Statistical mapping requires a sizable normative database of maps, a significant investment of resources (11:25). Location-by-location comparisons be

  9. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
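
    Part of CouchDB's appeal for such utilities is that every document is reachable over a plain HTTP, REST-style interface (GET /<database>/<doc_id> returns JSON). The sketch below fetches one document using a placeholder host, database, and document id rather than the specific resources described in the paper.

      # CouchDB stores JSON documents and serves them over an HTTP API:
      # GET http://<host>:5984/<database>/<doc_id> returns the document as JSON.
      # Host, database, and document id below are placeholders.
      import requests

      COUCH_URL = "http://localhost:5984"   # default CouchDB port
      database = "genes"                    # hypothetical database name
      doc_id = "TP53"                       # hypothetical document id

      resp = requests.get(f"{COUCH_URL}/{database}/{doc_id}", timeout=10)
      resp.raise_for_status()
      doc = resp.json()
      print(doc.get("_id"), doc.get("_rev"))  # every CouchDB document carries _id and _rev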

  10. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflict in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  11. Database for the Geologic Map of Upper Eocene to Holocene Volcanic and Related Rocks of the Cascade Range, Oregon

    USGS Publications Warehouse

    Nimz, Kathryn; Ramsey, David W.; Sherrod, David R.; Smith, James G.

    2008-01-01

    Since 1979, Earth scientists of the Geothermal Research Program of the U.S. Geological Survey have carried out multidisciplinary research in the Cascade Range. The goal of this research is to understand the geology, tectonics, and hydrology of the Cascades in order to characterize and quantify geothermal resource potential. A major goal of the program is compilation of a comprehensive geologic map of the entire Cascade Range that incorporates modern field studies and that has a unified and internally consistent explanation. This map is one of three in a series that shows Cascade Range geology by fitting published and unpublished mapping into a province-wide scheme of rock units distinguished by composition and age; map sheets of the Cascade Range in Washington (Smith, 1993) and California will complete the series. The complete series forms a guide to exploration and evaluation of the geothermal resources of the Cascade Range and will be useful for studies of volcano hazards, volcanology, and tectonics. This digital release contains all the information used to produce the geologic map published as U.S. Geological Survey Geologic Investigations Series I-2569 (Sherrod and Smith, 2000). The main component of this digital release is a geologic map database prepared using ArcInfo GIS. This release also contains files to view or print the geologic map and accompanying descriptive pamphlet from I-2569.

  12. Sensitivity and specificity for detecting early glaucoma in eyes with high myopia from normative database of macular ganglion cell complex thickness obtained from normal non-myopic or highly myopic Asian eyes.

    PubMed

    Nakanishi, Hideo; Akagi, Tadamichi; Hangai, Masanori; Kimura, Yugo; Suda, Kenji; Kumagai, Kyoko Kawashima; Morooka, Satoshi; Ikeda, Hanako Ohashi; Yoshimura, Nagahisa

    2015-07-01

    We aimed to determine the sensitivity and specificity of the normative database of non-myopic and highly myopic eyes of the macular ganglion cell complex (mGCC) thickness embedded in the NIDEK RS-3000 spectral-domain optical coherence tomography (SD-OCT) for detecting early glaucoma in highly myopic eyes. Forty-seven highly myopic eyes (axial length ≥26.0 mm) of 47 subjects were studied. The SD-OCT images were used to determine the mGCC thickness within a 9-mm diameter circle centered on the fovea. The sensitivity and specificity of the non-myopic database were compared to those of the highly myopic database for distinguishing the early glaucomatous eyes from the non-glaucomatous eyes. The mGCC scans were classified as abnormal if at least one of the eight sectors of the significance map was < 1 % of the normative thickness. Twenty-one eyes were diagnosed to be non-glaucomatous and 26 eyes to have early glaucoma. The average mGCC thickness was significantly thinner (80.9 ± 8.5 μm) in the early glaucoma group than in the non-glaucomatous group (91.2 ± 7.5 μm; p < 1 × 10(-4)). The sensitivity was 96.2 % and specificity was 47.6 % when the non-myopic database was used, and the sensitivity was 92.3 % and the specificity was 90.5 % when the highly myopic database was used. The difference in the specificity was significant (p < 0.01). The significantly higher specificity of the myopic normative database for detecting early glaucoma in highly myopic eyes will lead to fewer false positive diagnoses. The database obtained from highly myopic eyes should be used when evaluating the mGCC thickness of highly myopic eyes.
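
    As a worked check of the figures quoted for the highly myopic normative database, sensitivity and specificity follow directly from confusion-matrix counts. The counts below are inferred from the reported group sizes and percentages (26 glaucomatous and 21 non-glaucomatous eyes), so they are a reconstruction for illustration rather than values taken from the paper's tables.

      # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
      # Counts inferred from the abstract: 24 of 26 glaucomatous eyes flagged
      # abnormal, 19 of 21 non-glaucomatous eyes classified normal.
      tp, fn = 24, 2     # glaucomatous eyes: detected vs missed
      tn, fp = 19, 2     # non-glaucomatous eyes: correctly normal vs false positives

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"sensitivity = {sensitivity:.1%}")   # ~92.3%
      print(f"specificity = {specificity:.1%}")   # ~90.5%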

  13. Mapping Shoreline Change Using Digital Orthophotogrammetry on Maui, Hawaii

    USGS Publications Warehouse

    Fletcher, C.; Rooney, J.; Barbee, M.; Lim, S.-C.; Richmond, B.

    2003-01-01

    Digital, aerial orthophotomosaics with 0.5-3.0 m horizontal accuracy, used with NOAA topographic maps (T-sheets), document past shoreline positions on Maui Island, Hawaii. Outliers in the shoreline position database are determined using a least median of squares regression. Least squares linear regression of the reweighted data (outliers excluded) is used to determine a shoreline trend termed the reweighted linear squares (RLS). To determine the annual erosion hazard rate (AEHR) for use by shoreline managers, the RLS data is smoothed in the longshore direction using a weighted moving average five transects wide with the smoothed rate applied to the center transect. Weightings within each five transect group are 1,3,5,3,1. AEHRs (smoothed RLS values) are plotted on a 1:3000 map series for use by shoreline managers and planners. These maps are displayed on the web for public reference at http://www.co.maui.hi.us/departments/Planning/erosion.htm. An end-point rate of change is also calculated using the earliest T-sheet and the latest collected shoreline (1997 or 2002). The resulting database consists of 3565 separate erosion rates spaced every 20 m along 90 km of sandy shoreline. Three regions are analyzed: Kihei, West Maui, and North Shore coasts. The Kihei Coast has an average AEHR of about 0.3 m/yr, an end point rate (EPR) of 0.2 m/yr, 2.8 km of beach loss and 19 percent beach narrowing in the period 1949-1997. Over the same period the West Maui coast has an average AEHR of about 0.2 m/yr, an average EPR of about 0.2 m/yr, about 4.5 km of beach loss and 25 percent beach narrowing. The North Shore has an average AEHR of about 0.4 m/yr, an average EPR of about 0.3 m/yr, 0.8 km of beach loss and 15 percent beach narrowing. The mean, island-wide EPR of eroding shorelines is 0.24 m/yr and the average AEHR of eroding shorelines is about 0.3 m/yr. The overall shoreline change rate, erosion and accretion included, as measured using the unsmoothed RLS technique is 0.21 m/yr. Island-wide changes in beach width show a 19 percent decrease over the period 1949/1950 to 1997/2002. Island-wide, about 8 km of dry beach has been lost since 1949 (i.e., high water against hard engineering structures and natural rock substrate).
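
    The longshore smoothing described above is a five-transect weighted moving average with weights 1, 3, 5, 3, 1 applied to the center transect. The sketch below reproduces that filter on an illustrative series of RLS rates.

      # Five-transect weighted moving average with weights 1,3,5,3,1 (normalized),
      # as described for the AEHR smoothing. The RLS rates below are illustrative.
      import numpy as np

      rls_rates = np.array([0.10, 0.35, 0.22, 0.41, 0.30, 0.18, 0.27])  # m/yr, made up
      weights = np.array([1, 3, 5, 3, 1], dtype=float)
      weights /= weights.sum()

      # 'valid' keeps only transects with a full five-transect window; the two
      # transects at each end would need a separate rule (not given in the abstract).
      smoothed = np.convolve(rls_rates, weights, mode="valid")
      print(smoothed)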

  14. An editor for pathway drawing and data visualization in the Biopathways Workbench.

    PubMed

    Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar

    2009-10-02

    Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.

  15. Star-Mapping Tools Enable Tracking of Endangered Animals

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Software programmer Jason Holmberg of Portland, Oregon, partnered with a Goddard Space Flight Center astrophysicist to develop a method for tracking the elusive whale shark using the unique spot patterns on the fish's skin. Employing a star-mapping algorithm originally designed for the Hubble Space Telescope, Holmberg created the Shepherd Project, a photograph database and pattern-matching system that can identify whale sharks by their spots and match images contributed to the database by photographers from around the world. The system has been adapted for tracking other rare and endangered animals, including polar bears and ocean sunfish.

  16. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System

    PubMed Central

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-01-01

    The main approach for a Wi-Fi indoor positioning system is based on the received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, and building such an RSS fingerprint database requires a lot of time and effort. As the range of the indoor environment becomes larger, the required labor increases. To provide better indoor positioning services and to reduce the labor required for the establishment of the positioning system at the same time, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced on average by 17.9% compared to that without Kriging. PMID:26343673

  17. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    PubMed

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on the received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, and building such an RSS fingerprint database requires a lot of time and effort. As the range of the indoor environment becomes larger, the required labor increases. To provide better indoor positioning services and to reduce the labor required for the establishment of the positioning system at the same time, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced on average by 17.9% compared to that without Kriging.
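
    A compact way to see what Kriging interpolation of the RSS database involves is the ordinary-kriging system written out directly, as in the sketch below with a simple linear semivariogram and made-up reference-point data. It is a didactic illustration, not the variogram model or implementation used by the authors.

      # Ordinary kriging of RSS values at unsurveyed points from a handful of
      # surveyed reference points (RPs). Linear semivariogram, illustrative data.
      import numpy as np

      def ordinary_kriging(xy_known, z_known, xy_query, gamma=lambda h: h):
          """Estimate z at xy_query from (xy_known, z_known) by ordinary kriging."""
          n = len(xy_known)
          h = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)
          a = np.ones((n + 1, n + 1))
          a[:n, :n] = gamma(h)          # semivariances between known points
          a[n, n] = 0.0                 # Lagrange-multiplier row/column
          estimates = []
          for q in np.atleast_2d(xy_query):
              b = np.ones(n + 1)
              b[:n] = gamma(np.linalg.norm(xy_known - q, axis=1))
              w = np.linalg.solve(a, b)
              estimates.append(w[:n] @ z_known)
          return np.array(estimates)

      rp_xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # surveyed RPs (m)
      rp_rss = np.array([-45.0, -52.0, -50.0, -60.0])                     # dBm, made up
      grid = np.array([[2.5, 2.5], [1.0, 4.0]])                           # unsurveyed points
      print(ordinary_kriging(rp_xy, rp_rss, grid))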

  18. Unconsolidated Aquifers in Tompkins County, New York

    USGS Publications Warehouse

    Miller, Todd S.

    2000-01-01

    Unconsolidated aquifers consisting of saturated sand and gravel are capable of supplying large quantities of good-quality water to wells in Tompkins County, but little published geohydrologic information on such aquifers is available. In 1986, the U.S. Geological Survey (USGS) began collecting geohydrologic information and well data to construct an aquifer map showing the extent of unconsolidated aquifers in Tompkins County. Data sources included (1) water-well drillers' logs; (2) highway and other construction test-boring logs; (3) well data gathered by the Tompkins County Department of Health; (4) test-well logs from geohydrologic consultants that conducted projects for site-specific studies; and (5) well data that had been collected during past investigations by the USGS and entered into the National Water Information System (NWIS) database. In 1999, the USGS, in cooperation with the Tompkins County Department of Planning, compiled these data to construct this map. More than 600 well records were entered into the NWIS database in 1999 to supplement the 350 well records already in the database; this provided a total of 950 well records. The data were digitized and imported into a geographic information system (GIS) coverage so that well locations could be plotted on a map, and well data could be tabulated in a digital database through ARC/INFO software. Data on the surficial geology were used with geohydrologic data from well records and previous studies to delineate the extent of aquifers on this map. This map depicts (1) the extent of unconsolidated aquifers in Tompkins County, and (2) locations of wells whose records were entered into the USGS NWIS database and made into a GIS digital coverage. The hydrologic information presented here is generalized and is not intended for detailed site evaluations. Precise locations of geohydrologic-unit boundaries, and a description of the hydrologic conditions within the units, would require additional detailed, site-specific information.

  19. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth and ground based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system including data, logic and presentation tier. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is employed by the PostgreSQL database. This object-oriented database was chosen over a relational database to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local areas. While the spatial database hinders processing raster data, a "work-around" was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO19115 standard. XML structured information of the SLD and metadata are stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
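
    The retrieval pattern described above (all data in a specific administrative unit belonging to a specific theme) maps naturally onto a spatial join in PostgreSQL/PostGIS. The sketch below illustrates that pattern with invented table and column names; only the ST_Intersects function and the psycopg2 usage are standard, and nothing here reflects the actual WISDOM schema.

      # Hypothetical spatial query: datasets of one theme whose geometry intersects
      # a named administrative unit. Table and column names are invented; only the
      # PostGIS function ST_Intersects and the psycopg2 usage pattern are standard.
      import psycopg2

      conn = psycopg2.connect("dbname=wisdom_demo user=reader")  # placeholder DSN
      query = """
          SELECT d.id, d.title
          FROM datasets AS d
          JOIN admin_units AS a ON ST_Intersects(d.geom, a.geom)
          WHERE a.name = %s AND d.theme = %s;
      """
      with conn, conn.cursor() as cur:
          cur.execute(query, ("Can Tho", "water_management"))
          for dataset_id, title in cur.fetchall():
              print(dataset_id, title)
      conn.close()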

  20. Bedrock geologic map of the Uxbridge quadrangle, Worcester County, Massachusetts, and Providence County, Rhode Island

    USGS Publications Warehouse

    Walsh, Gregory J.

    2014-01-01

    The bedrock geology of the 7.5-minute Uxbridge quadrangle consists of Neoproterozoic metamorphic and igneous rocks of the Avalon zone. In this area, rocks of the Avalon zone lie within the core of the Milford antiform, south and east of the terrane-bounding Bloody Bluff fault zone. Permian pegmatite dikes and quartz veins occur throughout the quadrangle. The oldest metasedimentary rocks include the Blackstone Group, which represents a Neoproterozoic peri-Gondwanan marginal shelf sequence. The metasedimentary rocks are intruded by Neoproterozoic arc-related plutonic rocks of the Rhode Island batholith. This report presents mapping by G.J. Walsh. The complete report consists of a map, text pamphlet, and GIS database. The map and text pamphlet are available only as downloadable files (see frame at right). The GIS database is available for download in ESRI™ shapefile and Google Earth™ formats, and includes contacts of bedrock geologic units, faults, outcrops, structural geologic information, geochemical data, and photographs.
